Result:       FAILURE
Tests:        0 failed / 0 succeeded
Started:      2020-04-23 14:22
Elapsed:      17m34s
Version:
E2E:Machine:  e2-standard-8
E2E:MaxNodes: 4
E2E:MinNodes: 4
E2E:Region:   us-central1
E2E:Version:  1.15.11-gke.9

No Test Failures! (No individual tests ran: the job failed during environment setup, before the suite started. See the error lines below.)


Error lines from build-log.txt

... skipping 109 lines ...
Context "gke_knative-boskos-15_us-central1_kserving-e2e-cls1253328154139824129" modified.
- gcloud project is knative-boskos-15
- gcloud user is prow-job@knative-tests.iam.gserviceaccount.com
- Cluster is gke_knative-boskos-15_us-central1_kserving-e2e-cls1253328154139824129
- Docker is gcr.io/knative-boskos-15/kserving-e2e-img/18415
>> Creating def7649f-1877-4c8d-9f59-6a37d83674ab namespace if it does not exist
Error from server (NotFound): namespaces "def7649f-1877-4c8d-9f59-6a37d83674ab" not found
namespace/def7649f-1877-4c8d-9f59-6a37d83674ab created
>> Installing Knative CRD
2020/04/23 14:25:43 Using base gcr.io/distroless/static:nonroot for knative.dev/serving/cmd/queue
2020/04/23 14:25:43 Using base gcr.io/distroless/static:nonroot for knative.dev/serving/cmd/controller
2020/04/23 14:25:43 Using base gcr.io/distroless/static:nonroot for knative.dev/serving/cmd/autoscaler
2020/04/23 14:25:43 Using base gcr.io/distroless/static:nonroot for knative.dev/serving/cmd/webhook
... skipping 306 lines ...
2020/04/23 14:33:26 Published gcr.io/knative-boskos-15/kserving-e2e-img/18415/wsserver-hostname@sha256:8a4061f068a8022ac153ef38c8e01e904885d05d90c7fa3b796a54e282078efc
2020/04/23 14:33:26 Published gcr.io/knative-boskos-15/kserving-e2e-img/18415/runtime@sha256:d04f7ac76428de15f6f55bc1f7454debb06cfd6bdfc3dd8b9b5073afca70a3bc
2020/04/23 14:33:26 Published gcr.io/knative-boskos-15/kserving-e2e-img/18415/timeout@sha256:cc0c0bb299bf2f9c5deff3d2e0aa732112c93ea27c4a80d6e58f23ce525a6c28
>> Waiting for Serving components to be running...
Waiting until all pods in namespace def7649f-1877-4c8d-9f59-6a37d83674ab are up...

ERROR: timeout waiting for pods to come up
NAME                                      READY STATUS             RESTARTS AGE
activator-5474b468bc-2n9ds                0/1   CrashLoopBackOff   6     6m26s
activator-5474b468bc-dvwv4                0/1   CrashLoopBackOff   6     6m27s
activator-5474b468bc-f4bvt                0/1   CrashLoopBackOff   6     6m27s
activator-5474b468bc-fbpqc                0/1   CrashLoopBackOff   6     6m27s
activator-5474b468bc-ldp5x                0/1   CrashLoopBackOff   6     6m42s
activator-5474b468bc-mcvzt                0/1   CrashLoopBackOff   6     6m27s
... skipping 11 lines ...
controller-85cb7588f6-8x8h4               0/1   CrashLoopBackOff   6     6m41s
networking-certmanager-6bf59bd456-rv69p   0/1   CrashLoopBackOff   6     6m40s
networking-istio-645677f495-8lp95         1/1   Running            0     6m41s
webhook-f65866fc4-mdqr2                   0/1   CrashLoopBackOff   6     6m41s
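
Every Serving component built and published for this run is in CrashLoopBackOff, while networking-istio (a stock image) is Running, which points at the freshly built images rather than the cluster itself. A minimal triage sketch against the test cluster (the namespace and pod name are taken verbatim from the output above):

  # Show events and the last termination state for one crashing pod
  kubectl describe pod activator-5474b468bc-2n9ds -n def7649f-1877-4c8d-9f59-6a37d83674ab
  # Fetch logs from the previous (crashed) container instance
  kubectl logs activator-5474b468bc-2n9ds -n def7649f-1877-4c8d-9f59-6a37d83674ab --previous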


Failed Pod (data in YAML format) - activator-5474b468bc-2n9ds

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
... skipping 149 lines ...
    imageID: docker-pullable://gcr.io/knative-boskos-15/kserving-e2e-img/18415/knative.dev/serving/cmd/activator@sha256:af80ab3e16b9d3532cff2fe8786b24f10bb62d3d9e6497e9dc2036dea0272e8b
    lastState:
      terminated:
        containerID: docker://d79ecf944669cabbeff10217e1f47127d4f4c4e211d9346c3b69911fc5396458
        exitCode: 1
        finishedAt: "2020-04-23T14:38:03Z"
        reason: Error
        startedAt: "2020-04-23T14:38:03Z"
    name: activator
    ready: false
    restartCount: 6
    state:
      waiting:
        message: Back-off 5m0s restarting failed container=activator pod=activator-5474b468bc-2n9ds_def7649f-1877-4c8d-9f59-6a37d83674ab(6f9d3db4-1a34-4ac4-b147-db153b8a7427)
        reason: CrashLoopBackOff
  hostIP: 10.128.0.11
  phase: Running
  podIP: 10.12.10.4
  qosClass: Burstable
  startTime: "2020-04-23T14:32:15Z"


Pod Logs

standard_init_linux.go:211: exec user process caused "no such file or directory"
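
This standard_init_linux exec failure is the root cause: the container runtime could not exec the entrypoint binary at all. With a gcr.io/distroless/static:nonroot base (see the ko output above), which ships no libc and no dynamic loader, the usual culprit is a dynamically linked or wrong-architecture binary. A quick check, as a sketch (the local binary path and build invocation are illustrative, not taken from this job):

  # "dynamically linked" or a foreign architecture here will fail to exec
  # on distroless/static with exactly this error
  file ./activator
  # Rebuilding with cgo disabled yields a statically linked binary
  CGO_ENABLED=0 go build -o ./activator ./cmd/activator
  file ./activator   # should now report "statically linked"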
ERROR: test setup failed
***************************************
***         E2E TEST FAILED         ***
***    Start of information dump    ***
***************************************
>>> The dump is located at /logs/artifacts/k8s.dump-e2e-tests.sh.txt
error: unable to retrieve the complete list of server APIs: custom.metrics.k8s.io/v1beta1: the server is currently unable to handle the request
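
This dump-time error is a symptom, not the cause: the server backing the custom.metrics.k8s.io aggregated API is unavailable, which is consistent with the crashed pods above; it only makes the object dump incomplete. A way to confirm, as a sketch (APIService objects are named version.group):

  # Surface aggregated APIs whose backing service is unavailable
  kubectl get apiservices | grep -i false
  # Inspect the failing one directly
  kubectl get apiservice v1beta1.custom.metrics.k8s.io -o yaml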
>>> adapters.config.istio.io (0 objects)
>>> apiservices.apiregistration.k8s.io (49 objects)
>>> attributemanifests.config.istio.io (0 objects)
>>> authorizationpolicies.security.istio.io (0 objects)
>>> backendconfigs.cloud.google.com (0 objects)
>>> capacityrequests.internal.autoscaling.k8s.io (0 objects)
... skipping 86 lines ...
>>> templates.config.istio.io (0 objects)
>>> updateinfos.nodemanagement.gke.io (0 objects)
>>> validatingwebhookconfigurations.admissionregistration.k8s.io (4 objects)
>>> virtualservices.networking.istio.io (0 objects)
>>> volumeattachments.storage.k8s.io (0 objects)
***************************************
***         E2E TEST FAILED         ***
***     End of information dump     ***
***************************************
2020/04/23 14:39:35 process.go:155: Step '/tmp/kubernetes.MilEDq6x6p/e2e-test.sh' finished in 13m58.281298993s
2020/04/23 14:39:35 main.go:316: Something went wrong: encountered 1 errors: [error during /tmp/kubernetes.MilEDq6x6p/e2e-test.sh: exit status 1]
Test subprocess exited with code 1
Artifacts were written to /logs/artifacts
Test result code is 1
+ EXIT_VALUE=1
+ set +o xtrace