Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2019-10-17 08:31
Elapsed: 13m0s
Version:
E2E:Machine: n1-standard-4
E2E:MaxNodes: 3
E2E:MinNodes: 1
E2E:Region: us-central1
E2E:Version: 1.14.6-gke.13

No Test Failures!


Error lines from build-log.txt

... skipping 635 lines ...
	/usr/local/go/src/github.com/knative/eventing-sources/test (from $GOROOT)
	/home/prow/go/src/github.com/knative/eventing-sources/test (from $GOPATH)
---------------------------------------------------
---- Checking autogenerated code is up-to-date ----
---------------------------------------------------
Generating deepcopy funcs
F1017 08:32:25.524751    3353 main.go:81] Error: Failed making a parser: unable to add directory "github.com/knative/eventing-sources/pkg/apis/sources/v1alpha1": unable to import "github.com/knative/eventing-sources/pkg/apis/sources/v1alpha1": cannot find package "github.com/knative/eventing-sources/pkg/apis/sources/v1alpha1" in any of:
	/home/prow/go/src/github.com/knative/eventing-contrib/vendor/github.com/knative/eventing-sources/pkg/apis/sources/v1alpha1 (vendor tree)
	/usr/local/go/src/github.com/knative/eventing-sources/pkg/apis/sources/v1alpha1 (from $GOROOT)
	/home/prow/go/src/github.com/knative/eventing-sources/pkg/apis/sources/v1alpha1 (from $GOPATH)
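The deepcopy generator aborts because the import path resolves nowhere: not the vendor tree, not GOROOT, not GOPATH. Around this time the knative/eventing-sources packages were being folded into knative/eventing-contrib, so a stale import path pointing at the old repo is a plausible cause. A minimal sketch of checking whether the path is actually vendored (repo layout assumed, not taken from this log):

```shell
# Hypothetical check, using the package path from the error above:
# is the missing package present in the repo's vendor tree?
repo="."   # in practice: the eventing-contrib checkout root
pkg="github.com/knative/eventing-sources/pkg/apis/sources/v1alpha1"
if [ -d "$repo/vendor/$pkg" ]; then
  echo "vendored: $pkg"
else
  echo "MISSING from vendor tree: $pkg"
fi
```

The same missing-package error recurs in the license check and unit-test phases below, which is consistent with a single unresolved dependency rather than three independent failures.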
-----------------------------------------
---- Checking for forbidden licenses ----
-----------------------------------------
2019/10/17 08:32:34 Error collecting transitive dependencies: github.com/knative/eventing-contrib/cmd/awssqs_receive_adapter
 -> cannot find package "github.com/knative/eventing-sources/pkg/adapter/awssqs" in any of:
	/home/prow/go/src/github.com/knative/eventing-contrib/vendor/github.com/knative/eventing-sources/pkg/adapter/awssqs (vendor tree)
	/usr/local/go/src/github.com/knative/eventing-sources/pkg/adapter/awssqs (from $GOROOT)
	/home/prow/go/src/github.com/knative/eventing-sources/pkg/adapter/awssqs (from $GOPATH)
============================
==== BUILD TESTS FAILED ====
============================
============================
==== RUNNING UNIT TESTS ====
============================
Running tests with 'go test -race -v  ./... '
cmd/awssqs_receive_adapter/main.go:24:2: cannot find package "github.com/knative/eventing-sources/pkg/adapter/awssqs" in any of:
... skipping 268 lines ...
	/home/prow/go/src/github.com/knative/eventing-contrib/vendor/github.com/knative/eventing-sources/test (vendor tree)
	/usr/local/go/src/github.com/knative/eventing-sources/test (from $GOROOT)
	/home/prow/go/src/github.com/knative/eventing-sources/test (from $GOPATH)
Finished run, return code is 1
XML report written to /logs/artifacts/junit_H5boEqC8.xml
===========================
==== UNIT TESTS FAILED ====
===========================
===================================
==== RUNNING INTEGRATION TESTS ====
===================================
Running integration test test/e2e-tests.sh
Cluster will have a minimum of 1 and a maximum of 3 nodes.
... skipping 447 lines ...
customresourcedefinition.apiextensions.k8s.io/githubsources.sources.eventing.knative.dev created
customresourcedefinition.apiextensions.k8s.io/kuberneteseventsources.sources.eventing.knative.dev created
service/controller created
statefulset.apps/controller-manager created
Waiting until all pods in namespace knative-sources are up......................................................................................................................................................

ERROR: timeout waiting for pods to come up
controller-manager-0   0/1   ImagePullBackOff   0     5m23s
ERROR: Eventing Sources did not come up
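The `ImagePullBackOff` status above is why the wait timed out: the controller-manager image could not be pulled, so the pod never became ready. A minimal sketch (column layout assumed from the dump; the sample line is copied from it) of flagging such pods in `kubectl get pods` output:

```shell
# Sample line copied from the failure above; in practice this would be
# piped from `kubectl get pods -n knative-sources --no-headers`.
pods='controller-manager-0   0/1   ImagePullBackOff   0     5m23s'
echo "$pods" | awk '$3 == "ImagePullBackOff" || $3 == "ErrImagePull" {
  print "image pull failing for pod: " $1
}'
```

From there, `kubectl describe pod` on the flagged pod would typically show whether the pull failed on a bad image reference or on registry access.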
***************************************
***         E2E TEST FAILED         ***
***    Start of information dump    ***
***************************************
>>> All resources:
NAMESPACE          NAME                                                                 READY   STATUS             RESTARTS   AGE
istio-system       pod/cluster-local-gateway-5bf5488bb-w2rj2                            1/1     Running            0          7m4s
istio-system       pod/istio-citadel-78fd647cf-fd5kb                                    1/1     Running            0          7m5s
... skipping 192 lines ...
default            7m45s       Normal    NodeReady                      node/gke-keventing-contrib-e2-default-pool-d1822cab-v4n4             Node gke-keventing-contrib-e2-default-pool-d1822cab-v4n4 status is now: NodeReady
default            7m42s       Normal    RegisteredNode                 node/gke-keventing-contrib-e2-default-pool-d1822cab-v4n4             Node gke-keventing-contrib-e2-default-pool-d1822cab-v4n4 event: Registered Node gke-keventing-contrib-e2-default-pool-d1822cab-v4n4 in Controller
default            7m40s       Normal    Starting                       node/gke-keventing-contrib-e2-default-pool-d1822cab-v4n4             Starting kube-proxy.
default            5m47s       Normal    CcpReconciled                  clusterchannelprovisioner/in-memory-channel                          ClusterChannelProvisioner reconciled: "in-memory-channel"
default            5m47s       Normal    CcpReconciled                  clusterchannelprovisioner/in-memory                                  ClusterChannelProvisioner reconciled: "in-memory"
istio-system       7m5s        Normal    Scheduled                      pod/cluster-local-gateway-5bf5488bb-w2rj2                            Successfully assigned istio-system/cluster-local-gateway-5bf5488bb-w2rj2 to gke-keventing-contrib-e2-default-pool-89c6990e-6wj4
istio-system       7m4s        Warning   FailedMount                    pod/cluster-local-gateway-5bf5488bb-w2rj2                            MountVolume.SetUp failed for volume "cluster-local-gateway-service-account-token-hllwf" : couldn't propagate object cache: timed out waiting for the condition
istio-system       7m4s        Warning   FailedMount                    pod/cluster-local-gateway-5bf5488bb-w2rj2                            MountVolume.SetUp failed for volume "istio-certs" : couldn't propagate object cache: timed out waiting for the condition
istio-system       7m4s        Warning   FailedMount                    pod/cluster-local-gateway-5bf5488bb-w2rj2                            MountVolume.SetUp failed for volume "clusterlocalgateway-certs" : couldn't propagate object cache: timed out waiting for the condition
istio-system       7m2s        Normal    Pulling                        pod/cluster-local-gateway-5bf5488bb-w2rj2                            Pulling image "docker.io/istio/proxyv2:1.0.7"
istio-system       6m53s       Normal    Pulled                         pod/cluster-local-gateway-5bf5488bb-w2rj2                            Successfully pulled image "docker.io/istio/proxyv2:1.0.7"
istio-system       6m50s       Normal    Created                        pod/cluster-local-gateway-5bf5488bb-w2rj2                            Created container istio-proxy
istio-system       6m50s       Normal    Started                        pod/cluster-local-gateway-5bf5488bb-w2rj2                            Started container istio-proxy
istio-system       7m5s        Normal    SuccessfulCreate               replicaset/cluster-local-gateway-5bf5488bb                           Created pod: cluster-local-gateway-5bf5488bb-w2rj2
istio-system       7m5s        Normal    ScalingReplicaSet              deployment/cluster-local-gateway                                     Scaled up replica set cluster-local-gateway-5bf5488bb to 1
istio-system       6m50s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/cluster-local-gateway                        unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
istio-system       6m50s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/cluster-local-gateway                        failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
istio-system       6m35s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/cluster-local-gateway                        unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       6m35s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/cluster-local-gateway                        failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       6m4s        Warning   FailedGetResourceMetric        horizontalpodautoscaler/cluster-local-gateway                        did not receive metrics for any ready pods
istio-system       6m4s        Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/cluster-local-gateway                        failed to get cpu utilization: did not receive metrics for any ready pods
istio-system       7m5s        Normal    Scheduled                      pod/istio-citadel-78fd647cf-fd5kb                                    Successfully assigned istio-system/istio-citadel-78fd647cf-fd5kb to gke-keventing-contrib-e2-default-pool-89c6990e-6wj4
istio-system       7m5s        Normal    Pulling                        pod/istio-citadel-78fd647cf-fd5kb                                    Pulling image "docker.io/istio/citadel:1.0.7"
istio-system       7m1s        Normal    Pulled                         pod/istio-citadel-78fd647cf-fd5kb                                    Successfully pulled image "docker.io/istio/citadel:1.0.7"
istio-system       7m1s        Normal    Created                        pod/istio-citadel-78fd647cf-fd5kb                                    Created container citadel
istio-system       7m          Normal    Started                        pod/istio-citadel-78fd647cf-fd5kb                                    Started container citadel
istio-system       7m6s        Normal    SuccessfulCreate               replicaset/istio-citadel-78fd647cf                                   Created pod: istio-citadel-78fd647cf-fd5kb
... skipping 9 lines ...
istio-system       6m44s       Normal    Pulled                         pod/istio-egressgateway-5547468b8d-kns6z                             Successfully pulled image "docker.io/istio/proxyv2:1.0.7"
istio-system       6m39s       Normal    Created                        pod/istio-egressgateway-5547468b8d-kns6z                             Created container istio-proxy
istio-system       6m39s       Normal    Started                        pod/istio-egressgateway-5547468b8d-kns6z                             Started container istio-proxy
istio-system       7m6s        Normal    SuccessfulCreate               replicaset/istio-egressgateway-5547468b8d                            Created pod: istio-egressgateway-5547468b8d-kns6z
istio-system       7m7s        Normal    ScalingReplicaSet              deployment/istio-egressgateway                                       Scaled up replica set istio-egressgateway-5547468b8d to 1
istio-system       6m51s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-egressgateway                          unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
istio-system       6m51s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-egressgateway                          failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
istio-system       6m4s        Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-egressgateway                          unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       6m4s        Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-egressgateway                          failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       7m6s        Normal    Scheduled                      pod/istio-galley-64ddbbff8b-sq8gt                                    Successfully assigned istio-system/istio-galley-64ddbbff8b-sq8gt to gke-keventing-contrib-e2-default-pool-89c6990e-6wj4
istio-system       6m51s       Warning   FailedMount                    pod/istio-galley-64ddbbff8b-sq8gt                                    MountVolume.SetUp failed for volume "certs" : secret "istio.istio-galley-service-account" not found
istio-system       6m34s       Normal    Pulling                        pod/istio-galley-64ddbbff8b-sq8gt                                    Pulling image "docker.io/istio/galley:1.0.7"
istio-system       6m32s       Normal    Pulled                         pod/istio-galley-64ddbbff8b-sq8gt                                    Successfully pulled image "docker.io/istio/galley:1.0.7"
istio-system       6m31s       Normal    Created                        pod/istio-galley-64ddbbff8b-sq8gt                                    Created container validator
istio-system       6m31s       Normal    Started                        pod/istio-galley-64ddbbff8b-sq8gt                                    Started container validator
istio-system       7m6s        Normal    SuccessfulCreate               replicaset/istio-galley-64ddbbff8b                                   Created pod: istio-galley-64ddbbff8b-sq8gt
istio-system       7m7s        Normal    ScalingReplicaSet              deployment/istio-galley                                              Scaled up replica set istio-galley-64ddbbff8b to 1
... skipping 3 lines ...
istio-system       6m50s       Normal    Created                        pod/istio-ingressgateway-686d54b4bf-jql9z                            Created container istio-proxy
istio-system       6m50s       Normal    Started                        pod/istio-ingressgateway-686d54b4bf-jql9z                            Started container istio-proxy
istio-system       7m5s        Normal    SuccessfulCreate               replicaset/istio-ingressgateway-686d54b4bf                           Created pod: istio-ingressgateway-686d54b4bf-jql9z
istio-system       7m6s        Normal    ScalingReplicaSet              deployment/istio-ingressgateway                                      Scaled up replica set istio-ingressgateway-686d54b4bf to 1
istio-system       7m6s        Normal    EnsuringLoadBalancer           service/istio-ingressgateway                                         Ensuring load balancer
istio-system       6m51s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-ingressgateway                         unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
istio-system       6m51s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-ingressgateway                         failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
istio-system       6m35s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-ingressgateway                         unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       6m35s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-ingressgateway                         failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       6m21s       Normal    UpdatedLoadBalancer            service/istio-ingressgateway                                         Updated load balancer with new hosts
istio-system       6m4s        Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-ingressgateway                         did not receive metrics for any ready pods
istio-system       6m4s        Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-ingressgateway                         failed to get cpu utilization: did not receive metrics for any ready pods
istio-system       6m4s        Normal    EnsuredLoadBalancer            service/istio-ingressgateway                                         Ensured load balancer
istio-system       7m5s        Normal    Scheduled                      pod/istio-pilot-7bd454bf5c-gz2b2                                     Successfully assigned istio-system/istio-pilot-7bd454bf5c-gz2b2 to gke-keventing-contrib-e2-default-pool-03c3e5a7-v2ht
istio-system       7m2s        Normal    Pulling                        pod/istio-pilot-7bd454bf5c-gz2b2                                     Pulling image "docker.io/istio/pilot:1.0.7"
istio-system       6m35s       Normal    Pulled                         pod/istio-pilot-7bd454bf5c-gz2b2                                     Successfully pulled image "docker.io/istio/pilot:1.0.7"
istio-system       6m35s       Normal    Created                        pod/istio-pilot-7bd454bf5c-gz2b2                                     Created container discovery
istio-system       6m35s       Normal    Started                        pod/istio-pilot-7bd454bf5c-gz2b2                                     Started container discovery
... skipping 21 lines ...
istio-system       6m51s       Normal    SuccessfulCreate               replicaset/istio-pilot-7bd454bf5c                                    Created pod: istio-pilot-7bd454bf5c-ldvp6
istio-system       6m51s       Normal    SuccessfulCreate               replicaset/istio-pilot-7bd454bf5c                                    Created pod: istio-pilot-7bd454bf5c-nprkw
istio-system       7m6s        Normal    ScalingReplicaSet              deployment/istio-pilot                                               Scaled up replica set istio-pilot-7bd454bf5c to 1
istio-system       6m51s       Normal    SuccessfulRescale              horizontalpodautoscaler/istio-pilot                                  New size: 3; reason: Current number of replicas below Spec.MinReplicas
istio-system       6m51s       Normal    ScalingReplicaSet              deployment/istio-pilot                                               Scaled up replica set istio-pilot-7bd454bf5c to 3
istio-system       6m35s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-pilot                                  unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       6m35s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-pilot                                  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       5m34s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-pilot                                  did not receive metrics for any ready pods
istio-system       5m34s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-pilot                                  failed to get cpu utilization: did not receive metrics for any ready pods
istio-system       7m6s        Normal    Scheduled                      pod/istio-policy-64b5ff6bd8-cw85f                                    Successfully assigned istio-system/istio-policy-64b5ff6bd8-cw85f to gke-keventing-contrib-e2-default-pool-89c6990e-6wj4
istio-system       7m5s        Normal    Pulling                        pod/istio-policy-64b5ff6bd8-cw85f                                    Pulling image "docker.io/istio/mixer:1.0.7"
istio-system       7m3s        Normal    Pulled                         pod/istio-policy-64b5ff6bd8-cw85f                                    Successfully pulled image "docker.io/istio/mixer:1.0.7"
istio-system       7m3s        Normal    Created                        pod/istio-policy-64b5ff6bd8-cw85f                                    Created container mixer
istio-system       7m2s        Normal    Started                        pod/istio-policy-64b5ff6bd8-cw85f                                    Started container mixer
istio-system       7m2s        Normal    Pulling                        pod/istio-policy-64b5ff6bd8-cw85f                                    Pulling image "docker.io/istio/proxyv2:1.0.7"
istio-system       6m53s       Normal    Pulled                         pod/istio-policy-64b5ff6bd8-cw85f                                    Successfully pulled image "docker.io/istio/proxyv2:1.0.7"
istio-system       6m50s       Normal    Created                        pod/istio-policy-64b5ff6bd8-cw85f                                    Created container istio-proxy
istio-system       6m50s       Normal    Started                        pod/istio-policy-64b5ff6bd8-cw85f                                    Started container istio-proxy
istio-system       7m6s        Normal    SuccessfulCreate               replicaset/istio-policy-64b5ff6bd8                                   Created pod: istio-policy-64b5ff6bd8-cw85f
istio-system       7m6s        Normal    ScalingReplicaSet              deployment/istio-policy                                              Scaled up replica set istio-policy-64b5ff6bd8 to 1
istio-system       6m51s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-policy                                 unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
istio-system       6m51s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-policy                                 failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
istio-system       6m35s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-policy                                 unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       6m35s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-policy                                 failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       6m4s        Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-policy                                 did not receive metrics for any ready pods
istio-system       6m4s        Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-policy                                 failed to get cpu utilization: did not receive metrics for any ready pods
istio-system       7m7s        Normal    Scheduled                      pod/istio-security-post-install-j5zgf                                Successfully assigned istio-system/istio-security-post-install-j5zgf to gke-keventing-contrib-e2-default-pool-03c3e5a7-v2ht
istio-system       7m7s        Normal    Pulling                        pod/istio-security-post-install-j5zgf                                Pulling image "quay.io/coreos/hyperkube:v1.7.6_coreos.0"
istio-system       6m53s       Normal    Pulled                         pod/istio-security-post-install-j5zgf                                Successfully pulled image "quay.io/coreos/hyperkube:v1.7.6_coreos.0"
istio-system       6m48s       Normal    Created                        pod/istio-security-post-install-j5zgf                                Created container hyperkube
istio-system       6m48s       Normal    Started                        pod/istio-security-post-install-j5zgf                                Started container hyperkube
istio-system       7m7s        Normal    SuccessfulCreate               job/istio-security-post-install                                      Created pod: istio-security-post-install-j5zgf
istio-system       7m5s        Normal    Scheduled                      pod/istio-sidecar-injector-6956c47586-hsxjs                          Successfully assigned istio-system/istio-sidecar-injector-6956c47586-hsxjs to gke-keventing-contrib-e2-default-pool-89c6990e-6wj4
istio-system       6m49s       Warning   FailedMount                    pod/istio-sidecar-injector-6956c47586-hsxjs                          MountVolume.SetUp failed for volume "certs" : secret "istio.istio-sidecar-injector-service-account" not found
istio-system       6m33s       Normal    Pulling                        pod/istio-sidecar-injector-6956c47586-hsxjs                          Pulling image "docker.io/istio/sidecar_injector:1.0.7"
istio-system       6m30s       Normal    Pulled                         pod/istio-sidecar-injector-6956c47586-hsxjs                          Successfully pulled image "docker.io/istio/sidecar_injector:1.0.7"
istio-system       6m30s       Normal    Created                        pod/istio-sidecar-injector-6956c47586-hsxjs                          Created container sidecar-injector-webhook
istio-system       6m30s       Normal    Started                        pod/istio-sidecar-injector-6956c47586-hsxjs                          Started container sidecar-injector-webhook
istio-system       7m5s        Normal    SuccessfulCreate               replicaset/istio-sidecar-injector-6956c47586                         Created pod: istio-sidecar-injector-6956c47586-hsxjs
istio-system       7m6s        Normal    ScalingReplicaSet              deployment/istio-sidecar-injector                                    Scaled up replica set istio-sidecar-injector-6956c47586 to 1
... skipping 5 lines ...
istio-system       6m50s       Normal    Pulled                         pod/istio-telemetry-55b97c59c4-ngk87                                 Container image "docker.io/istio/proxyv2:1.0.7" already present on machine
istio-system       6m50s       Normal    Created                        pod/istio-telemetry-55b97c59c4-ngk87                                 Created container istio-proxy
istio-system       6m50s       Normal    Started                        pod/istio-telemetry-55b97c59c4-ngk87                                 Started container istio-proxy
istio-system       7m5s        Normal    SuccessfulCreate               replicaset/istio-telemetry-55b97c59c4                                Created pod: istio-telemetry-55b97c59c4-ngk87
istio-system       7m6s        Normal    ScalingReplicaSet              deployment/istio-telemetry                                           Scaled up replica set istio-telemetry-55b97c59c4 to 1
istio-system       6m51s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-telemetry                              unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
istio-system       6m51s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-telemetry                              failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
istio-system       6m35s       Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-telemetry                              unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       6m35s       Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-telemetry                              failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
istio-system       6m4s        Warning   FailedGetResourceMetric        horizontalpodautoscaler/istio-telemetry                              did not receive metrics for any ready pods
istio-system       6m4s        Warning   FailedComputeMetricsReplicas   horizontalpodautoscaler/istio-telemetry                              failed to get cpu utilization: did not receive metrics for any ready pods
knative-eventing   5m51s       Normal    Scheduled                      pod/eventing-controller-f747584b7-lwct6                              Successfully assigned knative-eventing/eventing-controller-f747584b7-lwct6 to gke-keventing-contrib-e2-default-pool-d1822cab-v4n4
knative-eventing   5m50s       Normal    Pulling                        pod/eventing-controller-f747584b7-lwct6                              Pulling image "gcr.io/knative-releases/github.com/knative/eventing/cmd/controller@sha256:de1727c9969d369f2c3c7d628c7b8c46937315ffaaf9fe3ca242ae2a1965f744"
knative-eventing   5m49s       Normal    Pulled                         pod/eventing-controller-f747584b7-lwct6                              Successfully pulled image "gcr.io/knative-releases/github.com/knative/eventing/cmd/controller@sha256:de1727c9969d369f2c3c7d628c7b8c46937315ffaaf9fe3ca242ae2a1965f744"
knative-eventing   5m49s       Normal    Created                        pod/eventing-controller-f747584b7-lwct6                              Created container eventing-controller
knative-eventing   5m49s       Normal    Started                        pod/eventing-controller-f747584b7-lwct6                              Started container eventing-controller
knative-eventing   5m51s       Normal    SuccessfulCreate               replicaset/eventing-controller-f747584b7                             Created pod: eventing-controller-f747584b7-lwct6
... skipping 15 lines ...
knative-eventing   5m29s       Normal    Created                        pod/in-memory-channel-dispatcher-9f7d664d4-p2psr                     Created container dispatcher
knative-eventing   5m29s       Normal    Started                        pod/in-memory-channel-dispatcher-9f7d664d4-p2psr                     Started container dispatcher
knative-eventing   5m45s       Normal    Pulled                         pod/in-memory-channel-dispatcher-9f7d664d4-p2psr                     Container image "docker.io/istio/proxyv2:1.0.7" already present on machine
knative-eventing   5m45s       Normal    Created                        pod/in-memory-channel-dispatcher-9f7d664d4-p2psr                     Created container istio-proxy
knative-eventing   5m45s       Normal    Started                        pod/in-memory-channel-dispatcher-9f7d664d4-p2psr                     Started container istio-proxy
knative-eventing   5m29s       Normal    Pulled                         pod/in-memory-channel-dispatcher-9f7d664d4-p2psr                     Container image "gcr.io/knative-releases/github.com/knative/eventing/cmd/fanoutsidecar@sha256:f388c5226fb7f29b74038bef5591de05820bcbf7c13e7f5e202f1c5f9d9ab224" already present on machine
knative-eventing   5m42s       Warning   BackOff                        pod/in-memory-channel-dispatcher-9f7d664d4-p2psr                     Back-off restarting failed container
knative-eventing   5m51s       Normal    SuccessfulCreate               replicaset/in-memory-channel-dispatcher-9f7d664d4                    Created pod: in-memory-channel-dispatcher-9f7d664d4-p2psr
knative-eventing   5m51s       Normal    ScalingReplicaSet              deployment/in-memory-channel-dispatcher                              Scaled up replica set in-memory-channel-dispatcher-9f7d664d4 to 1
knative-eventing   5m51s       Normal    Scheduled                      pod/webhook-5fd955f896-2zf2j                                         Successfully assigned knative-eventing/webhook-5fd955f896-2zf2j to gke-keventing-contrib-e2-default-pool-03c3e5a7-v2ht
knative-eventing   5m50s       Normal    Pulling                        pod/webhook-5fd955f896-2zf2j                                         Pulling image "gcr.io/knative-releases/github.com/knative/eventing/cmd/webhook@sha256:3c0f22b9f9bd9608f804c6b3b8cbef9cc8ebc54bb67d966d3e047f377feb4ccb"
knative-eventing   5m49s       Normal    Pulled                         pod/webhook-5fd955f896-2zf2j                                         Successfully pulled image "gcr.io/knative-releases/github.com/knative/eventing/cmd/webhook@sha256:3c0f22b9f9bd9608f804c6b3b8cbef9cc8ebc54bb67d966d3e047f377feb4ccb"
knative-eventing   5m49s       Normal    Created                        pod/webhook-5fd955f896-2zf2j                                         Created container webhook
... skipping 9 lines ...
knative-serving    6m7s        Normal    Pulled                         pod/activator-5c79fc6bfd-hqzjj                                       Successfully pulled image "gcr.io/knative-releases/github.com/knative/serving/cmd/activator@sha256:c75dc977b2a4d16f01f89a1741d6895990b7404b03ffb45725a63104d267b74a"
knative-serving    6m7s        Normal    Created                        pod/activator-5c79fc6bfd-hqzjj                                       Created container activator
knative-serving    6m6s        Normal    Started                        pod/activator-5c79fc6bfd-hqzjj                                       Started container activator
knative-serving    6m6s        Normal    Pulled                         pod/activator-5c79fc6bfd-hqzjj                                       Container image "docker.io/istio/proxyv2:1.0.7" already present on machine
knative-serving    6m6s        Normal    Created                        pod/activator-5c79fc6bfd-hqzjj                                       Created container istio-proxy
knative-serving    6m6s        Normal    Started                        pod/activator-5c79fc6bfd-hqzjj                                       Started container istio-proxy
knative-serving    6m5s        Warning   Unhealthy                      pod/activator-5c79fc6bfd-hqzjj                                       Liveness probe failed: HTTP probe failed with statuscode: 503
knative-serving    6m4s        Warning   Unhealthy                      pod/activator-5c79fc6bfd-hqzjj                                       Readiness probe failed: HTTP probe failed with statuscode: 503
knative-serving    6m14s       Normal    SuccessfulCreate               replicaset/activator-5c79fc6bfd                                      Created pod: activator-5c79fc6bfd-hqzjj
knative-serving    6m14s       Normal    ScalingReplicaSet              deployment/activator                                                 Scaled up replica set activator-5c79fc6bfd to 1
knative-serving    6m14s       Normal    Scheduled                      pod/autoscaler-55b8974858-nc6qd                                      Successfully assigned knative-serving/autoscaler-55b8974858-nc6qd to gke-keventing-contrib-e2-default-pool-03c3e5a7-v2ht
knative-serving    6m12s       Normal    Pulling                        pod/autoscaler-55b8974858-nc6qd                                      Pulling image "docker.io/istio/proxy_init:1.0.7"
knative-serving    6m8s        Normal    Pulled                         pod/autoscaler-55b8974858-nc6qd                                      Successfully pulled image "docker.io/istio/proxy_init:1.0.7"
knative-serving    6m8s        Normal    Created                        pod/autoscaler-55b8974858-nc6qd                                      Created container istio-init
... skipping 20 lines ...
knative-serving    6m8s        Normal    Created                        pod/webhook-6975ff7649-62584                                         Created container webhook
knative-serving    6m8s        Normal    Started                        pod/webhook-6975ff7649-62584                                         Started container webhook
knative-serving    6m14s       Normal    SuccessfulCreate               replicaset/webhook-6975ff7649                                        Created pod: webhook-6975ff7649-62584
knative-serving    6m14s       Normal    ScalingReplicaSet              deployment/webhook                                                   Scaled up replica set webhook-6975ff7649 to 1
knative-sources    5m26s       Normal    Scheduled                      pod/controller-manager-0                                             Successfully assigned knative-sources/controller-manager-0 to gke-keventing-contrib-e2-default-pool-03c3e5a7-v2ht
knative-sources    4m4s        Normal    Pulling                        pod/controller-manager-0                                             Pulling image "github.com/knative/eventing-sources/cmd/manager"
knative-sources    4m3s        Warning   Failed                         pod/controller-manager-0                                             Failed to pull image "github.com/knative/eventing-sources/cmd/manager": rpc error: code = Unknown desc = Error response from daemon: error parsing HTTP 404 response body: no error details found in HTTP response body: "{\"error\":\"Not Found\"}"
knative-sources    4m3s        Warning   Failed                         pod/controller-manager-0                                             Error: ErrImagePull
knative-sources    3m39s       Normal    BackOff                        pod/controller-manager-0                                             Back-off pulling image "github.com/knative/eventing-sources/cmd/manager"
knative-sources    17s         Warning   Failed                         pod/controller-manager-0                                             Error: ImagePullBackOff
knative-sources    5m26s       Normal    SuccessfulCreate               statefulset/controller-manager                                       create Pod controller-manager-0 in StatefulSet controller-manager successful
kube-system        7m47s       Warning   ClusterUnhealthy               configmap/cluster-autoscaler-status                                  Cluster has no nodes.
kube-system        7m52s       Normal    LeaderElection                 endpoints/cluster-autoscaler                                         gke-9c26ecd4c3d1b122ddfe-a323-dec7-vm became leader
kube-system        7m55s       Warning   FailedScheduling               pod/event-exporter-v0.2.5-7df89f4b8f-dllw5                           no nodes available to schedule pods
kube-system        7m45s       Normal    Scheduled                      pod/event-exporter-v0.2.5-7df89f4b8f-dllw5                           Successfully assigned kube-system/event-exporter-v0.2.5-7df89f4b8f-dllw5 to gke-keventing-contrib-e2-default-pool-d1822cab-v4n4
kube-system        7m43s       Warning   FailedMount                    pod/event-exporter-v0.2.5-7df89f4b8f-dllw5                           MountVolume.SetUp failed for volume "event-exporter-sa-token-p2626" : couldn't propagate object cache: timed out waiting for the condition
kube-system        7m41s       Normal    Pulling                        pod/event-exporter-v0.2.5-7df89f4b8f-dllw5                           Pulling image "k8s.gcr.io/event-exporter:v0.2.5"
kube-system        7m32s       Normal    Pulled                         pod/event-exporter-v0.2.5-7df89f4b8f-dllw5                           Successfully pulled image "k8s.gcr.io/event-exporter:v0.2.5"
kube-system        7m31s       Normal    Created                        pod/event-exporter-v0.2.5-7df89f4b8f-dllw5                           Created container event-exporter
kube-system        7m31s       Normal    Started                        pod/event-exporter-v0.2.5-7df89f4b8f-dllw5                           Started container event-exporter
kube-system        7m31s       Normal    Pulled                         pod/event-exporter-v0.2.5-7df89f4b8f-dllw5                           Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system        7m29s       Normal    Created                        pod/event-exporter-v0.2.5-7df89f4b8f-dllw5                           Created container prometheus-to-sd-exporter
kube-system        7m28s       Normal    Started                        pod/event-exporter-v0.2.5-7df89f4b8f-dllw5                           Started container prometheus-to-sd-exporter
kube-system        7m57s       Normal    SuccessfulCreate               replicaset/event-exporter-v0.2.5-7df89f4b8f                          Created pod: event-exporter-v0.2.5-7df89f4b8f-dllw5
kube-system        7m57s       Normal    ScalingReplicaSet              deployment/event-exporter-v0.2.5                                     Scaled up replica set event-exporter-v0.2.5-7df89f4b8f to 1
kube-system        7m53s       Warning   FailedScheduling               pod/fluentd-gcp-scaler-54ccb89d5-s7vbb                               no nodes available to schedule pods
kube-system        7m45s       Normal    Scheduled                      pod/fluentd-gcp-scaler-54ccb89d5-s7vbb                               Successfully assigned kube-system/fluentd-gcp-scaler-54ccb89d5-s7vbb to gke-keventing-contrib-e2-default-pool-d1822cab-v4n4
kube-system        7m43s       Warning   FailedMount                    pod/fluentd-gcp-scaler-54ccb89d5-s7vbb                               MountVolume.SetUp failed for volume "fluentd-gcp-scaler-token-6dg6z" : couldn't propagate object cache: timed out waiting for the condition
kube-system        7m41s       Normal    Pulled                         pod/fluentd-gcp-scaler-54ccb89d5-s7vbb                               Container image "k8s.gcr.io/fluentd-gcp-scaler:0.5.2" already present on machine
kube-system        7m40s       Normal    Created                        pod/fluentd-gcp-scaler-54ccb89d5-s7vbb                               Created container fluentd-gcp-scaler
kube-system        7m39s       Normal    Started                        pod/fluentd-gcp-scaler-54ccb89d5-s7vbb                               Started container fluentd-gcp-scaler
kube-system        7m53s       Normal    SuccessfulCreate               replicaset/fluentd-gcp-scaler-54ccb89d5                              Created pod: fluentd-gcp-scaler-54ccb89d5-s7vbb
kube-system        7m53s       Normal    ScalingReplicaSet              deployment/fluentd-gcp-scaler                                        Scaled up replica set fluentd-gcp-scaler-54ccb89d5 to 1
kube-system        7m10s       Normal    Scheduled                      pod/fluentd-gcp-v3.1.1-2hdff                                         Successfully assigned kube-system/fluentd-gcp-v3.1.1-2hdff to gke-keventing-contrib-e2-default-pool-89c6990e-6wj4
... skipping 21 lines ...
kube-system        7m32s       Normal    Pulled                         pod/fluentd-gcp-v3.1.1-788s2                                         Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system        7m32s       Normal    Created                        pod/fluentd-gcp-v3.1.1-788s2                                         Created container prometheus-to-sd-exporter
kube-system        7m32s       Normal    Started                        pod/fluentd-gcp-v3.1.1-788s2                                         Started container prometheus-to-sd-exporter
kube-system        7m21s       Normal    Killing                        pod/fluentd-gcp-v3.1.1-788s2                                         Stopping container fluentd-gcp
kube-system        7m21s       Normal    Killing                        pod/fluentd-gcp-v3.1.1-788s2                                         Stopping container prometheus-to-sd-exporter
kube-system        7m45s       Normal    Scheduled                      pod/fluentd-gcp-v3.1.1-j86wp                                         Successfully assigned kube-system/fluentd-gcp-v3.1.1-j86wp to gke-keventing-contrib-e2-default-pool-d1822cab-v4n4
kube-system        7m43s       Warning   FailedMount                    pod/fluentd-gcp-v3.1.1-j86wp                                         MountVolume.SetUp failed for volume "config-volume" : couldn't propagate object cache: timed out waiting for the condition
kube-system        7m43s       Warning   FailedMount                    pod/fluentd-gcp-v3.1.1-j86wp                                         MountVolume.SetUp failed for volume "fluentd-gcp-token-9kvcg" : couldn't propagate object cache: timed out waiting for the condition
kube-system        7m38s       Normal    Pulling                        pod/fluentd-gcp-v3.1.1-j86wp                                         Pulling image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17-16060"
kube-system        7m22s       Normal    Pulled                         pod/fluentd-gcp-v3.1.1-j86wp                                         Successfully pulled image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17-16060"
kube-system        7m22s       Warning   Failed                         pod/fluentd-gcp-v3.1.1-j86wp                                         Error: cannot find volume "varrun" to mount into container "fluentd-gcp"
kube-system        7m22s       Normal    Pulled                         pod/fluentd-gcp-v3.1.1-j86wp                                         Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system        7m22s       Warning   Failed                         pod/fluentd-gcp-v3.1.1-j86wp                                         Error: cannot find volume "fluentd-gcp-token-9kvcg" to mount into container "prometheus-to-sd-exporter"
kube-system        7m25s       Normal    Scheduled                      pod/fluentd-gcp-v3.1.1-kgqts                                         Successfully assigned kube-system/fluentd-gcp-v3.1.1-kgqts to gke-keventing-contrib-e2-default-pool-d1822cab-v4n4
kube-system        7m22s       Normal    Pulled                         pod/fluentd-gcp-v3.1.1-kgqts                                         Container image "gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17-16060" already present on machine
kube-system        7m22s       Normal    Created                        pod/fluentd-gcp-v3.1.1-kgqts                                         Created container fluentd-gcp
kube-system        7m22s       Normal    Started                        pod/fluentd-gcp-v3.1.1-kgqts                                         Started container fluentd-gcp
kube-system        7m22s       Normal    Pulled                         pod/fluentd-gcp-v3.1.1-kgqts                                         Container image "k8s.gcr.io/prometheus-to-sd:v0.5.0" already present on machine
kube-system        7m21s       Normal    Created                        pod/fluentd-gcp-v3.1.1-kgqts                                         Created container prometheus-to-sd-exporter
... skipping 73 lines ...
kube-system        7m34s       Normal    SuccessfulCreate               replicaset/kube-dns-5877696fb4                                       Created pod: kube-dns-5877696fb4-7kr4g
kube-system        7m53s       Warning   FailedScheduling               pod/kube-dns-autoscaler-57d56b4f56-nvcxm                             no nodes available to schedule pods
kube-system        7m45s       Normal    Scheduled                      pod/kube-dns-autoscaler-57d56b4f56-nvcxm                             Successfully assigned kube-system/kube-dns-autoscaler-57d56b4f56-nvcxm to gke-keventing-contrib-e2-default-pool-d1822cab-v4n4
kube-system        7m42s       Normal    Pulled                         pod/kube-dns-autoscaler-57d56b4f56-nvcxm                             Container image "k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0" already present on machine
kube-system        7m41s       Normal    Created                        pod/kube-dns-autoscaler-57d56b4f56-nvcxm                             Created container autoscaler
kube-system        7m40s       Normal    Started                        pod/kube-dns-autoscaler-57d56b4f56-nvcxm                             Started container autoscaler
kube-system        7m56s       Warning   FailedCreate                   replicaset/kube-dns-autoscaler-57d56b4f56                            Error creating: pods "kube-dns-autoscaler-57d56b4f56-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
kube-system        7m53s       Normal    SuccessfulCreate               replicaset/kube-dns-autoscaler-57d56b4f56                            Created pod: kube-dns-autoscaler-57d56b4f56-nvcxm
kube-system        7m59s       Normal    ScalingReplicaSet              deployment/kube-dns-autoscaler                                       Scaled up replica set kube-dns-autoscaler-57d56b4f56 to 1
kube-system        7m59s       Normal    ScalingReplicaSet              deployment/kube-dns                                                  Scaled up replica set kube-dns-5877696fb4 to 1
kube-system        7m34s       Normal    ScalingReplicaSet              deployment/kube-dns                                                  Scaled up replica set kube-dns-5877696fb4 to 2
kube-system        7m43s       Normal    Pulled                         pod/kube-proxy-gke-keventing-contrib-e2-default-pool-03c3e5a7-v2ht   Container image "gke.gcr.io/kube-proxy:v1.14.6-gke.13" already present on machine
kube-system        7m43s       Normal    Created                        pod/kube-proxy-gke-keventing-contrib-e2-default-pool-03c3e5a7-v2ht   Created container kube-proxy
... skipping 41 lines ...
kube-system        7m35s       Normal    Pulled                         pod/prometheus-to-sd-jr224                                           Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.2"
kube-system        7m32s       Normal    Created                        pod/prometheus-to-sd-jr224                                           Created container prometheus-to-sd
kube-system        7m29s       Normal    Started                        pod/prometheus-to-sd-jr224                                           Started container prometheus-to-sd
kube-system        7m45s       Warning   FailedScheduling               pod/prometheus-to-sd-s2n7q                                           0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
kube-system        7m45s       Warning   FailedScheduling               pod/prometheus-to-sd-s2n7q                                           0/2 nodes are available: 1 node(s) didn't match node selector, 1 node(s) had taints that the pod didn't tolerate.
kube-system        7m44s       Normal    Scheduled                      pod/prometheus-to-sd-s2n7q                                           Successfully assigned kube-system/prometheus-to-sd-s2n7q to gke-keventing-contrib-e2-default-pool-d1822cab-v4n4
kube-system        7m43s       Warning   FailedMount                    pod/prometheus-to-sd-s2n7q                                           MountVolume.SetUp failed for volume "prometheus-to-sd-token-j9btc" : couldn't propagate object cache: timed out waiting for the condition
kube-system        7m35s       Normal    Pulling                        pod/prometheus-to-sd-s2n7q                                           Pulling image "k8s.gcr.io/prometheus-to-sd:v0.5.2"
kube-system        7m21s       Normal    Pulled                         pod/prometheus-to-sd-s2n7q                                           Successfully pulled image "k8s.gcr.io/prometheus-to-sd:v0.5.2"
kube-system        7m21s       Normal    Created                        pod/prometheus-to-sd-s2n7q                                           Created container prometheus-to-sd
kube-system        7m17s       Normal    Started                        pod/prometheus-to-sd-s2n7q                                           Started container prometheus-to-sd
kube-system        7m44s       Warning   FailedScheduling               pod/prometheus-to-sd-ts8rw                                           0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 node(s) didn't match node selector.
kube-system        7m44s       Normal    Scheduled                      pod/prometheus-to-sd-ts8rw                                           Successfully assigned kube-system/prometheus-to-sd-ts8rw to gke-keventing-contrib-e2-default-pool-89c6990e-6wj4
... skipping 13 lines ...
kube-system        7m58s       Normal    SuccessfulCreate               replicaset/stackdriver-metadata-agent-cluster-level-865445848        Created pod: stackdriver-metadata-agent-cluster-level-865445848-s5mhq
kube-system        7m58s       Normal    ScalingReplicaSet              deployment/stackdriver-metadata-agent-cluster-level                  Scaled up replica set stackdriver-metadata-agent-cluster-level-865445848 to 1
============================================================
Namespace, Pod, Container: knative-eventing, eventing-controller-f747584b7-lwct6, eventing-controller
{"level":"info","ts":"2019-10-17T08:39:00.931Z","caller":"logging/config.go:96","msg":"Successfully created the logger.","knative.dev/jsonconfig":"{\n  \"level\": \"info\",\n  \"development\": false,\n  \"outputPaths\": [\"stdout\"],\n  \"errorOutputPaths\": [\"stderr\"],\n  \"encoding\": \"json\",\n  \"encoderConfig\": {\n    \"timeKey\": \"ts\",\n    \"levelKey\": \"level\",\n    \"nameKey\": \"logger\",\n    \"callerKey\": \"caller\",\n    \"messageKey\": \"msg\",\n    \"stacktraceKey\": \"stacktrace\",\n    \"lineEnding\": \"\",\n    \"levelEncoder\": \"\",\n    \"timeEncoder\": \"iso8601\",\n    \"durationEncoder\": \"\",\n    \"callerEncoder\": \"\"\n  }\n}\n"}
{"level":"info","ts":"2019-10-17T08:39:00.931Z","caller":"logging/config.go:97","msg":"Logging level set to info"}
{"level":"warn","ts":"2019-10-17T08:39:00.931Z","caller":"logging/config.go:65","msg":"Fetch GitHub commit ID from kodata failed: \"ref: refs/heads/upstream/release-0.5\" is not a valid GitHub commit ID"}
{"level":"info","ts":"2019-10-17T08:39:00.931Z","logger":"controller","caller":"controller/main.go:84","msg":"Starting the controller","knative.dev/controller":"controller"}
{"level":"info","ts":1571301541.3062668,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"subscription-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1571301541.3064663,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"channel-default-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1571301541.306621,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"broker-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1571301541.3067465,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"broker-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1571301541.3067837,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"broker-controller","source":"kind source: /, Kind="}
... skipping 17 lines ...
{"level":"info","ts":1571301541.6086428,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"channel-default-controller","worker count":1}
{"level":"info","ts":1571301541.6086369,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"trigger-controller","worker count":1}
{"level":"info","ts":1571301541.6089206,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"broker-controller","worker count":1}
{"level":"info","ts":1571301541.6089437,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"subscription-controller","worker count":1}
----------------------------------------------------------
Namespace, Pod, Container (Previous instance): knative-eventing, eventing-controller-f747584b7-lwct6, eventing-controller
Error from server (BadRequest): previous terminated container "eventing-controller" in pod "eventing-controller-f747584b7-lwct6" not found
============================================================
Namespace, Pod, Container: knative-eventing, in-memory-channel-controller-5f587cc68c-wldtx, controller
{"level":"info","ts":"2019-10-17T08:39:01.950Z","caller":"logging/config.go:96","msg":"Successfully created the logger.","knative.dev/jsonconfig":"{\n\t\t\"level\": \"info\",\n\t\t\"development\": false,\n\t\t\"outputPaths\": [\"stdout\"],\n\t\t\"errorOutputPaths\": [\"stderr\"],\n\t\t\"encoding\": \"json\",\n\t\t\"encoderConfig\": {\n\t\t\t\"timeKey\": \"ts\",\n\t\t\t\"levelKey\": \"level\",\n\t\t\t\"nameKey\": \"logger\",\n\t\t\t\"callerKey\": \"caller\",\n\t\t\t\"messageKey\": \"msg\",\n\t\t\t\"stacktraceKey\": \"stacktrace\",\n\t\t\t\"lineEnding\": \"\",\n\t\t\t\"levelEncoder\": \"\",\n\t\t\t\"timeEncoder\": \"iso8601\",\n\t\t\t\"durationEncoder\": \"\",\n\t\t\t\"callerEncoder\": \"\"\n\t\t}\n\t}"}
{"level":"info","ts":"2019-10-17T08:39:01.950Z","caller":"logging/config.go:97","msg":"Logging level set to info"}
{"level":"warn","ts":"2019-10-17T08:39:01.950Z","caller":"logging/config.go:65","msg":"Fetch GitHub commit ID from kodata failed: open /var/run/ko/HEAD: no such file or directory"}
{"level":"info","ts":"2019-10-17T08:39:02.302Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:99","msg":"Reconciling ClusterChannelProvisioner.","eventing.knative.dev/clusterChannelProvisioner":"in-memory","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"/in-memory"}
{"level":"info","ts":"2019-10-17T08:39:02.322Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:110","msg":"ClusterChannelProvisioner reconciled","eventing.knative.dev/clusterChannelProvisioner":"in-memory","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"/in-memory"}
{"level":"info","ts":"2019-10-17T08:39:02.334Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:99","msg":"Reconciling ClusterChannelProvisioner.","eventing.knative.dev/clusterChannelProvisioner":"in-memory","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"/in-memory-channel"}
{"level":"info","ts":"2019-10-17T08:39:02.353Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:110","msg":"ClusterChannelProvisioner reconciled","eventing.knative.dev/clusterChannelProvisioner":"in-memory","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"/in-memory-channel"}
{"level":"info","ts":"2019-10-17T08:39:02.372Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:99","msg":"Reconciling ClusterChannelProvisioner.","eventing.knative.dev/clusterChannelProvisioner":"in-memory","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"knative-eventing/in-memory"}
{"level":"info","ts":"2019-10-17T08:39:02.373Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:110","msg":"ClusterChannelProvisioner reconciled","eventing.knative.dev/clusterChannelProvisioner":"in-memory","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"knative-eventing/in-memory"}
... skipping 2 lines ...
{"level":"info","ts":"2019-10-17T08:39:02.373Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:99","msg":"Reconciling ClusterChannelProvisioner.","eventing.knative.dev/clusterChannelProvisioner":"in-memory","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"knative-eventing/in-memory-channel"}
{"level":"info","ts":"2019-10-17T08:39:02.373Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:110","msg":"ClusterChannelProvisioner reconciled","eventing.knative.dev/clusterChannelProvisioner":"in-memory","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"knative-eventing/in-memory-channel"}
{"level":"info","ts":"2019-10-17T08:39:02.374Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:99","msg":"Reconciling ClusterChannelProvisioner.","eventing.knative.dev/clusterChannelProvisioner":"in-memory","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"/in-memory-channel"}
{"level":"info","ts":"2019-10-17T08:39:02.374Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:110","msg":"ClusterChannelProvisioner reconciled","eventing.knative.dev/clusterChannelProvisioner":"in-memory","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"/in-memory-channel"}
----------------------------------------------------------
Namespace, Pod, Container (Previous instance): knative-eventing, in-memory-channel-controller-5f587cc68c-wldtx, controller
Error from server (BadRequest): previous terminated container "controller" in pod "in-memory-channel-controller-5f587cc68c-wldtx" not found
============================================================
Namespace, Pod, Container: knative-eventing, in-memory-channel-dispatcher-9f7d664d4-p2psr, dispatcher
{"level":"info","ts":"2019-10-17T08:39:21.190Z","caller":"fanoutsidecar/main.go:189","msg":"Fanout sidecar listening","address":":8080"}
----------------------------------------------------------
Namespace, Pod, Container (Previous instance): knative-eventing, in-memory-channel-dispatcher-9f7d664d4-p2psr, dispatcher
{"level":"error","ts":"2019-10-17T08:39:05.226Z","caller":"fanoutsidecar/main.go:131","msg":"Error starting manager.","error":"Get https://10.43.240.1:443/api?timeout=32s: dial tcp 10.43.240.1:443: connect: connection refused","stacktrace":"main.setupConfigMapNoticer\n\t/go/src/github.com/knative/eventing/cmd/fanoutsidecar/main.go:131\nmain.main\n\t/go/src/github.com/knative/eventing/cmd/fanoutsidecar/main.go:92\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}
{"level":"fatal","ts":"2019-10-17T08:39:05.226Z","caller":"fanoutsidecar/main.go:94","msg":"Unable to create configMap noticer.","error":"Get https://10.43.240.1:443/api?timeout=32s: dial tcp 10.43.240.1:443: connect: connection refused","stacktrace":"main.main\n\t/go/src/github.com/knative/eventing/cmd/fanoutsidecar/main.go:94\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}
============================================================
Namespace, Pod, Container: knative-eventing, in-memory-channel-dispatcher-9f7d664d4-p2psr, istio-proxy
2019-10-17T08:39:04.810121Z	info	Version mjog@devinstance.c.mixologist-142215.internal-docker.io/istio-6407364027e24ed738dbeca64e82146a7424aba5-dirty-6407364027e24ed738dbeca64e82146a7424aba5-dirty-Modified
2019-10-17T08:39:04.810164Z	info	Proxy role: model.Proxy{ClusterID:"", Type:"sidecar", IPAddress:"10.40.1.11", ID:"in-memory-channel-dispatcher-9f7d664d4-p2psr.knative-eventing", Domain:"knative-eventing.svc.cluster.local", Metadata:map[string]string(nil)}
2019-10-17T08:39:04.810512Z	info	Effective config: binaryPath: /usr/local/bin/envoy
configPath: /etc/istio/proxy
... skipping 123 lines ...
[2019-10-17 08:39:23.547][13][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:500] add/update cluster outbound|443||controller.knative-sources.svc.cluster.local starting warming
[2019-10-17 08:39:23.547][13][info][upstream] external/envoy/source/common/upstream/cluster_manager_impl.cc:512] warming cluster outbound|443||controller.knative-sources.svc.cluster.local complete
[2019-10-17 08:39:23.614][13][info][upstream] external/envoy/source/server/lds_api.cc:80] lds: add/update listener '10.43.253.181_443'
[2019-10-17 08:40:06.362][13][info][main] external/envoy/source/server/drain_manager_impl.cc:63] shutting down parent after drain
----------------------------------------------------------
Namespace, Pod, Container (Previous instance): knative-eventing, in-memory-channel-dispatcher-9f7d664d4-p2psr, istio-proxy
Error from server (BadRequest): previous terminated container "istio-proxy" in pod "in-memory-channel-dispatcher-9f7d664d4-p2psr" not found
============================================================
Namespace, Pod, Container: knative-eventing, webhook-5fd955f896-2zf2j, webhook
{"level":"info","ts":"2019-10-17T08:39:00.782Z","caller":"logging/config.go:96","msg":"Successfully created the logger.","knative.dev/jsonconfig":"{\n  \"level\": \"info\",\n  \"development\": false,\n  \"outputPaths\": [\"stdout\"],\n  \"errorOutputPaths\": [\"stderr\"],\n  \"encoding\": \"json\",\n  \"encoderConfig\": {\n    \"timeKey\": \"ts\",\n    \"levelKey\": \"level\",\n    \"nameKey\": \"logger\",\n    \"callerKey\": \"caller\",\n    \"messageKey\": \"msg\",\n    \"stacktraceKey\": \"stacktrace\",\n    \"lineEnding\": \"\",\n    \"levelEncoder\": \"\",\n    \"timeEncoder\": \"iso8601\",\n    \"durationEncoder\": \"\",\n    \"callerEncoder\": \"\"\n  }\n}\n"}
{"level":"info","ts":"2019-10-17T08:39:00.782Z","caller":"logging/config.go:97","msg":"Logging level set to info"}
{"level":"warn","ts":"2019-10-17T08:39:00.782Z","caller":"logging/config.go:65","msg":"Fetch GitHub commit ID from kodata failed: \"ref: refs/heads/upstream/release-0.5\" is not a valid GitHub commit ID"}
{"level":"info","ts":"2019-10-17T08:39:00.782Z","logger":"webhook","caller":"webhook/main.go:56","msg":"Starting the Eventing Webhook","knative.dev/controller":"webhook"}
{"level":"info","ts":"2019-10-17T08:39:00.799Z","logger":"webhook","caller":"channeldefaulter/channel_defaulter.go:98","msg":"Updated channelDefaulter config","knative.dev/controller":"webhook","role":"channelDefaulter","config":{"namespaceDefaults":{"some-namespace":{"kind":"ClusterChannelProvisioner","name":"some-other-provisioner","apiVersion":"eventing.knative.dev/v1alpha1"}},"clusterDefault":{"kind":"ClusterChannelProvisioner","name":"in-memory","apiVersion":"eventing.knative.dev/v1alpha1"}}}
{"level":"info","ts":"2019-10-17T08:39:00.891Z","logger":"webhook","caller":"webhook/webhook.go:182","msg":"Did not find existing secret, creating one","knative.dev/controller":"webhook"}
{"level":"info","ts":"2019-10-17T08:39:01.244Z","logger":"webhook","caller":"webhook/webhook.go:313","msg":"Found certificates for webhook...","knative.dev/controller":"webhook"}
{"level":"info","ts":"2019-10-17T08:39:01.279Z","logger":"webhook","caller":"webhook/webhook.go:432","msg":"Created a webhook","knative.dev/controller":"webhook"}
{"level":"info","ts":"2019-10-17T08:39:01.279Z","logger":"webhook","caller":"webhook/webhook.go:325","msg":"Successfully registered webhook","knative.dev/controller":"webhook"}
----------------------------------------------------------
Namespace, Pod, Container (Previous instance): knative-eventing, webhook-5fd955f896-2zf2j, webhook
Error from server (BadRequest): previous terminated container "webhook" in pod "webhook-5fd955f896-2zf2j" not found
============================================================
No resources found.
No resources found.
***************************************
***         E2E TEST FAILED         ***
***     End of information dump     ***
***************************************
2019/10/17 08:44:52 process.go:155: Step '/home/prow/go/src/github.com/knative/eventing-contrib/test/e2e-tests.sh --run-tests' finished in 7m21.043294628s
2019/10/17 08:44:52 main.go:316: Something went wrong: encountered 1 errors: [error during /home/prow/go/src/github.com/knative/eventing-contrib/test/e2e-tests.sh --run-tests: exit status 1]
Test subprocess exited with code 1
Artifacts were written to /logs/artifacts
Test result code is 1
==================================
==== INTEGRATION TESTS FAILED ====
==================================
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@@@@ Release validation tests failed, aborting @@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2019/10/17 08:44:52 Cleaning up Docker data root...
... skipping 22 lines ...