From d93e9eb86a104c77275e84077fd20435ab1c9cfa Mon Sep 17 00:00:00 2001
From: Andy Lai <31747472+hklai@users.noreply.github.com>
Date: Fri, 18 Jan 2019 20:56:49 -0800
Subject: [PATCH] Merge release-1.1 into master (#11096)

* fix the test (#10837) Signed-off-by: Kuat Yessenov
* Allow prometheus scraper to fetch port outside of sidecar umbrella (#10492)
  See issue #10487
  - kubernetes-pods job is now keeping all targets without a sidecar or with an explicit prometheus.io/scheme=http annotation
  - kubernetes-pods-istio-secure is now discarding targets with an explicit prometheus.io/scheme=http annotation
* Relax test for kubeenv metric to only error on 'unknowns' (#10787)
  * Relax test for kubeenv metric to only error on 'unknowns'
  * Add check to ensure that at least one metric is found
  * Address lint issues
* Fix Citadel Kube JWT authentication result (#10836)
  * Fix Citadel Kube JWT authentication.
  * Small fix.
  * Fix unittest.
  * Add unit test for coverage.
* Adding Sidecar CRD and renaming Sidecar role (#10852)
  * Sidecar config implementation Signed-off-by: Shriram Rajagopalan
  * build fixes Signed-off-by: Shriram Rajagopalan
  * adding CRD template Signed-off-by: Shriram Rajagopalan
  * format Signed-off-by: Shriram Rajagopalan
  * model.Sidecar to model.SidecarProxy Signed-off-by: Shriram Rajagopalan
  * nits
  * gen files in galley Signed-off-by: Shriram Rajagopalan
  * nit Signed-off-by: Shriram Rajagopalan
  * test fix Signed-off-by: Shriram Rajagopalan
  * e2e tests Signed-off-by: Shriram Rajagopalan
  * comments Signed-off-by: Shriram Rajagopalan
  * compile fix Signed-off-by: Shriram Rajagopalan
  * final snafu Signed-off-by: Shriram Rajagopalan
  * fix yaml path
  * typo
  * bad file name
  * future work Signed-off-by: Shriram Rajagopalan
  * fix bad namespace
  * assorted fixes Signed-off-by: Shriram Rajagopalan
  * fixing CDS Signed-off-by: Shriram Rajagopalan
  * formatting Signed-off-by: Shriram Rajagopalan
  * vendor update Signed-off-by: Shriram Rajagopalan
  * lint Signed-off-by: Shriram Rajagopalan
  * build fixes Signed-off-by: Shriram Rajagopalan
  * undos Signed-off-by: Shriram Rajagopalan
  * lint Signed-off-by: Shriram Rajagopalan
  * nits Signed-off-by: Shriram Rajagopalan
  * test fix Signed-off-by: Shriram Rajagopalan
  * validation Signed-off-by: Shriram Rajagopalan
  * format Signed-off-by: Shriram Rajagopalan
  * comments Signed-off-by: Shriram Rajagopalan
  * format Signed-off-by: Shriram Rajagopalan
  * new crd yaml Signed-off-by: Shriram Rajagopalan
  * nix listener port Signed-off-by: Shriram Rajagopalan
  * kubernetes hack for parsing namespace Signed-off-by: Shriram Rajagopalan
  * some code cleanups and more TODOs Signed-off-by: Shriram Rajagopalan
  * more undos Signed-off-by: Shriram Rajagopalan
  * spell check Signed-off-by: Shriram Rajagopalan
  * more nits Signed-off-by: Shriram Rajagopalan
  * lint Signed-off-by: Shriram Rajagopalan
  * leftovers Signed-off-by: Shriram Rajagopalan
  * undo tests
  * more undos Signed-off-by: Shriram Rajagopalan
  * more undos Signed-off-by: Shriram Rajagopalan
  * del Signed-off-by: Shriram Rajagopalan
  * sidecarproxy Signed-off-by: Shriram Rajagopalan
  * format Signed-off-by: Shriram Rajagopalan
  * compile fixes Signed-off-by: Shriram Rajagopalan
* run log Configure before running server and validation (#10643)
  * run log Configure before running server and validation
  * remove p.logConfigure func from patchTable
  * fix lint
  * fix rebase error
  * fix rebase error
  * fix lint
* add domain parameter to proxy of istio-policy. (#10857)
* Use strings.EqualFold to compare strings (#10859)
* Call check licenses only once (#10866)
* add sample httpbin service in nodeport type (#10833)
* Skip prow e2e test cleanup (#10878)
* Use 128bit traceids in envoy (#10811)
  * Use 128bit traceids in envoy
  * Update unit test golden files for bootstrap config
* Update to latest istio/api changes with MCP enhancements (#10628)
  * sync with latest istio.io/api
    This PR syncs to the latest changes from istio.io/api.
    Notably, this PR includes the enhanced MCP service definitions and protos (ResourceSink and ResourceSource) along with several API cleanups. Minimal changes have been made to fix the build and tests so that subsequent istio.io/api changes can be merged into istio/istio. An additional PR will be introduced to implement the enhanced MCP service layer.
  * address review comments
  * remove bad find/replace
* Add a newline at the end of each certificate returned by Vault (#10879)
  * Add a newline at the end of a certificate
  * Fix the mock test
  * Fix a lint error
* Filter flaky query from galley dashboard test (#10176)
* IPv4 forwarding off for some CircleCI builds (#10777)
  * Log additional information about build machine
  * Attempt to enable IPv4 forwarding
  * tabs to spaces
* stop mcpclient when mixer stops (#10772)
  * stop mcpclient when mixer stops
  * fix test
* pushLds should not verify versions (#10861)
* add integration test for mTLS through identity provisioned by the SDS flow (#10887)
  * add integration test for mTLS through identity provisioned by the SDS flow
  * format
* remove unused files (#10890)
* fix pilot goroutine leak (#10892)
  * fix pilot goroutine leak
  * remove done channel
* Add missing copyright header (#10841)
* Do not fail envoy health probe if a config was rejected (#9786) (#10154)
  * Do not fail envoy health probe if a config was rejected (#9786)
    Adjust so that rejection is also an allowed state of the health probe for envoy.
    Co-authored-by: Ralf Pannemans
  * Add unit tests for envoy health probe Co-authored-by: Ralf Pannemans
  * Fixed linting Co-authored-by: Ralf Pannemans
  * Fix another linting problem Co-authored-by: Ralf Pannemans
  * Add new stats to String() method
  * Use better wording in log message
  * Fix linting Co-authored-by: Ulrich Kramer
* Move everything related to spiffe URIs to package spiffe (#9090)
  * Move everything related to spiffe URIs to package spiffe Co-authored-by: Ulrich Kramer
  * Fix end-to-end tests after merge Co-authored-by: Julia Plachetka
  * Adapt and fix unit tests. Co-authored-by: Ralf Pannemans
  * Adapt and fix unit tests.
  * Fix lint errors and unit tests
  * Fix lint errors
  * Fix lint errors
  * Fix lint errors. Exit integration test in case of a nonexistent secret
  * Remove duplicate trustDomain
  * Fixed compile errors
  * Fixed lint errors
  * Fixed lint errors
  * Do not panic and small fixes
  * Do not panic when spiffe uri is missing some configuration values
  * Remove environment variable ISTIO_SA_DOMAIN_CANONICAL
  * Fix SNA typo
  * Comment why testing for a kube registry Co-authored-by: Holger Oehm
  * goimports-ed Co-authored-by: Holger Oehm
  * Adapt test to getSpiffeId no longer panicking Co-authored-by: Holger Oehm
  * Fix formatting Co-authored-by: Holger Oehm
  * Fix lint errors and unit tests Co-authored-by: Holger Oehm
  * Fix double declared imports Co-authored-by: Ralf Pannemans
  * Fix more import-related linting Co-authored-by: Ralf Pannemans
* Add retry to metrics check in TestTcpMetrics (#10816)
  * Add retry to metrics check in TestTcpMetrics
  * Small cleanup
  * Fix typo
* set trust domain (#10905)
* Fix New Test Framework tests running in kubernetes environment (#10889)
  * Fix New Test Framework tests running in kubernetes environment
    After the change https://github.com/istio/istio/pull/10562, Istio Deployment in the new test framework started failing. This PR tries to fix that.
  * Minor fix
* Add Pod and Node sources to Galley. (#10846)
  * Add Pod and Node sources to Galley.
    Also plumbing annotations and labels through from the source.
  * adding access for pods/nodes to deployment.
  * plumbing labels/annotations through Pilot
* implement empty header value expression (#10885) Signed-off-by: Kuat Yessenov
* provide some context on bootstrap errors (#10696) - rebased on release-1.1
* fix(#10911): add namespace for crd installation jobs (#10912)
* restore MCP registry (#10921)
* fix a typo to get familiar with the PR process (#10853) Signed-off-by: YaoZengzeng
* Mixer route cache (#10539)
  * rebase
  * add test
  * fix lint
* Revert "Mixer route cache (#10539)" (#10936)
  This reverts commit 024adb0e5edfd902939211d321e5459758046905.
* Clean up the Helm readiness checking in test cases (#10929)
  * Clean up the Helm readiness checking in test cases
    The e2e test cases are often flaky because of the logic of Helm readiness checking in the test cases. Instead of checking whether the Pod is in the "RUNNING" state, check that Tiller is able to provide service via the `helm version` operation. If the server is not ready, this will return 1, otherwise 0 will be returned.
  * Fix CLI call error
    We have an older version of helm which lacks the proper flag. Instead we rely on the retry with a 10-second context timer.
* Test for PERMISSIVE mode, checks Pilot LDS output. (#10614)
* injector changes for health check, pilot agent takes over app readiness check. (#9266)
  * WIP injector change to modify istio-proxy.
  * move out to app_probe.go
  * Iterating sidecartmpl to find the statusPort.
  * use the same name for ready path.
  * Get rewrite working, almost.
  * Some clean up on test and check one-container criteria.
  * fix the injected test file.
  * Add inject test for readiness probe itself.
  * Add missing added test file.
  * fix helm test.
  * fix lint.
  * update header-based finding of the port.
  * return to previous injected file status.
  * fixing TestIntoResource test.
  * sed fixing all remaining injecting files.
  * handling named port.
  * fixing merging failure.
  * remove the debug print.
  * lint fixing.
  * Apply the suggestions for finding statusPort arg.
  * Address comments; regex supports more port value formats.
  * add app_probe_test.go
  * add more tests.
  * merge fix the test.
  * WIP adding test, not working.
  * change k8s env applycontents.
  * pilot_test.go working, adding the policy.
  * adding authn in the setup.
  * progress, app is in istio-system.
  * simplify the pilot_test.go
  * get config dump for app a.
  * config is dumped and testhttp passes.
  * WIP need to figure out why config dump is different from lds output.
  * finally hacked to get lds output.
  * almost ready to verify the listener config
  * get test working, remove some debugging print.
  * move to permissive_test.go
  * clean up on test file.
  * add back auth_permissive_test.go
  * add some doc and remove infolog.
  * refine comments.
  * goimports fix.
  * bin/fmt.sh
  * apply comments.
  * add one more test case.
  * rename the ConstructDiscoveryRequest.
  * comment out unimplemented test.
  * change back logging level.
* Sidecar config implementation (#10717)
  * Sidecar config implementation Signed-off-by: Shriram Rajagopalan
  * build fixes Signed-off-by: Shriram Rajagopalan
  * adding CRD template Signed-off-by: Shriram Rajagopalan
  * format Signed-off-by: Shriram Rajagopalan
  * model.Sidecar to model.SidecarProxy Signed-off-by: Shriram Rajagopalan
  * nits
  * gen files in galley Signed-off-by: Shriram Rajagopalan
  * nit Signed-off-by: Shriram Rajagopalan
  * test fix Signed-off-by: Shriram Rajagopalan
  * e2e tests Signed-off-by: Shriram Rajagopalan
  * comments Signed-off-by: Shriram Rajagopalan
  * compile fix Signed-off-by: Shriram Rajagopalan
  * final snafu Signed-off-by: Shriram Rajagopalan
  * fix yaml path
  * typo
  * bad file name
  * future work Signed-off-by: Shriram Rajagopalan
  * fix bad namespace
  * assorted fixes Signed-off-by: Shriram Rajagopalan
  * fixing CDS Signed-off-by: Shriram Rajagopalan
  * formatting Signed-off-by: Shriram Rajagopalan
  * vendor update Signed-off-by: Shriram Rajagopalan
  * lint Signed-off-by: Shriram Rajagopalan
  * build fixes Signed-off-by: Shriram Rajagopalan
  * undos Signed-off-by: Shriram Rajagopalan
  * lint Signed-off-by: Shriram Rajagopalan
  * nits Signed-off-by: Shriram Rajagopalan
  * test fix Signed-off-by: Shriram Rajagopalan
  * validation Signed-off-by: Shriram Rajagopalan
  * format Signed-off-by: Shriram Rajagopalan
  * comments Signed-off-by: Shriram Rajagopalan
  * format Signed-off-by: Shriram Rajagopalan
  * new crd yaml Signed-off-by: Shriram Rajagopalan
  * nix listener port Signed-off-by: Shriram Rajagopalan
  * kubernetes hack for parsing namespace Signed-off-by: Shriram Rajagopalan
  * some code cleanups and more TODOs Signed-off-by: Shriram Rajagopalan
  * more undos Signed-off-by: Shriram Rajagopalan
  * spell check Signed-off-by: Shriram Rajagopalan
  * more nits Signed-off-by: Shriram Rajagopalan
  * lint Signed-off-by: Shriram Rajagopalan
  * leftovers Signed-off-by: Shriram Rajagopalan
  * compile fixes Signed-off-by: Shriram Rajagopalan
  * undo lint fix
  * temp undo
  * ingress and egress listeners on ports Signed-off-by: Shriram Rajagopalan
  * nits Signed-off-by: Shriram Rajagopalan
  * if-else Signed-off-by: Shriram Rajagopalan
  * missing inbound port fixes Signed-off-by: Shriram Rajagopalan
  * remove constants Signed-off-by: Shriram Rajagopalan
  * lint Signed-off-by: Shriram Rajagopalan
  * final fix Signed-off-by: Shriram Rajagopalan
  * lints Signed-off-by: Shriram Rajagopalan
  * fix http host header Signed-off-by: Shriram Rajagopalan
  * lint Signed-off-by: Shriram Rajagopalan
  * more if-elses Signed-off-by: Shriram Rajagopalan
  * more lint Signed-off-by: Shriram Rajagopalan
  * more lint and code cov Signed-off-by: Shriram Rajagopalan
  * format Signed-off-by: Shriram Rajagopalan
  * simplifications
  * remove GetSidecarScope Signed-off-by: Shriram Rajagopalan
  * coverage Signed-off-by: Shriram Rajagopalan
  * missing configs Signed-off-by: Shriram Rajagopalan
  * 80 Signed-off-by: Shriram Rajagopalan
  * remove invalid test case
  * fixing rds bug Signed-off-by: Shriram Rajagopalan
  * remove comment
    Signed-off-by: Shriram Rajagopalan
  * RDS unit tests Signed-off-by: Shriram Rajagopalan
  * format Signed-off-by: Shriram Rajagopalan
  * lint again Signed-off-by: Shriram Rajagopalan
* Filter Nodes/Pods in Galley temporarily until custom sources land. (#10938)
  This is due to the fact that Pod yaml cannot currently be parsed into unstructured types. See: #10891.
* fix concurrent map read/write (#10895)
  * fix concurrent map read/write
  * simplify EndpointShardsByService
* Update integration test job (#10888)
  * Fix integration test scripts
  * Making TestMain exit with the proper return code
  * Update local env references to native
  * Fix linter errors
  * Skipping integration tests in codecov since they fail
* grant execute permission to e2e_pilotv2_auth_sds.sh (#10908)
  * grant execute permission to e2e_pilotv2_auth_sds.sh
  * fix typo
  * fix typo
  * typo
  * coredump
* remove deprecated plugin from nodeagent (#10952)
* Fix flaky test by reducing poll interval. (#10962)
* Add interceptor to create noop spans when sampling is false (#10826)
  * Add interceptor to create noop spans when sampling is false
  * Add tests using mocktracer to determine whether span is created
  * Update dependencies to include OpenTracing mocktracer
  * Minor change
  * Updated dependencies again
  * Add support for ErrSpanContextNotFound error
  * Fix test and add one for x-b3-sampled=true
  * Fix lint error
* set cluster.LoadAssignment only when service discovery type equals Cluster_STATIC, Cluster_STRICT_DNS, or Cluster_LOGICAL_DNS (#10926)
* Remove Envoy's deprecated --v2-config-only (release-1.1). (#10960) Signed-off-by: Piotr Sikora
* update check proxy version (#10769)
* Add AWS CloudwatchLogs Adapter (code from #10400) (#10882)
  * Add AWS CloudwatchLogs Adapter (code from #10400)
  * Improve codecov
  * Even moar coverage
* remove duplicate LoadAssignment set (#10977)
* Enable server-side control over maximum connection age (#10870)
  * add server-side maximum connection age control to keepalive options
  * add server maximum connection age to the gRPC server keepalive options
  * missing space between concatenated strings
  * added tests for default values and setting via command line
  * fix golangci unconvert comment
* add helm value file to google ca param (#10563)
  * add helm value file to preconfig param for googleca
  * cleanup
* Allow pulling images from private repository (#10763)
* Only compute diff for ServiceEntry (#10446)
  * Only compute diff for ServiceEntry
    This change prevents the coredatamodel controller from computing the diff for all types and narrows it down to only ServiceEntry.
  * Add a dummy event for other config types - this dummy event allows DiscoveryServer to purge its cache
  * Trigger a single clear cache event
* add exponential backoff for retryable CSR error in nodeagent (#10969)
  * backoff
  * add unit test
  * clean up
  * lint
  * lint
  * address comment
  * typo
* Fix flakiness in redisquota tests (#10906)
  * Fix flakiness in redisquota tests by adding retry for getting requests reported by prometheus
    One of the things I observed in flaky tests is that the total number of requests reported by prometheus was not equal to the traffic sent by Fortio. Thus adding a retry to make sure prometheus is queried until we get all requests reported.
  * Add a buffer allowing up to 5 requests to go unreported. This buffer is within the error we allow for 200s and 429s reporting.
  * Fix based on reviews
  * Fix lint errors
* Adding make sync to integ test script (#10984)
* Removing Galley pod and node datasets from tests (#10953)
* Use common image for node agent (#10949)
  * Use common image for node agent
  * Revert node-agent-k8s
  * Sort the package
* fix MCP server goroutine leak (#10893)
  * fix MCP server goroutine leak
  * fix race condition
* fix race condition between reqChannel blocking and stream context done (#10998)
* add default namespace for istio-init. (#11012)
* Handle outbound traffic policy (#10869)
  * add passthru listener only for mesh config outbound traffic policy ALLOW_ANY
  * add outbound traffic policy to configmap template and values
  * add the listener and blackhole cluster in case of outbound policy REGISTRY_ONLY
  * update DefaultMeshConfig with OutboundTrafficPolicy
  * use ALLOW_ANY outbound policy by default in tests
  * add OutboundTrafficPolicy to the default meshconfig of galley
  * Revert "use ALLOW_ANY outbound policy by default in tests"
    This reverts commit 90457899dc27d4e8cae8016520b1e320a77b2bb1.
  * use REGISTRY_ONLY OutboundTrafficPolicy for galley tests
* adopt notion of collections throughout galley/mcp (#10963)
  * adopt notion of collections throughout galley/mcp
  * add missing 's/TypeURLs()/Collections()'
  * fix linter errors and missing dep
  * linter fixes
  * another linter fix
  * address review comments
  * use correct collection name in copilot test
  * fix TestConversion/config.istio.io_v1alpha2_circonus
  * update copilot e2e tests
  * fix pilot/pkg/config/coredatamodel/controller_test.go unit test
  * re-add TypeURL and remove typeurl from collections
* add Bearer prefix to the oauth token that is passed to GoogleCA (#11018)
* Add bionic and deb_slim base images, optimize size for xenial (#10992)
* Remove redundant pieces of code (#11014)
* Increase timeout (#11019)
* mixer: gateway regression (#10966)
  * gateway test Signed-off-by: Kuat Yessenov
  * prepare a test Signed-off-by: Kuat Yessenov
* Merge the new tests for isolation=none, some fixes (#10958)
  * Merge the new tests for isolation=none, some fixes
  * Add a local directory with certs, can be used with the basedir for local tests
  * If a BaseDir meta is specified, use it as prefix for the certs - so tests don't need / access
  * Add the pilot constant and doc
  * Fix mangled sidecarByNamespace, scope issue
  * Fix binding inbound listeners to 0.0.0.0, test
  * Format
  * Lint
  * Add back the validation
  * Reduce flakiness, golden diff reported as warning
  * Manual format, make fmt doesn't seem to help
  * Fix authn test
  * Fix authn test
  * Reduce parallel to avoid flakiness, fix copilot test
  * format
* remove 'crds' option in relevant manifests (#11013)
  * remove crds option in istio chart.
  * delete crds option in values*.yaml
  * add istio-init as prerequisite of istio chart.
* Delete this superfluous script. (#11028)
* Refactor in preparation for reverse and incremental MCP (#11005)
  This PR refactors the MCP client, server, and monitoring packages in preparation for introducing reverse MCP.
  This includes the following changes:
  * Structs/Interfaces common to MCP sinks are moved into the sink package.
  * Structs/Interfaces common to MCP sources are moved into source packages.
  * The client and server metrics reporting logic is merged into a single reporter interface and implementation, since the majority of the code is duplicated. This makes it easier to use a single reporter interface across all source/sink and client/server combinations.
  * Plumb through source/sink options
* Port Mixer's TestTcpMetricTest in new Test framework (#10844)
  * Port Mixer's TestTcpMetricTest in new Test framework
  * Look at values file too to determine whether mtls is enabled for the test or not.
* Add unix domain socket client and server to pilot test apps (#10874)
  * Add unix domain socket client and server to pilot test apps Signed-off-by: Shriram Rajagopalan
  * snafu Signed-off-by: Shriram Rajagopalan
  * appends Signed-off-by: Shriram Rajagopalan
  * compile fix Signed-off-by: Shriram Rajagopalan
  * template fixes Signed-off-by: Shriram Rajagopalan
  * more gotpl Signed-off-by: Shriram Rajagopalan
  * undos
  * undo Signed-off-by: Shriram Rajagopalan
* Fixing new framework integration test (#11038)
  Fixes are as follows:
  1) PolicyBackend close is failing when closing the listener in the native environment. Thus ignoring its error and making the policy backend a system component, so that it is just reset between the tests and not really closed.
  2) Skipping conversion test in local environment as it requires a kubernetes environment.
  3) Increasing timeout of tests in kubernetes environment
  4) Adding test namespace in mixer check test.
* Use proxyLabels that were collected earlier (#11016)
* Fix comment on defaultNodeSelector (#10980)
* tracing: Provide default configuration when no host specified for k8s ingress (#10914)
  * tracing: Provide default configuration when no host specified for k8s ingress
  * Remove jaeger ingress in favour of one ingress with context based on provider
  * Updated to remove $ from .Values
* Add ymesika to pilot owners (#11053)
* Restart Galley in native test fw. component to avoid race. (#11048)
  There is a race between Galley reading the updated mesh config file and processing of input config files. This change restarts Galley every time the mesh config is updated, to avoid the race.
* Update Istio API to include selector changes in AuthN/AuthZ. (#11046)
  The following changes are included from istio.io/api:
  aec9db9 Add option to select worload using lables for authn policy. (#755)
  2dadb9e add optional incremental flag to ResponseSink and ResourceSource services (#762)
  d341fc8 assorted doc updates (#757)
  48ad354 Update RBAC for Authorization v2 API. (#748)
  f818794 add optional header operations (#753)
  Signed-off-by: Yangmin Zhu
* update proxy SHA (#11036)
  * update proxy SHA
  * Update Proxy SHA to d2d0c62a045d12924180082e8e4b6fbe0a20de1d
* Add an example helm values yaml for Vault integration user guide (#11024)
  * Add an example helm values yaml for Vault integration user guide
  * Add a comment
* Add retry logic to the SDS grpc server of Node Agent (#11063)
* Quick fix for https://github.com/istio/istio/issues/10779 (#11061)
  * Basic fix to Ingress conversion.
  * Makes changes based on Ingress changes.
  * Linter fix.
  * Remove labels as well.
* session affinity (#10730)
* handle special char in trustdomain (to construct sa for secure naming) (#11066)
  * replace special char
  * update comment
* enabled customized cluster domain for chart. (#11050)
  * enabled customized cluster domain for chart.
  * update webhook unit test data.
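The trust-domain fix above rewrites characters in the trust domain before it is assembled into the service-account identity used for secure naming. A sketch of the idea, where `sanitizeTrustDomain` and its replacement rule are hypothetical illustrations, not the exact mapping the Istio change uses:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeTrustDomain is a hypothetical helper: it swaps characters that
// can appear in trust domains (e.g. "@" in GCP service accounts) but are
// awkward in the identity string used for secure naming. The real change
// may rewrite a different character set.
func sanitizeTrustDomain(td string) string {
	return strings.NewReplacer("@", "-", "/", "-").Replace(td)
}

func main() {
	td := sanitizeTrustDomain("example@gserviceaccount.com")
	// Assemble a SPIFFE-style identity for service account "default"
	// in namespace "ns" under the sanitized trust domain.
	fmt.Printf("spiffe://%s/ns/%s/sa/%s\n", td, "ns", "default")
}
```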
* Restructure Galley sources (#11062)
  * Restructure Galley sources
    This is a series of simple moves in preparation for #10995
  * addressing comments
* assign back to s.mesh when reloading the mesh config file (#11000) Signed-off-by: YaoZengzeng
* Moving Galley source to dynamic package. (#11081)
  This is in preparation for #10995. Trying to do this move in order to preserve history.
* Add reasonable default retry policy. (#10566)
  Partially addresses #7665.
* Reduce flakiness in metrics test in new test framework (#11070)
  * Reduce flakiness in metrics test in new test framework
  * Fix based on review
* Fix merge
---
 Gopkg.lock | 30 +-
 Gopkg.toml | 4 +
 Makefile | 15 +-
 bin/linters.sh | 1 -
 bin/testEnvRootMinikube.sh | 26 +-
 codecov.skip | 1 +
 codecov.threshold | 8 +
 docker/Dockerfile.bionic_debug | 36 +
 docker/Dockerfile.deb_debug | 37 +
 docker/Dockerfile.xenial_debug | 5 +-
 downloadIstio.sh | 35 -
 galley/cmd/galley/cmd/probe.go | 12 +-
 galley/cmd/galley/cmd/root.go | 38 +-
 galley/cmd/galley/main.go | 3 +-
 galley/cmd/shared/shared.go | 45 -
 galley/pkg/crd/validation/validation.go | 10 +-
 .../kube/converter/legacy/legacymixer.pb.go | 92 -
 .../kube/converter/legacy/legacymixer.proto | 36 -
 galley/pkg/meshconfig/cache.go | 1 +
 galley/pkg/meshconfig/defaults.go | 1 +
 galley/pkg/metadata/kube/types.go | 625 +-
 galley/pkg/metadata/kube/types_test.go | 2 +-
 galley/pkg/metadata/types.go | 389 +-
 galley/pkg/metadata/types_test.go | 18 +-
 galley/pkg/runtime/conversions/ingress.go | 38 +-
 .../pkg/runtime/conversions/ingress_test.go | 12 +-
 galley/pkg/runtime/monitoring.go | 16 +-
 galley/pkg/runtime/processor_test.go | 20 +-
 galley/pkg/runtime/resource/resource.go | 59 +-
 galley/pkg/runtime/resource/resource_test.go | 161 +-
 galley/pkg/runtime/resource/schema.go | 52 +-
 galley/pkg/runtime/resource/schema_test.go | 93 +-
 galley/pkg/runtime/source.go | 14 +-
 galley/pkg/runtime/source_test.go | 28 +-
 galley/pkg/runtime/state.go | 137 +-
 galley/pkg/runtime/state_test.go | 62 +-
 galley/pkg/server/args.go | 6 -
 galley/pkg/server/configmap.go | 21 +-
 galley/pkg/server/server.go | 79 +-
 galley/pkg/server/server_test.go | 52 +-
 galley/pkg/{ => source}/fs/fssource.go | 34 +-
 galley/pkg/{ => source}/fs/fssource_test.go | 20 +-
 galley/pkg/{ => source}/fs/fsutilities.go | 0
 .../kube/client}/interfaces.go | 2 +-
 .../kube/client}/interfaces_test.go | 2 +-
 .../kube/dynamic}/converter/config.go | 0
 .../kube/dynamic}/converter/converter.go | 54 +-
 .../kube/dynamic}/converter/converter_test.go | 468 +-
 .../kube/dynamic}/converter/proto.go | 0
 .../kube/dynamic}/converter/proto_test.go | 8 +-
 .../kube/dynamic}/listener.go | 51 +-
 .../kube/dynamic}/listener_test.go | 29 +-
 .../source => source/kube/dynamic}/source.go | 58 +-
 .../kube/dynamic}/source_test.go | 53 +-
 .../gen.go => source/kube/log/scope.go} | 11 +-
 .../kube/schema/check/check.go} | 25 +-
 .../kube/schema/check/check_test.go} | 6 +-
 .../kube/schema/instance.go} | 26 +-
 .../kube/schema/instance_test.go} | 4 +-
 .../kube/schema}/resourcespec.go | 4 +-
 .../kube/schema}/resourcespec_test.go | 2 +-
 .../kube/stats/stats.go} | 17 +-
 galley/pkg/testing/testdata/dataset.gen.go | 1002 ++-
 .../v1alpha2/circonus_expected.json | 32 +-
 .../v1beta1/ingress_basic_expected.json | 4 +-
 .../extensions/v1beta1/ingress_merge_0.skip | 0
 .../v1beta1/ingress_merge_0_expected.json | 12 +-
 .../v1beta1/ingress_merge_1_expected.json | 12 +-
 .../v1alpha3/destinationRule_expected.json | 4 +-
 .../v1alpha3/gateway_expected.json | 4 +-
 .../testdata/dataset/v1/node.yaml.skip | 191 +
 .../testdata/dataset/v1/node_expected.json | 31 +
 .../testing/testdata/dataset/v1/pod.yaml.skip | 275 +
 .../testdata/dataset/v1/pod_expected.json | 282 +
 galley/tools/gen-meta/main.go | 66 +-
 galley/tools/gen-meta/metadata.yaml | 263 +-
 galley/tools/mcpc/main.go | 21 +-
 .../deployment_manager/istio-cluster.jinja | 1 +
 install/kubernetes/helm/istio-init/README.md | 4 +-
 .../helm/istio-init/files/crd-10.yaml | 21 +
 .../helm/istio-init/templates/job-crd-10.yaml | 1 +
 .../helm/istio-init/templates/job-crd-11.yaml | 1 +
 .../templates/job-crd-certmanager-10.yaml | 1 +
 .../templates/sidecar-injector-configmap.yaml | 4 +-
 .../kubernetes/helm/istio-remote/values.yaml | 14 +-
 install/kubernetes/helm/istio/README.md | 12 +-
 .../helm/istio/templates/configmap.yaml | 17 +-
 .../templates/sidecar-injector-configmap.yaml | 4 +-
 .../istio/values-istio-example-sds-vault.yaml | 31 +
 .../helm/istio/values-istio-gateways.yaml | 7 -
 .../helm/istio/values-istio-googleca.yaml | 22 +
 .../helm/istio/values-istio-minimal.yaml | 6 -
 install/kubernetes/helm/istio/values.yaml | 28 +-
 .../galley/templates/clusterrole.yaml | 9 +
 .../gateways/templates/deployment.yaml | 4 +-
 .../gateways/templates/preconfigured.yaml | 2 +-
 .../subcharts/grafana/templates/gateway.yaml | 4 +-
 .../ingress/templates/deployment.yaml | 4 +-
 .../subcharts/kiali/templates/gateway.yaml | 4 +-
 .../subcharts/mixer/templates/config.yaml | 4 +-
 .../subcharts/mixer/templates/deployment.yaml | 6 +-
 .../subcharts/mixer/templates/service.yaml | 3 +
 .../helm/subcharts/mixer/values.yaml | 1 +
 .../subcharts/pilot/templates/deployment.yaml | 11 +-
 .../pilot/templates/meshexpansion.yaml | 14 +-
 .../prometheus/templates/configmap.yaml | 10 +-
 .../prometheus/templates/gateway.yaml | 4 +-
 .../security/templates/enable-mesh-mtls.yaml | 2 +-
 .../security/templates/meshexpansion.yaml | 8 +-
 .../tracing/templates/deployment-jaeger.yaml | 4 +-
 .../subcharts/tracing/templates/gateway.yaml | 4 +-
 .../tracing/templates/ingress-jaeger.yaml | 41 -
 .../subcharts/tracing/templates/ingress.yaml | 20 +-
 .../helm/subcharts/tracing/values.yaml | 16 +-
 .../pkg/writer/envoy/clusters/clusters.go | 6 +-
 .../pkg/writer/envoy/configdump/listener.go | 4 +-
 mixer/adapter/cloudwatch/client.go | 8 +
 mixer/adapter/cloudwatch/cloudwatch.go | 83 +-
 mixer/adapter/cloudwatch/cloudwatch_test.go | 166 +-
 .../config/adapter.cloudwatch.config.pb.html | 59 +-
 .../adapter/cloudwatch/config/cloudwatch.yaml | 2 +-
 mixer/adapter/cloudwatch/config/config.pb.go | 568 +-
 mixer/adapter/cloudwatch/config/config.proto | 23 +-
 .../cloudwatch/config/config.proto_descriptor | Bin 4518 -> 5881 bytes
 mixer/adapter/cloudwatch/logHandler.go | 158 +
 mixer/adapter/cloudwatch/logHandler_test.go | 355 +
 .../{handler.go => metricHandler.go} | 0
 ...{handler_test.go => metricHandler_test.go} | 26 +-
 .../cloudwatch/operatorconfig/cloudwatch.yaml | 32 +-
 mixer/pkg/config/mcp/backend.go | 69 +-
 mixer/pkg/config/mcp/backend_test.go | 143 +-
 mixer/pkg/config/mcp/conversion.go | 118 +-
 mixer/pkg/config/store/queue.go | 11 +-
 mixer/pkg/config/store/store.go | 11 +-
 mixer/pkg/runtime/dispatcher/session.go | 3 +
 mixer/pkg/runtime/testing/data/data.go | 3 +
 mixer/pkg/server/interceptor.go | 104 +
 mixer/pkg/server/interceptor_test.go | 156 +
 mixer/pkg/server/server.go | 11 +-
 mixer/pkg/server/server_test.go | 3 -
 mixer/test/client/env/envoy.go | 4 +-
 mixer/test/client/env/http_client.go | 2 +-
 mixer/test/client/env/ports.go | 1 +
 mixer/test/client/env/setup.go | 3 +
 mixer/test/client/gateway/gateway_test.go | 327 +
 .../client/pilotplugin/pilotplugin_test.go | 4 +-
 .../pilotplugin_mtls/pilotplugin_mtls_test.go | 4 +-
 .../pilotplugin_tcp/pilotplugin_tcp_test.go | 4 +-
 .../route_directive/route_directive_test.go | 16 +-
 pilot/OWNERS | 1 +
 pilot/cmd/pilot-agent/main.go | 11 +-
 .../pilot-agent/status/ready/probe_test.go | 4 +-
 pilot/cmd/pilot-discovery/main.go | 21 +-
 pilot/docker/envoy_pilot.yaml.tmpl | 2 +-
 pilot/docker/envoy_policy.yaml.tmpl | 2 +-
 pilot/pkg/bootstrap/server.go | 71 +-
 pilot/pkg/config/coredatamodel/controller.go | 183 +-
 .../config/coredatamodel/controller_test.go | 267 +-
 pilot/pkg/config/kube/crd/types.go | 113 +
 .../testdata/webhook/daemonset.yaml.injected | 2 +
 .../deploymentconfig-multi.yaml.injected | 2 +
 .../webhook/deploymentconfig.yaml.injected | 2 +
 .../testdata/webhook/frontend.yaml.injected | 2 +
 .../hello-config-map-name.yaml.injected | 2 +
 .../webhook/hello-multi.yaml.injected | 4 +
 .../webhook/hello-probes.yaml.injected | 2 +
 .../inject/testdata/webhook/job.yaml.injected | 2 +
 .../webhook/list-frontend.yaml.injected | 2 +
 .../testdata/webhook/list.yaml.injected | 4 +
 .../testdata/webhook/multi-init.yaml.injected | 2 +
 .../testdata/webhook/replicaset.yaml.injected | 2 +
 .../replicationcontroller.yaml.injected | 2 +
 .../resource_annotations.yaml.injected | 2 +
 .../webhook/statefulset.yaml.injected | 2 +
 .../webhook/status_annotations.yaml.injected | 2 +
 ...c-annotations-empty-includes.yaml.injected | 2 +
 ...raffic-annotations-wildcards.yaml.injected | 2 +
 .../webhook/traffic-annotations.yaml.injected | 2 +
 .../webhook/user-volume.yaml.injected | 2 +
 pilot/pkg/model/config.go | 31 +
 pilot/pkg/model/context.go | 120 +-
 pilot/pkg/model/context_test.go | 2 +-
 pilot/pkg/model/push_context.go | 199 +-
 pilot/pkg/model/service.go | 7 +-
 pilot/pkg/model/sidecar.go | 389 +
 pilot/pkg/model/validation.go | 181 +-
 pilot/pkg/model/validation_test.go | 295 +-
 pilot/pkg/networking/core/v1alpha3/cluster.go | 405 +-
 .../networking/core/v1alpha3/cluster_test.go | 505 +-
 .../pkg/networking/core/v1alpha3/configgen.go | 41 +-
 .../networking/core/v1alpha3/envoyfilter.go | 2 +-
 .../core/v1alpha3/envoyfilter_test.go | 2 +-
 pilot/pkg/networking/core/v1alpha3/gateway.go | 6 +-
 .../pkg/networking/core/v1alpha3/httproute.go | 75 +-
 .../core/v1alpha3/httproute_test.go | 240 +
 .../pkg/networking/core/v1alpha3/listener.go | 1182 ++-
 .../networking/core/v1alpha3/listener_test.go | 133 +-
 .../networking/core/v1alpha3/networkfilter.go | 5 +-
 .../core/v1alpha3/route/retry/retry.go | 111 +
 .../core/v1alpha3/route/retry/retry_test.go | 178 +
 .../networking/core/v1alpha3/route/route.go | 80 +-
 .../core/v1alpha3/route/route_test.go | 2 +-
 pilot/pkg/networking/core/v1alpha3/tls.go | 59 +-
 .../networking/plugin/authn/authentication.go | 28 +-
 .../plugin/authn/authentication_test.go | 6 +-
pilot/pkg/networking/plugin/authz/rbac.go | 6 +- pilot/pkg/networking/plugin/health/health.go | 2 +- pilot/pkg/networking/plugin/mixer/mixer.go | 27 +- .../pkg/networking/plugin/mixer/mixer_test.go | 142 - pilot/pkg/networking/plugin/plugin.go | 6 + pilot/pkg/networking/util/util.go | 37 +- pilot/pkg/networking/util/util_test.go | 45 +- pilot/pkg/proxy/envoy/infra_auth.go | 18 +- pilot/pkg/proxy/envoy/infra_auth_test.go | 18 +- pilot/pkg/proxy/envoy/proxy.go | 3 - pilot/pkg/proxy/envoy/v2/README.md | 2 +- pilot/pkg/proxy/envoy/v2/ads.go | 38 +- pilot/pkg/proxy/envoy/v2/cds.go | 4 +- pilot/pkg/proxy/envoy/v2/discovery.go | 33 +- pilot/pkg/proxy/envoy/v2/eds.go | 72 +- pilot/pkg/proxy/envoy/v2/eds_test.go | 4 +- pilot/pkg/proxy/envoy/v2/ep_filters.go | 3 +- pilot/pkg/proxy/envoy/v2/lds.go | 10 +- pilot/pkg/proxy/envoy/v2/lds_test.go | 154 + pilot/pkg/proxy/envoy/v2/mem.go | 8 +- pilot/pkg/proxy/envoy/v2/rds.go | 2 +- .../pkg/proxy/envoy/v2/testdata/none_cds.json | 211 + .../proxy/envoy/v2/testdata/none_ecds.json | 117 + .../pkg/proxy/envoy/v2/testdata/none_eds.json | 146 + .../envoy/v2/testdata/none_lds_http.json | 1231 +++ .../proxy/envoy/v2/testdata/none_lds_tcp.json | 2050 +++++ .../pkg/proxy/envoy/v2/testdata/none_rds.json | 208 + pilot/pkg/proxy/envoy/v2/xds_test.go | 10 +- .../aggregate/controller_test.go | 2 +- .../pkg/serviceregistry/consul/controller.go | 4 +- pilot/pkg/serviceregistry/kube/controller.go | 9 +- .../serviceregistry/kube/controller_test.go | 18 +- pilot/pkg/serviceregistry/kube/conversion.go | 23 +- .../serviceregistry/kube/conversion_test.go | 5 + pilot/pkg/serviceregistry/memory/discovery.go | 6 +- .../serviceregistry/memory/discovery_mock.go | 4 +- pilot/pkg/serviceregistry/platform.go | 2 + pkg/adsc/adsc.go | 159 +- pkg/bootstrap/bootstrap_config.go | 13 +- pkg/bootstrap/testdata/all_golden.json | 3 +- pkg/bootstrap/testdata/running_golden.json | 3 +- .../testdata/tracing_zipkin_golden.json | 3 +- pkg/ctrlz/topics/assets.gen.go | 8 +- 
pkg/features/pilot/pilot.go | 23 +- pkg/keepalive/options.go | 28 +- pkg/keepalive/options_test.go | 48 + pkg/mcp/client/client.go | 225 +- pkg/mcp/client/client_test.go | 465 +- pkg/mcp/client/monitoring.go | 173 - pkg/mcp/configz/assets.gen.go | 13 +- pkg/mcp/configz/assets/templates/config.html | 10 +- pkg/mcp/configz/configz.go | 40 +- pkg/mcp/configz/configz_test.go | 60 +- pkg/mcp/creds/pollingWatcher.go | 6 +- pkg/mcp/creds/pollingWatcher_test.go | 6 +- pkg/mcp/env/env.go | 48 + pkg/mcp/env/env_test.go | 77 + pkg/mcp/internal/test/auth_checker.go | 41 + pkg/mcp/internal/test/types.go | 198 + pkg/mcp/{server => monitoring}/monitoring.go | 129 +- pkg/mcp/server/listchecker.go | 79 +- pkg/mcp/server/listchecker_test.go | 12 +- pkg/mcp/server/server.go | 213 +- pkg/mcp/server/server_test.go | 406 +- pkg/mcp/sink/journal.go | 141 + pkg/mcp/sink/journal_test.go | 81 + pkg/mcp/sink/sink.go | 125 + pkg/mcp/sink/sink_test.go | 59 + pkg/mcp/snapshot/inmemory.go | 82 +- pkg/mcp/snapshot/inmemory_test.go | 187 +- pkg/mcp/snapshot/snapshot.go | 61 +- pkg/mcp/snapshot/snapshot_test.go | 180 +- pkg/mcp/source/source.go | 97 + pkg/mcp/testing/monitoring/reporter.go | 124 +- pkg/mcp/testing/server.go | 31 +- pkg/spiffe/spiffe.go | 61 + pkg/spiffe/spiffe_test.go | 83 + pkg/test/application/echo/batch.go | 16 + pkg/test/application/echo/client/main.go | 3 + pkg/test/application/echo/echo.go | 79 +- pkg/test/application/echo/server/main.go | 11 +- pkg/test/config.go | 14 +- pkg/test/deployment/helm.go | 36 +- pkg/test/env/istio.go | 2 + pkg/test/envoy/envoy.go | 2 - pkg/test/fakes/policy/backend.go | 7 +- pkg/test/framework/api/components/bookinfo.go | 38 + pkg/test/framework/api/components/galley.go | 2 +- .../framework/api/descriptors/descriptors.go | 2 +- pkg/test/framework/operations.go | 4 +- .../components/apps/agent/pilot_agent.go | 8 + .../runtime/components/apps/native.go | 13 + .../runtime/components/bookinfo/configs.go | 27 +- .../runtime/components/bookinfo/kube.go 
| 34 +- .../environment/kube/environment.go | 46 +- .../components/environment/kube/settings.go | 8 + .../runtime/components/galley/client.go | 23 +- .../runtime/components/galley/comparison.go | 6 +- .../runtime/components/galley/native.go | 17 +- .../runtime/components/prometheus/kube.go | 4 +- pkg/test/kube/accessor.go | 15 +- prow/e2e-suite.sh | 3 + prow/e2e_pilotv2_auth_sds.sh | 32 + prow/istio-integ-k8s-tests.sh | 4 +- prow/istio-integ-local-tests.sh | 4 +- samples/httpbin/httpbin-nodeport.yaml | 50 + security/cmd/istio_ca/main.go | 14 +- security/pkg/caclient/client.go | 13 +- security/pkg/k8s/controller/workloadsecret.go | 16 +- .../pkg/k8s/controller/workloadsecret_test.go | 8 +- security/pkg/k8s/tokenreview/k8sauthn.go | 22 +- security/pkg/nodeagent/cache/secretcache.go | 49 +- .../pkg/nodeagent/cache/secretcache_test.go | 9 +- .../caclient/providers/google/client.go | 11 +- .../caclient/providers/vault/client.go | 4 +- .../caclient/providers/vault/client_test.go | 2 +- security/pkg/nodeagent/sds/server.go | 12 +- .../pkg/nodeagent/secrets/secretfileserver.go | 58 - .../pkg/nodeagent/secrets/secretserver.go | 58 - .../nodeagent/secrets/secretserver_test.go | 132 - security/pkg/nodeagent/secrets/server.go | 220 - security/pkg/nodeagent/secrets/server_test.go | 152 - security/pkg/pki/util/san.go | 18 +- security/pkg/pki/util/san_test.go | 42 - security/pkg/platform/gcp.go | 5 +- security/pkg/platform/onprem.go | 4 +- security/pkg/registry/kube/serviceaccount.go | 8 +- .../pkg/registry/kube/serviceaccount_test.go | 15 +- .../pkg/server/ca/authenticate/kube_jwt.go | 21 +- .../server/ca/authenticate/kube_jwt_test.go | 38 +- security/pkg/server/ca/server.go | 4 +- security/pkg/server/ca/server_test.go | 2 +- .../tests/integration/kubernetes_utils.go | 3 +- tests/e2e/framework/kubernetes.go | 85 +- tests/e2e/framework/multicluster.go | 2 +- tests/e2e/tests/dashboard/dashboard_test.go | 4 + tests/e2e/tests/mixer/mixer_test.go | 243 +- 
.../tests/pilot/cloudfoundry/copilot_test.go | 142 +- tests/e2e/tests/pilot/mcp_test.go | 82 +- .../tests/pilot/mesh_config_verify_test.go | 2 +- tests/e2e/tests/pilot/mock/mcp/server.go | 40 +- .../pilot/performance/serviceentry_test.go | 56 +- tests/e2e/tests/pilot/pilot_test.go | 17 +- tests/e2e/tests/pilot/pod_churn_test.go | 384 + tests/e2e/tests/pilot/routing_test.go | 130 + tests/e2e/tests/pilot/testdata/app.yaml.tmpl | 14 +- .../service-entry-http-scope-private.yaml | 19 + .../service-entry-http-scope-public.yaml | 19 + .../service-entry-tcp-scope-public.yaml | 21 + .../v1alpha3/sidecar-scope-ns1-ns2.yaml | 24 + .../virtualservice-http-scope-private.yaml | 16 + .../virtualservice-http-scope-public.yaml | 20 + tests/integration2/README.md | 2 +- .../integration2/citadel/citadel_test_util.go | 4 +- .../citadel/secret_creation_test.go | 6 +- .../integration2/examples/basic/basic_test.go | 6 +- .../galley/conversion/conversion_test.go | 10 +- .../galley/conversion/main_test.go | 4 +- .../galley/validation/main_test.go | 4 +- tests/integration2/mixer/check_test.go | 2 + tests/integration2/mixer/main_test.go | 4 +- tests/integration2/mixer/scenarios_test.go | 115 +- .../pilot/security/authn_permissive_test.go | 154 + tests/integration2/qualification/main_test.go | 4 +- tests/integration2/tests.mk | 8 +- tests/istio.mk | 12 +- tests/testdata/bootstrap_tmpl.json | 2 +- tests/testdata/config/byon.yaml | 4 + .../testdata/config/destination-rule-all.yaml | 2 + .../config/destination-rule-fqdn.yaml | 1 + .../config/destination-rule-passthrough.yaml | 1 + .../testdata/config/destination-rule-ssl.yaml | 13 +- tests/testdata/config/egressgateway.yaml | 1 + tests/testdata/config/external_services.yaml | 5 + tests/testdata/config/gateway-all.yaml | 1 + tests/testdata/config/gateway-tcp-a.yaml | 1 + tests/testdata/config/ingress.yaml | 2 + tests/testdata/config/ingressgateway.yaml | 1 + tests/testdata/config/none.yaml | 279 + tests/testdata/config/rule-content-route.yaml | 1 
+ .../rule-default-route-append-headers.yaml | 2 + .../rule-default-route-cors-policy.yaml | 2 + tests/testdata/config/rule-default-route.yaml | 3 + .../testdata/config/rule-fault-injection.yaml | 3 + .../testdata/config/rule-ingressgateway.yaml | 1 + .../config/rule-redirect-injection.yaml | 2 + tests/testdata/config/rule-regex-route.yaml | 3 + .../config/rule-route-via-egressgateway.yaml | 1 + .../testdata/config/rule-websocket-route.yaml | 4 + .../testdata/config/rule-weighted-route.yaml | 3 + tests/testdata/config/se-example-gw.yaml | 101 + tests/testdata/config/se-example.yaml | 230 + .../testdata/config/virtual-service-all.yaml | 1 + tests/testdata/local/etc/certs/cert-chain.pem | 19 + tests/testdata/local/etc/certs/key.pem | 27 + tests/testdata/local/etc/certs/root-cert.pem | 18 + tests/util/helm_utils.go | 35 + tests/util/pilot_server.go | 2 + tools/deb/envoy_bootstrap_v2.json | 3 +- tools/istio-docker.mk | 12 + .../private/protocol/json/jsonutil/build.go | 286 + .../protocol/json/jsonutil/unmarshal.go | 226 + .../private/protocol/jsonrpc/jsonrpc.go | 111 + .../aws-sdk-go/service/cloudwatchlogs/api.go | 7456 +++++++++++++++++ .../cloudwatchlogsiface/interface.go | 217 + .../aws-sdk-go/service/cloudwatchlogs/doc.go | 57 + .../service/cloudwatchlogs/errors.go | 60 + .../service/cloudwatchlogs/service.go | 95 + vendor/github.com/google/go-cmp/LICENSE | 27 + .../github.com/google/go-cmp/cmp/compare.go | 553 ++ .../go-cmp/cmp/internal/diff/debug_disable.go | 17 + .../go-cmp/cmp/internal/diff/debug_enable.go | 122 + .../google/go-cmp/cmp/internal/diff/diff.go | 363 + .../go-cmp/cmp/internal/function/func.go | 49 + .../go-cmp/cmp/internal/value/format.go | 277 + .../google/go-cmp/cmp/internal/value/sort.go | 111 + .../github.com/google/go-cmp/cmp/options.go | 453 + vendor/github.com/google/go-cmp/cmp/path.go | 309 + .../github.com/google/go-cmp/cmp/reporter.go | 53 + .../google/go-cmp/cmp/unsafe_panic.go | 15 + .../google/go-cmp/cmp/unsafe_reflect.go | 23 + 
.../mocktracer/mocklogrecord.go | 105 + .../opentracing-go/mocktracer/mockspan.go | 282 + .../opentracing-go/mocktracer/mocktracer.go | 105 + .../opentracing-go/mocktracer/propagation.go | 120 + vendor/istio.io/api/.gitattributes | 2 + .../mcp/v1alpha1/istio.mcp.v1alpha1.pb.html | 395 +- vendor/istio.io/api/mcp/v1alpha1/mcp.pb.go | 1859 +++- vendor/istio.io/api/mcp/v1alpha1/mcp.proto | 142 +- .../istio.io/api/mcp/v1alpha1/metadata.pb.go | 400 +- .../istio.io/api/mcp/v1alpha1/metadata.proto | 49 +- .../{envelope.pb.go => resource.pb.go} | 160 +- .../{envelope.proto => resource.proto} | 10 +- vendor/istio.io/api/policy/v1beta1/cfg.pb.go | 5 + vendor/istio.io/api/policy/v1beta1/cfg.proto | 5 + .../v1beta1/istio.policy.v1beta1.pb.html | 4 + vendor/istio.io/api/proto.lock | 303 +- vendor/istio.io/api/prototool.yaml | 3 + .../python/istio_api/mcp/v1alpha1/mcp_pb2.py | 369 +- .../istio_api/mcp/v1alpha1/metadata_pb2.py | 123 +- .../{envelope_pb2.py => resource_pb2.py} | 34 +- .../rbac/v1alpha1/istio.rbac.v1alpha1.pb.html | 36 +- vendor/istio.io/api/rbac/v1alpha1/rbac.pb.go | 1780 +++- vendor/istio.io/api/rbac/v1alpha1/rbac.proto | 148 +- 449 files changed, 35721 insertions(+), 7202 deletions(-) create mode 100644 docker/Dockerfile.bionic_debug create mode 100644 docker/Dockerfile.deb_debug delete mode 100755 downloadIstio.sh delete mode 100644 galley/cmd/shared/shared.go delete mode 100644 galley/pkg/kube/converter/legacy/legacymixer.pb.go delete mode 100644 galley/pkg/kube/converter/legacy/legacymixer.proto rename galley/pkg/{ => source}/fs/fssource.go (92%) rename galley/pkg/{ => source}/fs/fssource_test.go (91%) rename galley/pkg/{ => source}/fs/fsutilities.go (100%) rename galley/pkg/{kube => source/kube/client}/interfaces.go (99%) rename galley/pkg/{kube => source/kube/client}/interfaces_test.go (98%) rename galley/pkg/{kube => source/kube/dynamic}/converter/config.go (100%) rename galley/pkg/{kube => source/kube/dynamic}/converter/converter.go (89%) rename 
galley/pkg/{kube => source/kube/dynamic}/converter/converter_test.go (63%) rename galley/pkg/{kube => source/kube/dynamic}/converter/proto.go (100%) rename galley/pkg/{kube => source/kube/dynamic}/converter/proto_test.go (85%) rename galley/pkg/{kube/source => source/kube/dynamic}/listener.go (76%) rename galley/pkg/{kube/source => source/kube/dynamic}/listener_test.go (97%) rename galley/pkg/{kube/source => source/kube/dynamic}/source.go (64%) rename galley/pkg/{kube/source => source/kube/dynamic}/source_test.go (88%) rename galley/pkg/{kube/converter/legacy/gen.go => source/kube/log/scope.go} (71%) rename galley/pkg/{kube/source/init.go => source/kube/schema/check/check.go} (77%) rename galley/pkg/{kube/source/init_test.go => source/kube/schema/check/check_test.go} (99%) rename galley/pkg/{kube/schema.go => source/kube/schema/instance.go} (66%) rename galley/pkg/{kube/schema_test.go => source/kube/schema/instance_test.go} (96%) rename galley/pkg/{kube => source/kube/schema}/resourcespec.go (96%) rename galley/pkg/{kube => source/kube/schema}/resourcespec_test.go (99%) rename galley/pkg/{kube/source/monitoring.go => source/kube/stats/stats.go} (86%) delete mode 100644 galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_0.skip create mode 100644 galley/pkg/testing/testdata/dataset/v1/node.yaml.skip create mode 100644 galley/pkg/testing/testdata/dataset/v1/node_expected.json create mode 100644 galley/pkg/testing/testdata/dataset/v1/pod.yaml.skip create mode 100644 galley/pkg/testing/testdata/dataset/v1/pod_expected.json create mode 100644 install/kubernetes/helm/istio/values-istio-example-sds-vault.yaml create mode 100644 install/kubernetes/helm/istio/values-istio-googleca.yaml delete mode 100644 install/kubernetes/helm/subcharts/tracing/templates/ingress-jaeger.yaml create mode 100644 mixer/adapter/cloudwatch/logHandler.go create mode 100644 mixer/adapter/cloudwatch/logHandler_test.go rename mixer/adapter/cloudwatch/{handler.go => metricHandler.go} 
(100%) rename mixer/adapter/cloudwatch/{handler_test.go => metricHandler_test.go} (88%) create mode 100644 mixer/pkg/server/interceptor.go create mode 100644 mixer/pkg/server/interceptor_test.go create mode 100644 mixer/test/client/gateway/gateway_test.go create mode 100644 pilot/pkg/model/sidecar.go create mode 100644 pilot/pkg/networking/core/v1alpha3/route/retry/retry.go create mode 100644 pilot/pkg/networking/core/v1alpha3/route/retry/retry_test.go delete mode 100644 pilot/pkg/networking/plugin/mixer/mixer_test.go create mode 100644 pilot/pkg/proxy/envoy/v2/testdata/none_cds.json create mode 100644 pilot/pkg/proxy/envoy/v2/testdata/none_ecds.json create mode 100644 pilot/pkg/proxy/envoy/v2/testdata/none_eds.json create mode 100644 pilot/pkg/proxy/envoy/v2/testdata/none_lds_http.json create mode 100644 pilot/pkg/proxy/envoy/v2/testdata/none_lds_tcp.json create mode 100644 pilot/pkg/proxy/envoy/v2/testdata/none_rds.json create mode 100644 pkg/keepalive/options_test.go delete mode 100644 pkg/mcp/client/monitoring.go create mode 100644 pkg/mcp/env/env.go create mode 100644 pkg/mcp/env/env_test.go create mode 100644 pkg/mcp/internal/test/auth_checker.go create mode 100644 pkg/mcp/internal/test/types.go rename pkg/mcp/{server => monitoring}/monitoring.go (59%) create mode 100644 pkg/mcp/sink/journal.go create mode 100644 pkg/mcp/sink/journal_test.go create mode 100644 pkg/mcp/sink/sink.go create mode 100644 pkg/mcp/sink/sink_test.go create mode 100644 pkg/mcp/source/source.go create mode 100644 pkg/spiffe/spiffe.go create mode 100644 pkg/spiffe/spiffe_test.go create mode 100644 pkg/test/framework/api/components/bookinfo.go create mode 100755 prow/e2e_pilotv2_auth_sds.sh create mode 100644 samples/httpbin/httpbin-nodeport.yaml delete mode 100644 security/pkg/nodeagent/secrets/secretfileserver.go delete mode 100644 security/pkg/nodeagent/secrets/secretserver.go delete mode 100644 security/pkg/nodeagent/secrets/secretserver_test.go delete mode 100644 
security/pkg/nodeagent/secrets/server.go delete mode 100644 security/pkg/nodeagent/secrets/server_test.go create mode 100644 tests/e2e/tests/pilot/pod_churn_test.go create mode 100644 tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-http-scope-private.yaml create mode 100644 tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-http-scope-public.yaml create mode 100644 tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-tcp-scope-public.yaml create mode 100644 tests/e2e/tests/pilot/testdata/networking/v1alpha3/sidecar-scope-ns1-ns2.yaml create mode 100644 tests/e2e/tests/pilot/testdata/networking/v1alpha3/virtualservice-http-scope-private.yaml create mode 100644 tests/e2e/tests/pilot/testdata/networking/v1alpha3/virtualservice-http-scope-public.yaml create mode 100644 tests/integration2/pilot/security/authn_permissive_test.go create mode 100644 tests/testdata/config/none.yaml create mode 100644 tests/testdata/config/se-example-gw.yaml create mode 100644 tests/testdata/config/se-example.yaml create mode 100644 tests/testdata/local/etc/certs/cert-chain.pem create mode 100644 tests/testdata/local/etc/certs/key.pem create mode 100644 tests/testdata/local/etc/certs/root-cert.pem create mode 100644 vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go create mode 100644 vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go create mode 100644 vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go create mode 100644 vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/api.go create mode 100644 vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/cloudwatchlogsiface/interface.go create mode 100644 vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/doc.go create mode 100644 vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/errors.go create mode 100644 vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/service.go create mode 100644 
vendor/github.com/google/go-cmp/LICENSE create mode 100644 vendor/github.com/google/go-cmp/cmp/compare.go create mode 100644 vendor/github.com/google/go-cmp/cmp/internal/diff/debug_disable.go create mode 100644 vendor/github.com/google/go-cmp/cmp/internal/diff/debug_enable.go create mode 100644 vendor/github.com/google/go-cmp/cmp/internal/diff/diff.go create mode 100644 vendor/github.com/google/go-cmp/cmp/internal/function/func.go create mode 100644 vendor/github.com/google/go-cmp/cmp/internal/value/format.go create mode 100644 vendor/github.com/google/go-cmp/cmp/internal/value/sort.go create mode 100644 vendor/github.com/google/go-cmp/cmp/options.go create mode 100644 vendor/github.com/google/go-cmp/cmp/path.go create mode 100644 vendor/github.com/google/go-cmp/cmp/reporter.go create mode 100644 vendor/github.com/google/go-cmp/cmp/unsafe_panic.go create mode 100644 vendor/github.com/google/go-cmp/cmp/unsafe_reflect.go create mode 100644 vendor/github.com/opentracing/opentracing-go/mocktracer/mocklogrecord.go create mode 100644 vendor/github.com/opentracing/opentracing-go/mocktracer/mockspan.go create mode 100644 vendor/github.com/opentracing/opentracing-go/mocktracer/mocktracer.go create mode 100644 vendor/github.com/opentracing/opentracing-go/mocktracer/propagation.go create mode 100644 vendor/istio.io/api/.gitattributes rename vendor/istio.io/api/mcp/v1alpha1/{envelope.pb.go => resource.pb.go} (56%) rename vendor/istio.io/api/mcp/v1alpha1/{envelope.proto => resource.proto} (79%) rename vendor/istio.io/api/python/istio_api/mcp/v1alpha1/{envelope_pb2.py => resource_pb2.py} (65%) diff --git a/Gopkg.lock b/Gopkg.lock index a9dd7bea8190..73c9c1cd8caf 100644 --- a/Gopkg.lock +++ b/Gopkg.lock @@ -156,7 +156,7 @@ version = "0.10.0" [[projects]] - digest = "1:791e15db2252f9b5a4fb18b3b77ea0cd9c9cf2978cbec0c4183fceb191945fc3" + digest = "1:07f4e60d8a08ca0759227bc0a7a12867bb3461f2c4ed376b5160153552199b4f" name = "github.com/aws/aws-sdk-go" packages = [ "aws", @@ -180,12 
+180,16 @@ "internal/sdkrand", "internal/shareddefaults", "private/protocol", + "private/protocol/json/jsonutil", + "private/protocol/jsonrpc", "private/protocol/query", "private/protocol/query/queryutil", "private/protocol/rest", "private/protocol/xml/xmlutil", "service/cloudwatch", "service/cloudwatch/cloudwatchiface", + "service/cloudwatchlogs", + "service/cloudwatchlogs/cloudwatchlogsiface", "service/sts", ] pruneopts = "NUT" @@ -538,6 +542,19 @@ pruneopts = "NUT" revision = "4030bb1f1f0c35b30ca7009e9ebd06849dd45306" +[[projects]] + digest = "1:2e3c336fc7fde5c984d2841455a658a6d626450b1754a854b3b32e7a8f49a07a" + name = "github.com/google/go-cmp" + packages = [ + "cmp", + "cmp/internal/diff", + "cmp/internal/function", + "cmp/internal/value", + ] + pruneopts = "NUT" + revision = "3af367b6b30c263d47e8895973edcca9a49cf029" + version = "v0.2.0" + [[projects]] digest = "1:51bee9f1987dcdb9f9a1b4c20745d78f6bf6f5f14ad4e64ca883eb64df4c0045" name = "github.com/google/go-github" @@ -930,12 +947,13 @@ revision = "dca24d1902afee9d2be02425a2d7c9f910063631" [[projects]] - digest = "1:7da29c22bcc5c2ffb308324377dc00b5084650348c2799e573ed226d8cc9faf0" + digest = "1:6e36c3eab25478d363e7eb669f253d9f9a620e640b0e1d86d3e909cbb1a56ab6" name = "github.com/opentracing/opentracing-go" packages = [ ".", "ext", "log", + "mocktracer", ] pruneopts = "NUT" revision = "1949ddbfd147afd4d964a9f00b24eb291e0e7c38" @@ -1607,7 +1625,7 @@ [[projects]] branch = "master" - digest = "1:75acdd886eb4e137c6bac421d84e6b718a0554a01000cb7249059a38293f6d79" + digest = "1:5c0dc1aa0fd13c711ae67f55a2dd155ff3ca5b57ba6e4a914cba3f402769e867" name = "istio.io/api" packages = [ "authentication/v1alpha1", @@ -1623,7 +1641,7 @@ "rbac/v1alpha1", ] pruneopts = "T" - revision = "056eb85d96f09441775d79283c149d93fcbd0982" + revision = "40a08a31eaf1c0ad0d3261c9f604a55873d99d2f" [[projects]] branch = "release-1.11" @@ -1978,6 +1996,8 @@ "github.com/aws/aws-sdk-go/awstesting/unit", "github.com/aws/aws-sdk-go/service/cloudwatch", 
"github.com/aws/aws-sdk-go/service/cloudwatch/cloudwatchiface", + "github.com/aws/aws-sdk-go/service/cloudwatchlogs", + "github.com/aws/aws-sdk-go/service/cloudwatchlogs/cloudwatchlogsiface", "github.com/cactus/go-statsd-client/statsd", "github.com/cactus/go-statsd-client/statsd/statsdtest", "github.com/cenkalti/backoff", @@ -2036,6 +2056,7 @@ "github.com/golang/protobuf/ptypes/timestamp", "github.com/golang/protobuf/ptypes/wrappers", "github.com/golang/sync/errgroup", + "github.com/google/go-cmp/cmp", "github.com/google/go-github/github", "github.com/google/uuid", "github.com/googleapis/gax-go", @@ -2058,6 +2079,7 @@ "github.com/opentracing/opentracing-go", "github.com/opentracing/opentracing-go/ext", "github.com/opentracing/opentracing-go/log", + "github.com/opentracing/opentracing-go/mocktracer", "github.com/pborman/uuid", "github.com/pkg/errors", "github.com/pmezard/go-difflib/difflib", diff --git a/Gopkg.toml b/Gopkg.toml index 3169feca4149..01e0ad6bdb1e 100644 --- a/Gopkg.toml +++ b/Gopkg.toml @@ -362,3 +362,7 @@ ignored = [ [[override]] name = "github.com/imdario/mergo" revision = "6633656539c1639d9d78127b7d47c622b5d7b6dc" + +[[constraint]] + name = "github.com/google/go-cmp" + version = "0.2.0" diff --git a/Makefile b/Makefile index cab6ab768054..f299fbdc1851 100644 --- a/Makefile +++ b/Makefile @@ -450,7 +450,7 @@ test: | $(JUNIT_REPORT) $(MAKE) --keep-going $(TEST_OBJ) \ 2>&1 | tee >($(JUNIT_REPORT) > $(JUNIT_UNIT_TEST_XML)) -GOTEST_PARALLEL ?= '-test.parallel=4' +GOTEST_PARALLEL ?= '-test.parallel=2' # This is passed to mixer and other tests to limit how many builds are used. 
 # In CircleCI, set in "Project Settings" -> "Environment variables" as "-p 2" if you don't have xlarge machines
 GOTEST_P ?=
@@ -772,6 +772,19 @@ generate_e2e_test_yaml: $(HELM) $(HOME)/.helm helm-repo-add
 		--values install/kubernetes/helm/istio/values.yaml \
 		install/kubernetes/helm/istio >> install/kubernetes/istio-auth-non-mcp.yaml
+	cat install/kubernetes/namespace.yaml > install/kubernetes/istio-auth-sds.yaml
+	cat install/kubernetes/helm/istio-init/files/crd-* >> install/kubernetes/istio-auth-sds.yaml
+	$(HELM) template --set global.tag=${TAG} \
+		--name=istio \
+		--namespace=istio-system \
+		--set global.hub=${HUB} \
+		--set global.mtls.enabled=true \
+		--set global.proxy.enableCoreDump=true \
+		--set istio_cni.enabled=${ENABLE_ISTIO_CNI} \
+		${EXTRA_HELM_SETTINGS} \
+		--values install/kubernetes/helm/istio/values-istio-sds-auth.yaml \
+		install/kubernetes/helm/istio >> install/kubernetes/istio-auth-sds.yaml
+
 # files generated by the default invocation of updateVersion.sh
 FILES_TO_CLEAN+=install/consul/istio.yaml \
 	install/kubernetes/addons/grafana.yaml \
diff --git a/bin/linters.sh b/bin/linters.sh
index f1556abd6da5..08bfd0ed86a0 100755
--- a/bin/linters.sh
+++ b/bin/linters.sh
@@ -142,4 +142,3 @@ install_gometalinter
 run_gometalinter
 run_helm_lint
 check_grafana_dashboards
-check_licenses
diff --git a/bin/testEnvRootMinikube.sh b/bin/testEnvRootMinikube.sh
index 0d9a9a8dc7bf..9c5e5ead8a26 100755
--- a/bin/testEnvRootMinikube.sh
+++ b/bin/testEnvRootMinikube.sh
@@ -61,7 +61,31 @@ function startMinikubeNone() {
     export MINIKUBE_WANTREPORTERRORPROMPT=false
     export MINIKUBE_HOME=$HOME
     export CHANGE_MINIKUBE_NONE_USER=true
-    echo "IP forwarding setting: $(cat /proc/sys/net/ipv4/ip_forward)"
+
+    # Troubleshoot problem with Docker build on some CircleCI machines
+    if [ -f /proc/sys/net/ipv4/ip_forward ]; then
+      echo "IP forwarding setting: $(cat /proc/sys/net/ipv4/ip_forward)"
+      echo "My hostname is:"
+      hostname
+      echo "My distro is:"
+      cat /etc/*-release
+      echo "Contents of /etc/sysctl.d/"
+      ls -l /etc/sysctl.d/ || true
+      echo "Contents of /etc/sysctl.conf"
+      grep ip_forward /etc/sysctl.conf
+      echo "Config files setting ip_forward"
+      find /etc/sysctl.d/ -type f -exec grep ip_forward \{\} \; -print
+      if [ "$(cat /proc/sys/net/ipv4/ip_forward)" -eq 0 ]; then
+        whoami
+        echo "Cannot build images without IPv4 forwarding, attempting to turn on forwarding"
+        sudo sysctl -w net.ipv4.ip_forward=1
+        if [ "$(cat /proc/sys/net/ipv4/ip_forward)" -eq 0 ]; then
+          echo "Cannot build images without IPv4 forwarding"
+          exit 1
+        fi
+      fi
+    fi
+
     sudo -E minikube start \
       --kubernetes-version=v1.9.0 \
       --vm-driver=none \
diff --git a/codecov.skip b/codecov.skip
index 8508abce6078..097146ae3970 100644
--- a/codecov.skip
+++ b/codecov.skip
@@ -14,6 +14,7 @@ istio.io/istio/tests/codecov
 istio.io/istio/tests/e2e
 istio.io/istio/tests/integration2/examples
 istio.io/istio/tests/integration2/qualification
+istio.io/istio/tests/integration2
 istio.io/istio/tests/integration_old
 istio.io/istio/tests/local
 istio.io/istio/tests/util
diff --git a/codecov.threshold b/codecov.threshold
index 5bace5117349..2747860d48e6 100644
--- a/codecov.threshold
+++ b/codecov.threshold
@@ -25,4 +25,12 @@ istio.io/istio/pkg/mcp/creds/watcher.go=100
 istio.io/istio/pkg/test=100
 istio.io/istio/security/proto=100
 istio.io/istio/security/pkg/nodeagent=15
+istio.io/istio/galley/pkg/testing=100
+# Temporary until integ tests are restored
+istio.io/istio/pilot/pkg/networking/plugin=50
+istio.io/istio/galley/pkg/runtime=30
+istio.io/istio/galley/pkg/kube/converter/legacy/legacymixer.pb.go=10
+istio.io/istio/galley/pkg/meshconfig/cache.go=100
+istio.io/istio/pkg/mcp/server/monitoring.go=50
+istio.io/istio/pilot/pkg/model/authentication.go=20
diff --git a/docker/Dockerfile.bionic_debug b/docker/Dockerfile.bionic_debug
new file mode 100644
index 000000000000..a633c49e3fe4
--- /dev/null
+++ b/docker/Dockerfile.bionic_debug
@@ -0,0 +1,36 @@
+FROM ubuntu:bionic
+# Base image for debug builds.
+# Built manually uploaded as "istionightly/base_debug"
+
+# Do not add more stuff to this list that isn't small or critically useful.
+# If you occasionally need something on the container do
+# sudo apt-get update && apt-get whichever
+RUN apt-get update && \
+  apt-get install --no-install-recommends -y \
+  curl \
+  iptables \
+  iproute2 \
+  iputils-ping \
+  knot-dnsutils \
+  netcat \
+  tcpdump \
+  net-tools \
+  lsof \
+  sudo && \
+  apt-get clean -y && \
+  rm -rf /var/cache/debconf/* /var/lib/apt/lists/* \
+    /var/log/* /tmp/* /var/tmp/*
+
+# Required:
+# iptables (1.8M) (required in init, for debugging in the other cases)
+# iproute2 (1.5M) (required for init)
+
+# Debug:
+# curl (11M)
+# tcpdump (5M)
+# netcat (0.2M)
+# net-tools (0.7M): netstat
+# lsof (0.5M): for debugging open socket, file descriptors
+# knot-dnsutils (4.5M): dig/nslookup
+
+# Alternative: dnsutils(44M) for dig/nslookup
diff --git a/docker/Dockerfile.deb_debug b/docker/Dockerfile.deb_debug
new file mode 100644
index 000000000000..d22fbcd49f8f
--- /dev/null
+++ b/docker/Dockerfile.deb_debug
@@ -0,0 +1,37 @@
+FROM debian:9-slim
+
+# Base image for debug builds - base is 22M
+
+# Built manually uploaded as "istionightly/base_debug"
+
+# Do not add more stuff to this list that isn't small or critically useful.
+# If you occasionally need something on the container do +# sudo apt-get update && apt-get whichever +RUN apt-get update && \ + apt-get install --no-install-recommends -y \ + curl \ + iptables \ + iproute2 \ + iputils-ping \ + knot-dnsutils \ + netcat \ + tcpdump \ + net-tools \ + lsof \ + sudo && apt-get upgrade -y && \ + apt-get clean -y && \ + rm -rf /var/cache/debconf/* /var/lib/apt/lists/* \ + /var/log/* /tmp/* /var/tmp/* + +# Required: +# iptables (1.8M) (required in init, for debugging in the other cases) +# iproute2 (1.5M) (required for init) + +# Debug: +# curl (11M) +# tcpdump (5M) +# netcat (0.2M) +# net-tools (0.7M): netstat +# lsof (0.5M): for debugging open socket, file descriptors +# knot-dnsutils (4.5M): dig/nslookup + +# Alternative: dnsutils(44M) for dig/nslookup diff --git a/docker/Dockerfile.xenial_debug b/docker/Dockerfile.xenial_debug index 072ac798da54..2c751b57f040 100644 --- a/docker/Dockerfile.xenial_debug +++ b/docker/Dockerfile.xenial_debug @@ -10,16 +10,15 @@ ENV DEBIAN_FRONTEND=noninteractive RUN apt-get update && \ apt-get install --no-install-recommends -y \ + ca-certificates \ curl \ iptables \ iproute2 \ iputils-ping \ - dnsutils \ + knot-dnsutils \ netcat \ tcpdump \ net-tools \ - libc6-dbg gdb \ - elvis-tiny \ lsof \ linux-tools-generic \ sudo && apt-get upgrade -y && \ diff --git a/downloadIstio.sh b/downloadIstio.sh deleted file mode 100755 index 45a2e9c3a48e..000000000000 --- a/downloadIstio.sh +++ /dev/null @@ -1,35 +0,0 @@ -#!/bin/bash - -# Copyright 2018 Istio Authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -# DO NOT UPDATE THIS VERSION OR SCRIPT LIGHTLY - THIS IS THE "STABLE" VERSION -ISTIO_VERSION="1.0.2" - -NAME="istio-$ISTIO_VERSION" -OS="$(uname)" -if [ "x${OS}" = "xDarwin" ] ; then - OSEXT="osx" -else - # TODO we should check more/complain if not likely to work, etc... - OSEXT="linux" -fi -URL="https://github.com/istio/istio/releases/download/${ISTIO_VERSION}/istio-${ISTIO_VERSION}-${OSEXT}.tar.gz" -echo "Downloading $NAME from $URL ..." -curl -L "$URL" | tar xz -echo "Downloaded into $NAME:" -ls $NAME -BINDIR="$(cd $NAME/bin && pwd)" -echo "Add $BINDIR to your path; e.g copy paste in your shell and/or ~/.profile:" -echo "export PATH=\"\$PATH:$BINDIR\"" diff --git a/galley/cmd/galley/cmd/probe.go b/galley/cmd/galley/cmd/probe.go index 7668e2470821..5e60a81a80dd 100644 --- a/galley/cmd/galley/cmd/probe.go +++ b/galley/cmd/galley/cmd/probe.go @@ -15,13 +15,14 @@ package cmd import ( + "fmt" + "github.com/spf13/cobra" - "istio.io/istio/galley/cmd/shared" "istio.io/istio/pkg/probe" ) -func probeCmd(printf, fatalf shared.FormatFn) *cobra.Command { +func probeCmd() *cobra.Command { var ( probeOptions probe.Options ) @@ -31,12 +32,13 @@ func probeCmd(printf, fatalf shared.FormatFn) *cobra.Command { Short: "Check the liveness or readiness of a locally-running server", Run: func(cmd *cobra.Command, _ []string) { if !probeOptions.IsValid() { - fatalf("some options are not valid") + fmt.Fprintf(cmd.OutOrStdout(), "some options are not valid") + return } if err := probe.NewFileClient(&probeOptions).GetStatus(); err != nil { - fatalf("fail on inspecting path %s: %v", probeOptions.Path, err) + fmt.Fprintf(cmd.OutOrStdout(), "fail on inspecting path %s: %v", probeOptions.Path, err) } - printf("OK") + fmt.Fprintf(cmd.OutOrStdout(), "OK") }, } probeCmd.PersistentFlags().StringVar(&probeOptions.Path, "probe-path", "", diff --git a/galley/cmd/galley/cmd/root.go 
b/galley/cmd/galley/cmd/root.go
index 1b7680dfd14a..17aaacf5409c 100644
--- a/galley/cmd/galley/cmd/root.go
+++ b/galley/cmd/galley/cmd/root.go
@@ -22,7 +22,6 @@ import (
 	"github.com/spf13/cobra"
 	"github.com/spf13/cobra/doc"
 
-	"istio.io/istio/galley/cmd/shared"
 	"istio.io/istio/galley/pkg/crd/validation"
 	"istio.io/istio/galley/pkg/server"
 	istiocmd "istio.io/istio/pkg/cmd"
@@ -33,16 +32,13 @@ import (
 )
 
 var (
-	flags = struct {
-		kubeConfig   string
-		resyncPeriod time.Duration
-	}{}
-
+	resyncPeriod   time.Duration
+	kubeConfig     string
 	loggingOptions = log.DefaultOptions()
 )
 
 // GetRootCmd returns the root of the cobra command-tree.
-func GetRootCmd(args []string, printf, fatalf shared.FormatFn) *cobra.Command {
+func GetRootCmd(args []string) *cobra.Command {
 	var (
 		serverArgs = server.DefaultArgs()
@@ -65,35 +61,34 @@ func GetRootCmd(args []string, printf, fatalf shared.FormatFn) *cobra.Command {
 			if len(args) > 0 {
 				return fmt.Errorf("%q is an invalid argument", args[0])
 			}
-			return nil
+			err := log.Configure(loggingOptions)
+			return err
 		},
 		Run: func(cmd *cobra.Command, args []string) {
-			serverArgs.KubeConfig = flags.kubeConfig
-			serverArgs.ResyncPeriod = flags.resyncPeriod
+			serverArgs.KubeConfig = kubeConfig
+			serverArgs.ResyncPeriod = resyncPeriod
 			serverArgs.CredentialOptions.CACertificateFile = validationArgs.CACertFile
 			serverArgs.CredentialOptions.KeyFile = validationArgs.KeyFile
 			serverArgs.CredentialOptions.CertificateFile = validationArgs.CertFile
-			serverArgs.LoggingOptions = loggingOptions
 			if livenessProbeOptions.IsValid() {
 				livenessProbeController = probe.NewFileController(&livenessProbeOptions)
 			}
 			if readinessProbeOptions.IsValid() {
 				readinessProbeController = probe.NewFileController(&readinessProbeOptions)
-
 			}
 			if !serverArgs.EnableServer && !validationArgs.EnableValidation {
-				fatalf("Galley must be running under at least one mode: server or validation")
+				log.Fatala("Galley must be running under at least one mode: server or validation")
 			}
 			if err := validationArgs.Validate(); err != nil {
-				fatalf("Invalid validationArgs: %v", err)
+				log.Fatalf("Invalid validationArgs: %v", err)
 			}
 			if serverArgs.EnableServer {
-				go server.RunServer(serverArgs, printf, fatalf, livenessProbeController, readinessProbeController)
+				go server.RunServer(serverArgs, livenessProbeController, readinessProbeController)
 			}
 			if validationArgs.EnableValidation {
-				go validation.RunValidation(validationArgs, printf, fatalf, flags.kubeConfig, livenessProbeController, readinessProbeController)
+				go validation.RunValidation(validationArgs, kubeConfig, livenessProbeController, readinessProbeController)
 			}
 			galleyStop := make(chan struct{})
 			go server.StartSelfMonitoring(galleyStop, monitoringPort)
@@ -104,15 +99,14 @@ func GetRootCmd(args []string, printf, fatalf shared.FormatFn) *cobra.Command {
 			go server.StartProbeCheck(livenessProbeController, readinessProbeController, galleyStop)
 			istiocmd.WaitSignal(galleyStop)
-
 		},
 	}
 	rootCmd.SetArgs(args)
 	rootCmd.PersistentFlags().AddGoFlagSet(flag.CommandLine)
-	rootCmd.PersistentFlags().StringVar(&flags.kubeConfig, "kubeconfig", "",
+	rootCmd.PersistentFlags().StringVar(&kubeConfig, "kubeconfig", "",
 		"Use a Kubernetes configuration file instead of in-cluster configuration")
-	rootCmd.PersistentFlags().DurationVar(&flags.resyncPeriod, "resyncPeriod", 0,
+	rootCmd.PersistentFlags().DurationVar(&resyncPeriod, "resyncPeriod", 0,
 		"Resync period for rescanning Kubernetes resources")
 	rootCmd.PersistentFlags().StringVar(&validationArgs.CertFile, "tlsCertFile", "/etc/certs/cert-chain.pem",
 		"File containing the x509 Certificate for HTTPS.")
@@ -134,7 +128,7 @@ func GetRootCmd(args []string, printf, fatalf shared.FormatFn) *cobra.Command {
 	rootCmd.PersistentFlags().BoolVar(&enableProfiling, "enableProfiling", false,
 		"Enable profiling for Galley")
-	//server config
+	// server config
 	rootCmd.PersistentFlags().StringVarP(&serverArgs.APIAddress, "server-address", "", serverArgs.APIAddress,
 		"Address to use for Galley's gRPC API, e.g. tcp://127.0.0.1:9092 or unix:///path/to/file")
 	rootCmd.PersistentFlags().UintVarP(&serverArgs.MaxReceivedMessageSize, "server-maxReceivedMessageSize", "", serverArgs.MaxReceivedMessageSize,
@@ -157,7 +151,7 @@ func GetRootCmd(args []string, printf, fatalf shared.FormatFn) *cobra.Command {
 	serverArgs.IntrospectionOptions.AttachCobraFlags(rootCmd)
-	//validation config
+	// validation config
 	rootCmd.PersistentFlags().StringVar(&validationArgs.WebhookConfigFile, "validation-webhook-config-file", "",
 		"File that contains k8s validatingwebhookconfiguration yaml. Validation is disabled if file is not specified")
@@ -174,7 +168,7 @@ func GetRootCmd(args []string, printf, fatalf shared.FormatFn) *cobra.Command {
 	rootCmd.PersistentFlags().StringVar(&validationArgs.WebhookName, "webhook-name", "istio-galley",
 		"Name of the k8s validatingwebhookconfiguration")
-	rootCmd.AddCommand(probeCmd(printf, fatalf))
+	rootCmd.AddCommand(probeCmd())
 	rootCmd.AddCommand(version.CobraCommand())
 	rootCmd.AddCommand(collateral.CobraCommand(rootCmd, &doc.GenManHeader{
 		Title: "Istio Galley Server",
diff --git a/galley/cmd/galley/main.go b/galley/cmd/galley/main.go
index 58364a838b32..3f273c7dfe0a 100644
--- a/galley/cmd/galley/main.go
+++ b/galley/cmd/galley/main.go
@@ -18,11 +18,10 @@ import (
 	"os"
 
 	"istio.io/istio/galley/cmd/galley/cmd"
-	"istio.io/istio/galley/cmd/shared"
 )
 
 func main() {
-	rootCmd := cmd.GetRootCmd(os.Args[1:], shared.Printf, shared.Fatalf)
+	rootCmd := cmd.GetRootCmd(os.Args[1:])
 
 	if err := rootCmd.Execute(); err != nil {
 		os.Exit(-1)
diff --git a/galley/cmd/shared/shared.go b/galley/cmd/shared/shared.go
deleted file mode 100644
index 0cef79bc79f4..000000000000
--- a/galley/cmd/shared/shared.go
+++ /dev/null
@@ -1,45 +0,0 @@
-// Copyright 2018 Istio Authors
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-// Package shared contains types and functions that are used across the full
-// set of galley commands.
-package shared
-
-import (
-	"encoding/json"
-	"fmt"
-	"os"
-)
-
-// FormatFn formats the supplied arguments according to the format string
-// provided and executes some set of operations with the result.
-type FormatFn func(format string, args ...interface{})
-
-// Fatalf is a FormatFn that prints the formatted string to os.Stderr and then
-// calls os.Exit().
-var Fatalf = func(format string, args ...interface{}) {
-	_, _ = fmt.Fprintf(os.Stderr, format+"\n", args...) // #nosec
-	os.Exit(-1)
-}
-
-// Printf is a FormatFn that prints the formatted string to os.Stdout.
-var Printf = func(format string, args ...interface{}) {
-	fmt.Printf(format+"\n", args...)
-}
-
-// Serialize the given object in nicely formatted JSON.
-func Serialize(v interface{}) string {
-	b, _ := json.MarshalIndent(v, "", " ")
-	return string(b)
-}
diff --git a/galley/pkg/crd/validation/validation.go b/galley/pkg/crd/validation/validation.go
index da63df60cb49..87a15b13a6ab 100644
--- a/galley/pkg/crd/validation/validation.go
+++ b/galley/pkg/crd/validation/validation.go
@@ -25,7 +25,6 @@ import (
 	multierror "github.com/hashicorp/go-multierror"
 
-	"istio.io/istio/galley/cmd/shared"
 	"istio.io/istio/mixer/adapter"
 	"istio.io/istio/mixer/pkg/config"
 	"istio.io/istio/mixer/pkg/config/store"
@@ -35,6 +34,7 @@ import (
 	"istio.io/istio/pilot/pkg/model"
 	"istio.io/istio/pkg/cmd"
 	"istio.io/istio/pkg/kube"
+	"istio.io/istio/pkg/log"
 	"istio.io/istio/pkg/probe"
 )
@@ -92,20 +92,20 @@ func webhookHTTPSHandlerReady(client httpClient, vc *WebhookParameters) error {
 }
 
 //RunValidation start running Galley validation mode
-func RunValidation(vc *WebhookParameters, printf, fatalf shared.FormatFn, kubeConfig string,
+func RunValidation(vc *WebhookParameters, kubeConfig string,
 	livenessProbeController, readinessProbeController probe.Controller) {
-	printf("Galley validation started with\n%s", vc)
+	log.Infof("Galley validation started with\n%s", vc)
 	mixerValidator := createMixerValidator()
 	clientset, err := kube.CreateClientset(kubeConfig, "")
 	if err != nil {
-		fatalf("could not create k8s clientset: %v", err)
+		log.Fatalf("could not create k8s clientset: %v", err)
 	}
 	vc.MixerValidator = mixerValidator
 	vc.PilotDescriptor = model.IstioConfigTypes
 	vc.Clientset = clientset
 	wh, err := NewWebhook(*vc)
 	if err != nil {
-		fatalf("cannot create validation webhook service: %v", err)
+		log.Fatalf("cannot create validation webhook service: %v", err)
 	}
 	if livenessProbeController != nil {
 		validationLivenessProbe := probe.NewProbe()
diff --git a/galley/pkg/kube/converter/legacy/legacymixer.pb.go b/galley/pkg/kube/converter/legacy/legacymixer.pb.go
deleted file mode 100644
index 547339062f1c..000000000000
---
a/galley/pkg/kube/converter/legacy/legacymixer.pb.go +++ /dev/null @@ -1,92 +0,0 @@ -// Code generated by protoc-gen-gogo. DO NOT EDIT. -// source: legacymixer.proto - -/* -Package legacy is a generated protocol buffer package. - -TODO: Temporarily placing this file in the istio repo. Eventually this should go into istio.io/api. - -It is generated from these files: - legacymixer.proto - -It has these top-level messages: - LegacyMixerResource -*/ -package legacy - -import proto "github.com/gogo/protobuf/proto" -import fmt "fmt" -import math "math" -import google_protobuf "github.com/gogo/protobuf/types" - -// Reference imports to suppress errors if they are not otherwise used. -var _ = proto.Marshal -var _ = fmt.Errorf -var _ = math.Inf - -// This is a compile-time assertion to ensure that this generated file -// is compatible with the proto package it is being compiled against. -// A compilation error at this line likely means your copy of the -// proto package needs to be updated. -const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package - -// LegacyMixerResource is used to multiplex old-style one-per-kind Mixer instances and templates through -// the MCP protocol. -type LegacyMixerResource struct { - // The original name of the resource. - Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` - // The original kind of the resource. - Kind string `protobuf:"bytes,2,opt,name=kind,proto3" json:"kind,omitempty"` - // The original contents of the resource. 
- Contents *google_protobuf.Struct `protobuf:"bytes,3,opt,name=contents" json:"contents,omitempty"` -} - -func (m *LegacyMixerResource) Reset() { *m = LegacyMixerResource{} } -func (m *LegacyMixerResource) String() string { return proto.CompactTextString(m) } -func (*LegacyMixerResource) ProtoMessage() {} -func (*LegacyMixerResource) Descriptor() ([]byte, []int) { return fileDescriptorLegacymixer, []int{0} } - -func (m *LegacyMixerResource) GetName() string { - if m != nil { - return m.Name - } - return "" -} - -func (m *LegacyMixerResource) GetKind() string { - if m != nil { - return m.Kind - } - return "" -} - -func (m *LegacyMixerResource) GetContents() *google_protobuf.Struct { - if m != nil { - return m.Contents - } - return nil -} - -func init() { - proto.RegisterType((*LegacyMixerResource)(nil), "istio.mcp.v1alpha1.extensions.LegacyMixerResource") -} - -func init() { proto.RegisterFile("legacymixer.proto", fileDescriptorLegacymixer) } - -var fileDescriptorLegacymixer = []byte{ - // 209 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x3c, 0x8f, 0x31, 0x4f, 0x04, 0x21, - 0x10, 0x85, 0xb3, 0x6a, 0x8c, 0x62, 0x25, 0x16, 0x6e, 0x8c, 0x26, 0x17, 0xab, 0xab, 0x98, 0xac, - 0xf7, 0x0f, 0xac, 0xb5, 0x59, 0x3b, 0x3b, 0x16, 0x47, 0x24, 0xcb, 0x32, 0x04, 0x86, 0xcb, 0xed, - 0xbf, 0x37, 0x0b, 0xd1, 0xee, 0xe5, 0xe5, 0xf1, 0x7d, 0x8c, 0xb8, 0xf5, 0x68, 0xb5, 0x59, 0x17, - 0x77, 0xc2, 0xa4, 0x62, 0x22, 0x26, 0xf9, 0xe4, 0x32, 0x3b, 0x52, 0x8b, 0x89, 0xea, 0x38, 0x68, - 0x1f, 0x7f, 0xf4, 0xa0, 0xf0, 0xc4, 0x18, 0xb2, 0xa3, 0x90, 0x1f, 0x1e, 0x2d, 0x91, 0xf5, 0x08, - 0x75, 0x3c, 0x95, 0x6f, 0xc8, 0x9c, 0x8a, 0xe1, 0xf6, 0xf8, 0x39, 0x89, 0xbb, 0xb7, 0x4a, 0x7c, - 0xdf, 0x88, 0x23, 0x66, 0x2a, 0xc9, 0xa0, 0x94, 0xe2, 0x22, 0xe8, 0x05, 0xfb, 0x6e, 0xd7, 0xed, - 0xaf, 0xc7, 0x9a, 0xb7, 0x6e, 0x76, 0xe1, 0xab, 0x3f, 0x6b, 0xdd, 0x96, 0xe5, 0x41, 0x5c, 0x19, - 0x0a, 0x8c, 0x81, 0x73, 0x7f, 0xbe, 0xeb, 0xf6, 0x37, 0x2f, 0xf7, 
0xaa, 0xf9, 0xd4, 0x9f, 0x4f, - 0x7d, 0x54, 0xdf, 0xf8, 0x3f, 0x7c, 0x1d, 0x3e, 0xa1, 0x7d, 0xd9, 0x51, 0x0b, 0x60, 0xb5, 0xf7, - 0xb8, 0x42, 0x9c, 0x2d, 0xcc, 0x65, 0x42, 0x30, 0x14, 0x8e, 0x98, 0x18, 0x13, 0xb4, 0x73, 0xa7, - 0xcb, 0x4a, 0x3b, 0xfc, 0x06, 0x00, 0x00, 0xff, 0xff, 0x08, 0xe2, 0x38, 0x61, 0xff, 0x00, 0x00, - 0x00, -} diff --git a/galley/pkg/kube/converter/legacy/legacymixer.proto b/galley/pkg/kube/converter/legacy/legacymixer.proto deleted file mode 100644 index eee69cd6a3a8..000000000000 --- a/galley/pkg/kube/converter/legacy/legacymixer.proto +++ /dev/null @@ -1,36 +0,0 @@ -// Copyright 2018 Istio Authors -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - - -syntax = "proto3"; - -import "google/protobuf/struct.proto"; - -// TODO: Temporarily placing this file in the istio repo. Eventually this should go into istio.io/api. -package istio.mcp.v1alpha1.extensions; - -option go_package="istio.io/istio/galley/pkg/kube/converter/legacy"; - -// LegacyMixerResource is used to multiplex old-style one-per-kind Mixer instances and templates through -// the MCP protocol. -message LegacyMixerResource { - // The original name of the resource. - string name = 1; - - // The original kind of the resource. - string kind = 2; - - // The original contents of the resource. 
- google.protobuf.Struct contents = 3; -} diff --git a/galley/pkg/meshconfig/cache.go b/galley/pkg/meshconfig/cache.go index a82a069094d2..852a2fb65ad3 100644 --- a/galley/pkg/meshconfig/cache.go +++ b/galley/pkg/meshconfig/cache.go @@ -103,6 +103,7 @@ func (c *FsCache) reload() { c.cachedMutex.Lock() defer c.cachedMutex.Unlock() c.cached = cfg + scope.Infof("Reloaded mesh config: \n%s\n", string(by)) } // Close closes this cache. diff --git a/galley/pkg/meshconfig/defaults.go b/galley/pkg/meshconfig/defaults.go index 94ac3186b736..00b4aa41fffb 100644 --- a/galley/pkg/meshconfig/defaults.go +++ b/galley/pkg/meshconfig/defaults.go @@ -36,5 +36,6 @@ func Default() v1alpha1.MeshConfig { EnableTracing: true, AccessLogFile: "/dev/stdout", SdsUdsPath: "", + OutboundTrafficPolicy: &v1alpha1.MeshConfig_OutboundTrafficPolicy{Mode: v1alpha1.MeshConfig_OutboundTrafficPolicy_REGISTRY_ONLY}, } } diff --git a/galley/pkg/metadata/kube/types.go b/galley/pkg/metadata/kube/types.go index 4030e49e6094..0a12e45bd6e0 100644 --- a/galley/pkg/metadata/kube/types.go +++ b/galley/pkg/metadata/kube/types.go @@ -6,575 +6,652 @@ package kube import ( - "istio.io/istio/galley/pkg/kube" - "istio.io/istio/galley/pkg/kube/converter" "istio.io/istio/galley/pkg/metadata" + "istio.io/istio/galley/pkg/source/kube/dynamic/converter" + "istio.io/istio/galley/pkg/source/kube/schema" ) // Types in the schema. 
-var Types *kube.Schema +var Types *schema.Instance func init() { - b := kube.NewSchemaBuilder() + b := schema.NewBuilder() - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "MeshPolicy", ListKind: "MeshPolicyList", Singular: "meshpolicy", Plural: "meshpolicies", Version: "v1alpha1", Group: "authentication.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.authentication.v1alpha1.Policy"), + Target: metadata.Types.Get("istio/authentication/v1alpha1/meshpolicies"), Converter: converter.Get("auth-policy-resource"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "Policy", ListKind: "PolicyList", Singular: "policy", Plural: "policies", Version: "v1alpha1", Group: "authentication.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.authentication.v1alpha1.Policy"), + Target: metadata.Types.Get("istio/authentication/v1alpha1/policies"), Converter: converter.Get("auth-policy-resource"), }) - b.Add(kube.ResourceSpec{ - Kind: "prometheus", - ListKind: "prometheusList", - Singular: "prometheus", - Plural: "prometheuses", - Version: "v1alpha2", - Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), - }) - - b.Add(kube.ResourceSpec{ - Kind: "quota", - ListKind: "quotaList", - Singular: "quota", - Plural: "quotas", + b.Add(schema.ResourceSpec{ + Kind: "adapter", + ListKind: "adapterList", + Singular: "adapter", + Plural: "adapters", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/adapters"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "metric", - ListKind: "metricList", - Singular: "metric", - Plural: "metrics", + b.Add(schema.ResourceSpec{ + Kind: 
"HTTPAPISpecBinding", + ListKind: "HTTPAPISpecBindingList", + Singular: "httpapispecbinding", + Plural: "httpapispecbindings", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/httpapispecbindings"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "logentry", - ListKind: "logentryList", - Singular: "logentry", - Plural: "logentries", + b.Add(schema.ResourceSpec{ + Kind: "HTTPAPISpec", + ListKind: "HTTPAPISpecList", + Singular: "httpapispec", + Plural: "httpapispecs", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/httpapispecs"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "reportnothing", - ListKind: "reportnothingList", - Singular: "reportnothing", - Plural: "reportnothings", + b.Add(schema.ResourceSpec{ + Kind: "apikey", + ListKind: "apikeyList", + Singular: "apikey", + Plural: "apikeys", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/apikeys"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "listentry", - ListKind: "listentryList", - Singular: "listentry", - Plural: "listentries", + b.Add(schema.ResourceSpec{ + Kind: "authorization", + ListKind: "authorizationList", + Singular: "authorization", + Plural: "authorizations", Version: "v1alpha2", Group: "config.istio.io", - Target: 
metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/authorizations"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "kubernetes", - ListKind: "kubernetesList", - Singular: "kubernetes", - Plural: "kuberneteses", + b.Add(schema.ResourceSpec{ + Kind: "bypass", + ListKind: "bypassList", + Singular: "bypass", + Plural: "bypasses", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/bypasses"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "checknothing", ListKind: "checknothingList", Singular: "checknothing", Plural: "checknothings", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), - }) - - b.Add(kube.ResourceSpec{ - Kind: "authorization", - ListKind: "authorizationList", - Singular: "authorization", - Plural: "authorizations", - Version: "v1alpha2", - Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/checknothings"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "servicecontrolreport", - ListKind: "servicecontrolreportList", - Singular: "servicecontrolreport", - Plural: "servicecontrolreports", + b.Add(schema.ResourceSpec{ + Kind: "circonus", + ListKind: "circonusList", + Singular: "circonus", + Plural: "circonuses", Version: "v1alpha2", Group: 
"config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/circonuses"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "apikey", - ListKind: "apikeyList", - Singular: "apikey", - Plural: "apikeys", + b.Add(schema.ResourceSpec{ + Kind: "cloudwatch", + ListKind: "cloudwatchList", + Singular: "cloudwatch", + Plural: "cloudwatches", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/cloudwatches"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "stdio", - ListKind: "stdioList", - Singular: "stdio", - Plural: "stdios", + b.Add(schema.ResourceSpec{ + Kind: "denier", + ListKind: "denierList", + Singular: "denier", + Plural: "deniers", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/deniers"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "statsd", - ListKind: "statsdList", - Singular: "statsd", - Plural: "statsds", + b.Add(schema.ResourceSpec{ + Kind: "dogstatsd", + ListKind: "dogstatsdList", + Singular: "dogstatsd", + Plural: "dogstatsds", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/dogstatsds"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: 
"stackdriver", - ListKind: "stackdriverList", - Singular: "stackdriver", - Plural: "stackdrivers", + b.Add(schema.ResourceSpec{ + Kind: "edge", + ListKind: "edgeList", + Singular: "edge", + Plural: "edges", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/edges"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "solarwinds", - ListKind: "solarwindsList", - Singular: "solarwinds", - Plural: "solarwindses", + b.Add(schema.ResourceSpec{ + Kind: "fluentd", + ListKind: "fluentdList", + Singular: "fluentd", + Plural: "fluentds", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/fluentds"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "signalfx", - ListKind: "signalfxList", - Singular: "signalfx", - Plural: "signalfxs", + b.Add(schema.ResourceSpec{ + Kind: "kubernetesenv", + ListKind: "kubernetesenvList", + Singular: "kubernetesenv", + Plural: "kubernetesenvs", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/kubernetesenvs"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "servicecontrol", - ListKind: "servicecontrolList", - Singular: "servicecontrol", - Plural: "servicecontrols", + b.Add(schema.ResourceSpec{ + Kind: "kubernetes", + ListKind: "kubernetesList", + Singular: "kubernetes", + Plural: "kuberneteses", Version: "v1alpha2", Group: 
"config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/kuberneteses"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "denier", - ListKind: "denierList", - Singular: "denier", - Plural: "deniers", + b.Add(schema.ResourceSpec{ + Kind: "listchecker", + ListKind: "listcheckerList", + Singular: "listchecker", + Plural: "listcheckers", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/listcheckers"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "redisquota", - ListKind: "redisquotaList", - Singular: "redisquota", - Plural: "redisquotas", + b.Add(schema.ResourceSpec{ + Kind: "listentry", + ListKind: "listentryList", + Singular: "listentry", + Plural: "listentries", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/listentries"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "template", - ListKind: "templateList", - Singular: "template", - Plural: "templates", + b.Add(schema.ResourceSpec{ + Kind: "logentry", + ListKind: "logentryList", + Singular: "logentry", + Plural: "logentries", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/logentries"), + Converter: 
converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "adapter", - ListKind: "adapterList", - Singular: "adapter", - Plural: "adapters", + b.Add(schema.ResourceSpec{ + Kind: "memquota", + ListKind: "memquotaList", + Singular: "memquota", + Plural: "memquotas", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/memquotas"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "bypass", - ListKind: "bypassList", - Singular: "bypass", - Plural: "bypasses", + b.Add(schema.ResourceSpec{ + Kind: "metric", + ListKind: "metricList", + Singular: "metric", + Plural: "metrics", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/metrics"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "circonus", - ListKind: "circonusList", - Singular: "circonus", - Plural: "circonuses", + b.Add(schema.ResourceSpec{ + Kind: "noop", + ListKind: "noopList", + Singular: "noop", + Plural: "noops", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/noops"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "opa", ListKind: "opaList", Singular: "opa", Plural: "opas", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: 
converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/opas"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "fluentd", - ListKind: "fluentdList", - Singular: "fluentd", - Plural: "fluentds", + b.Add(schema.ResourceSpec{ + Kind: "prometheus", + ListKind: "prometheusList", + Singular: "prometheus", + Plural: "prometheuses", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/prometheuses"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "kubernetesenv", - ListKind: "kubernetesenvList", - Singular: "kubernetesenv", - Plural: "kubernetesenvs", + b.Add(schema.ResourceSpec{ + Kind: "quota", + ListKind: "quotaList", + Singular: "quota", + Plural: "quotas", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/quotas"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "listchecker", - ListKind: "listcheckerList", - Singular: "listchecker", - Plural: "listcheckers", + b.Add(schema.ResourceSpec{ + Kind: "rbac", + ListKind: "rbacList", + Singular: "rbac", + Plural: "rbacs", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/rbacs"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "memquota", - ListKind: "memquotaList", - Singular: "memquota", - Plural: "memquotas", + b.Add(schema.ResourceSpec{ + 
Kind: "redisquota", + ListKind: "redisquotaList", + Singular: "redisquota", + Plural: "redisquotas", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/redisquotas"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "noop", - ListKind: "noopList", - Singular: "noop", - Plural: "noops", + b.Add(schema.ResourceSpec{ + Kind: "reportnothing", + ListKind: "reportnothingList", + Singular: "reportnothing", + Plural: "reportnothings", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/reportnothings"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "tracespan", - ListKind: "tracespanList", - Singular: "tracespan", - Plural: "tracespans", + b.Add(schema.ResourceSpec{ + Kind: "servicecontrolreport", + ListKind: "servicecontrolreportList", + Singular: "servicecontrolreport", + Plural: "servicecontrolreports", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource"), - Converter: converter.Get("legacy-mixer-resource"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/servicecontrolreports"), + Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "HTTPAPISpec", - ListKind: "HTTPAPISpecList", - Singular: "httpapispec", - Plural: "httpapispecs", + b.Add(schema.ResourceSpec{ + Kind: "servicecontrol", + ListKind: "servicecontrolList", + Singular: "servicecontrol", + Plural: "servicecontrols", Version: "v1alpha2", Group: "config.istio.io", - Target: 
metadata.Types.Get("type.googleapis.com/istio.mixer.v1.config.client.HTTPAPISpec"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/servicecontrols"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "HTTPAPISpecBinding", - ListKind: "HTTPAPISpecBindingList", - Singular: "httpapispecbinding", - Plural: "httpapispecbindings", + b.Add(schema.ResourceSpec{ + Kind: "signalfx", + ListKind: "signalfxList", + Singular: "signalfx", + Plural: "signalfxs", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mixer.v1.config.client.HTTPAPISpecBinding"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/signalfxs"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "QuotaSpec", - ListKind: "QuotaSpecList", - Singular: "quotaspec", - Plural: "quotaspecs", + b.Add(schema.ResourceSpec{ + Kind: "solarwinds", + ListKind: "solarwindsList", + Singular: "solarwinds", + Plural: "solarwindses", + Version: "v1alpha2", + Group: "config.istio.io", + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/solarwindses"), + Converter: converter.Get("identity"), + }) + + b.Add(schema.ResourceSpec{ + Kind: "stackdriver", + ListKind: "stackdriverList", + Singular: "stackdriver", + Plural: "stackdrivers", + Version: "v1alpha2", + Group: "config.istio.io", + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/stackdrivers"), + Converter: converter.Get("identity"), + }) + + b.Add(schema.ResourceSpec{ + Kind: "statsd", + ListKind: "statsdList", + Singular: "statsd", + Plural: "statsds", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mixer.v1.config.client.QuotaSpec"), + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/statsds"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ + Kind: "stdio", + ListKind: "stdioList", + Singular: "stdio", + Plural: "stdios", + 
Version: "v1alpha2", + Group: "config.istio.io", + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/stdios"), + Converter: converter.Get("identity"), + }) + + b.Add(schema.ResourceSpec{ + Kind: "tracespan", + ListKind: "tracespanList", + Singular: "tracespan", + Plural: "tracespans", + Version: "v1alpha2", + Group: "config.istio.io", + Target: metadata.Types.Get("istio/config/v1alpha2/legacy/tracespans"), + Converter: converter.Get("identity"), + }) + + b.Add(schema.ResourceSpec{ + Kind: "template", + ListKind: "templateList", + Singular: "template", + Plural: "templates", + Version: "v1alpha2", + Group: "config.istio.io", + Target: metadata.Types.Get("istio/config/v1alpha2/templates"), + Converter: converter.Get("identity"), + }) + + b.Add(schema.ResourceSpec{ Kind: "QuotaSpecBinding", ListKind: "QuotaSpecBindingList", Singular: "quotaspecbinding", Plural: "quotaspecbindings", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.mixer.v1.config.client.QuotaSpecBinding"), + Target: metadata.Types.Get("istio/mixer/v1/config/client/quotaspecbindings"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ + Kind: "QuotaSpec", + ListKind: "QuotaSpecList", + Singular: "quotaspec", + Plural: "quotaspecs", + Version: "v1alpha2", + Group: "config.istio.io", + Target: metadata.Types.Get("istio/mixer/v1/config/client/quotaspecs"), + Converter: converter.Get("identity"), + }) + + b.Add(schema.ResourceSpec{ Kind: "DestinationRule", ListKind: "DestinationRuleList", Singular: "destinationrule", Plural: "destinationrules", Version: "v1alpha3", Group: "networking.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.networking.v1alpha3.DestinationRule"), + Target: metadata.Types.Get("istio/networking/v1alpha3/destinationrules"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "EnvoyFilter", ListKind: 
"EnvoyFilterList", Singular: "envoyfilter", Plural: "envoyfilters", Version: "v1alpha3", Group: "networking.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.networking.v1alpha3.EnvoyFilter"), + Target: metadata.Types.Get("istio/networking/v1alpha3/envoyfilters"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "Gateway", ListKind: "GatewayList", Singular: "gateway", Plural: "gateways", Version: "v1alpha3", Group: "networking.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.networking.v1alpha3.Gateway"), + Target: metadata.Types.Get("istio/networking/v1alpha3/gateways"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "ServiceEntry", ListKind: "ServiceEntryList", Singular: "serviceentry", Plural: "serviceentries", Version: "v1alpha3", Group: "networking.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.networking.v1alpha3.ServiceEntry"), + Target: metadata.Types.Get("istio/networking/v1alpha3/serviceentries"), + Converter: converter.Get("identity"), + }) + + b.Add(schema.ResourceSpec{ + Kind: "Sidecar", + ListKind: "SidecarList", + Singular: "sidecar", + Plural: "sidecars", + Version: "v1alpha3", + Group: "networking.istio.io", + Target: metadata.Types.Get("istio/networking/v1alpha3/sidecars"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "VirtualService", ListKind: "VirtualServiceList", Singular: "virtualservice", Plural: "virtualservices", Version: "v1alpha3", Group: "networking.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.networking.v1alpha3.VirtualService"), + Target: metadata.Types.Get("istio/networking/v1alpha3/virtualservices"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "attributemanifest", ListKind: "attributemanifestList", Singular: "attributemanifest", 
Plural: "attributemanifests", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.policy.v1beta1.AttributeManifest"), + Target: metadata.Types.Get("istio/policy/v1beta1/attributemanifests"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "handler", ListKind: "handlerList", Singular: "handler", Plural: "handlers", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.policy.v1beta1.Handler"), + Target: metadata.Types.Get("istio/policy/v1beta1/handlers"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "instance", ListKind: "instanceList", Singular: "instance", Plural: "instances", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.policy.v1beta1.Instance"), + Target: metadata.Types.Get("istio/policy/v1beta1/instances"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "rule", ListKind: "ruleList", Singular: "rule", Plural: "rules", Version: "v1alpha2", Group: "config.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.policy.v1beta1.Rule"), + Target: metadata.Types.Get("istio/policy/v1beta1/rules"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "ClusterRbacConfig", ListKind: "ClusterRbacConfigList", Singular: "clusterrbacconfig", Plural: "clusterrbacconfigs", Version: "v1alpha1", Group: "rbac.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.rbac.v1alpha1.RbacConfig"), + Target: metadata.Types.Get("istio/rbac/v1alpha1/clusterrbacconfigs"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "RbacConfig", ListKind: "RbacConfigList", Singular: "rbacconfig", Plural: "rbacconfigs", Version: "v1alpha1", Group: 
"rbac.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.rbac.v1alpha1.RbacConfig"), + Target: metadata.Types.Get("istio/rbac/v1alpha1/rbacconfigs"), + Converter: converter.Get("identity"), + }) + + b.Add(schema.ResourceSpec{ + Kind: "ServiceRoleBinding", + ListKind: "ServiceRoleBindingList", + Singular: "servicerolebinding", + Plural: "servicerolebindings", + Version: "v1alpha1", + Group: "rbac.istio.io", + Target: metadata.Types.Get("istio/rbac/v1alpha1/servicerolebindings"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "ServiceRole", ListKind: "ServiceRoleList", Singular: "servicerole", Plural: "serviceroles", Version: "v1alpha1", Group: "rbac.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.rbac.v1alpha1.ServiceRole"), + Target: metadata.Types.Get("istio/rbac/v1alpha1/serviceroles"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ - Kind: "ServiceRoleBinding", - ListKind: "ServiceRoleBindingList", - Singular: "servicerolebinding", - Plural: "servicerolebindings", - Version: "v1alpha1", - Group: "rbac.istio.io", - Target: metadata.Types.Get("type.googleapis.com/istio.rbac.v1alpha1.ServiceRoleBinding"), + b.Add(schema.ResourceSpec{ + Kind: "Node", + ListKind: "NodeList", + Singular: "node", + Plural: "nodes", + Version: "v1", + Group: "", + Target: metadata.Types.Get("k8s/core/v1/nodes"), + Converter: converter.Get("identity"), + }) + + b.Add(schema.ResourceSpec{ + Kind: "Pod", + ListKind: "PodList", + Singular: "pod", + Plural: "pods", + Version: "v1", + Group: "", + Target: metadata.Types.Get("k8s/core/v1/pods"), Converter: converter.Get("identity"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "Service", ListKind: "ServiceList", Singular: "service", Plural: "services", Version: "v1", Group: "", - Target: metadata.Types.Get("type.googleapis.com/k8s.io.api.core.v1.ServiceSpec"), + Target: metadata.Types.Get("k8s/core/v1/services"), 
Converter: converter.Get("kube-service-resource"), }) - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "Ingress", ListKind: "IngressList", Singular: "ingress", Plural: "ingresses", Version: "v1beta1", Group: "extensions", - Target: metadata.Types.Get("type.googleapis.com/k8s.io.api.extensions.v1beta1.IngressSpec"), + Target: metadata.Types.Get("k8s/extensions/v1beta1/ingresses"), Converter: converter.Get("kube-ingress-resource"), }) diff --git a/galley/pkg/metadata/kube/types_test.go b/galley/pkg/metadata/kube/types_test.go index a0fa2b6c7883..396b328ca390 100644 --- a/galley/pkg/metadata/kube/types_test.go +++ b/galley/pkg/metadata/kube/types_test.go @@ -20,7 +20,7 @@ import ( func TestEntries_Binding(t *testing.T) { for _, e := range Types.All() { - if e.Target.TypeURL.String() == "" { + if e.Target.Collection.String() == "" { t.Fatalf("Invalid binding to empty target: %v", e) } } diff --git a/galley/pkg/metadata/types.go b/galley/pkg/metadata/types.go index 9bf067a32339..bfcf012d3bbc 100644 --- a/galley/pkg/metadata/types.go +++ b/galley/pkg/metadata/types.go @@ -8,28 +8,28 @@ package metadata import ( // Pull in all the known proto types to ensure we get their types registered. 
- // Register protos in istio.io/api/authentication/v1alpha1"" + // Register protos in "github.com/gogo/protobuf/types" + _ "github.com/gogo/protobuf/types" + + // Register protos in "istio.io/api/authentication/v1alpha1" _ "istio.io/api/authentication/v1alpha1" - // Register protos in istio.io/api/mixer/v1/config/client"" + // Register protos in "istio.io/api/mixer/v1/config/client" _ "istio.io/api/mixer/v1/config/client" - // Register protos in istio.io/api/networking/v1alpha3"" + // Register protos in "istio.io/api/networking/v1alpha3" _ "istio.io/api/networking/v1alpha3" - // Register protos in istio.io/api/policy/v1beta1"" + // Register protos in "istio.io/api/policy/v1beta1" _ "istio.io/api/policy/v1beta1" - // Register protos in istio.io/api/rbac/v1alpha1"" + // Register protos in "istio.io/api/rbac/v1alpha1" _ "istio.io/api/rbac/v1alpha1" - // Register protos in istio.io/istio/galley/pkg/kube/converter/legacy"" - _ "istio.io/istio/galley/pkg/kube/converter/legacy" - - // Register protos in k8s.io/api/core/v1"" + // Register protos in "k8s.io/api/core/v1" _ "k8s.io/api/core/v1" - // Register protos in k8s.io/api/extensions/v1beta1"" + // Register protos in "k8s.io/api/extensions/v1beta1" _ "k8s.io/api/extensions/v1beta1" "istio.io/istio/galley/pkg/runtime/resource" @@ -40,8 +40,125 @@ var Types *resource.Schema var ( - // AttributeManifest metadata - AttributeManifest resource.Info + // MeshPolicy metadata + MeshPolicy resource.Info + + // Policy metadata + Policy resource.Info + + // Adapter metadata + Adapter resource.Info + + // HTTPAPISpecBinding metadata + HTTPAPISpecBinding resource.Info + + // HTTPAPISpec metadata + HTTPAPISpec resource.Info + + // Apikey metadata + Apikey resource.Info + + // Authorization metadata + Authorization resource.Info + + // Bypass metadata + Bypass resource.Info + + // Checknothing metadata + Checknothing resource.Info + + // Circonus metadata + Circonus resource.Info + + // Cloudwatch metadata + Cloudwatch resource.Info + 
+ // Denier metadata + Denier resource.Info + + // Dogstatsd metadata + Dogstatsd resource.Info + + // Edge metadata + Edge resource.Info + + // Fluentd metadata + Fluentd resource.Info + + // Kubernetesenv metadata + Kubernetesenv resource.Info + + // Kubernetes metadata + Kubernetes resource.Info + + // Listchecker metadata + Listchecker resource.Info + + // Listentry metadata + Listentry resource.Info + + // Logentry metadata + Logentry resource.Info + + // Memquota metadata + Memquota resource.Info + + // Metric metadata + Metric resource.Info + + // Noop metadata + Noop resource.Info + + // Opa metadata + Opa resource.Info + + // Prometheus metadata + Prometheus resource.Info + + // Quota metadata + Quota resource.Info + + // Rbac metadata + Rbac resource.Info + + // Redisquota metadata + Redisquota resource.Info + + // Reportnothing metadata + Reportnothing resource.Info + + // Servicecontrolreport metadata + Servicecontrolreport resource.Info + + // Servicecontrol metadata + Servicecontrol resource.Info + + // Signalfx metadata + Signalfx resource.Info + + // Solarwinds metadata + Solarwinds resource.Info + + // Stackdriver metadata + Stackdriver resource.Info + + // Statsd metadata + Statsd resource.Info + + // Stdio metadata + Stdio resource.Info + + // Tracespan metadata + Tracespan resource.Info + + // Template metadata + Template resource.Info + + // QuotaSpecBinding metadata + QuotaSpecBinding resource.Info + + // QuotaSpec metadata + QuotaSpec resource.Info // DestinationRule metadata DestinationRule resource.Info @@ -52,77 +169,229 @@ var ( // Gateway metadata Gateway resource.Info - // HTTPAPISpec metadata - HTTPAPISpec resource.Info + // ServiceEntry metadata + ServiceEntry resource.Info - // HTTPAPISpecBinding metadata - HTTPAPISpecBinding resource.Info + // Sidecar metadata + Sidecar resource.Info + + // VirtualService metadata + VirtualService resource.Info + + // Attributemanifest metadata + Attributemanifest resource.Info // Handler metadata 
Handler resource.Info - // IngressSpec metadata - IngressSpec resource.Info - // Instance metadata Instance resource.Info - // LegacyMixerResource metadata - LegacyMixerResource resource.Info - - // Policy metadata - Policy resource.Info - - // QuotaSpec metadata - QuotaSpec resource.Info + // Rule metadata + Rule resource.Info - // QuotaSpecBinding metadata - QuotaSpecBinding resource.Info + // ClusterRbacConfig metadata + ClusterRbacConfig resource.Info // RbacConfig metadata RbacConfig resource.Info - // Rule metadata - Rule resource.Info - - // ServiceEntry metadata - ServiceEntry resource.Info + // ServiceRoleBinding metadata + ServiceRoleBinding resource.Info // ServiceRole metadata ServiceRole resource.Info - // ServiceRoleBinding metadata - ServiceRoleBinding resource.Info + // Node metadata + Node resource.Info - // ServiceSpec metadata - ServiceSpec resource.Info + // Pod metadata + Pod resource.Info - // VirtualService metadata - VirtualService resource.Info + // Service metadata + Service resource.Info + + // Ingress metadata + Ingress resource.Info ) func init() { b := resource.NewSchemaBuilder() - AttributeManifest = b.Register("type.googleapis.com/istio.policy.v1beta1.AttributeManifest") - DestinationRule = b.Register("type.googleapis.com/istio.networking.v1alpha3.DestinationRule") - EnvoyFilter = b.Register("type.googleapis.com/istio.networking.v1alpha3.EnvoyFilter") - Gateway = b.Register("type.googleapis.com/istio.networking.v1alpha3.Gateway") - HTTPAPISpec = b.Register("type.googleapis.com/istio.mixer.v1.config.client.HTTPAPISpec") - HTTPAPISpecBinding = b.Register("type.googleapis.com/istio.mixer.v1.config.client.HTTPAPISpecBinding") - Handler = b.Register("type.googleapis.com/istio.policy.v1beta1.Handler") - IngressSpec = b.Register("type.googleapis.com/k8s.io.api.extensions.v1beta1.IngressSpec") - Instance = b.Register("type.googleapis.com/istio.policy.v1beta1.Instance") - LegacyMixerResource = 
b.Register("type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource") - Policy = b.Register("type.googleapis.com/istio.authentication.v1alpha1.Policy") - QuotaSpec = b.Register("type.googleapis.com/istio.mixer.v1.config.client.QuotaSpec") - QuotaSpecBinding = b.Register("type.googleapis.com/istio.mixer.v1.config.client.QuotaSpecBinding") - RbacConfig = b.Register("type.googleapis.com/istio.rbac.v1alpha1.RbacConfig") - Rule = b.Register("type.googleapis.com/istio.policy.v1beta1.Rule") - ServiceEntry = b.Register("type.googleapis.com/istio.networking.v1alpha3.ServiceEntry") - ServiceRole = b.Register("type.googleapis.com/istio.rbac.v1alpha1.ServiceRole") - ServiceRoleBinding = b.Register("type.googleapis.com/istio.rbac.v1alpha1.ServiceRoleBinding") - ServiceSpec = b.Register("type.googleapis.com/k8s.io.api.core.v1.ServiceSpec") - VirtualService = b.Register("type.googleapis.com/istio.networking.v1alpha3.VirtualService") + + MeshPolicy = b.Register( + "istio/authentication/v1alpha1/meshpolicies", + "type.googleapis.com/istio.authentication.v1alpha1.Policy") + Policy = b.Register( + "istio/authentication/v1alpha1/policies", + "type.googleapis.com/istio.authentication.v1alpha1.Policy") + Adapter = b.Register( + "istio/config/v1alpha2/adapters", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + HTTPAPISpecBinding = b.Register( + "istio/config/v1alpha2/httpapispecbindings", + "type.googleapis.com/istio.mixer.v1.config.client.HTTPAPISpecBinding") + HTTPAPISpec = b.Register( + "istio/config/v1alpha2/httpapispecs", + "type.googleapis.com/istio.mixer.v1.config.client.HTTPAPISpec") + Apikey = b.Register( + "istio/config/v1alpha2/legacy/apikeys", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Authorization = b.Register( + "istio/config/v1alpha2/legacy/authorizations", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Bypass = b.Register( + "istio/config/v1alpha2/legacy/bypasses", + 
"type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Checknothing = b.Register( + "istio/config/v1alpha2/legacy/checknothings", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Circonus = b.Register( + "istio/config/v1alpha2/legacy/circonuses", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Cloudwatch = b.Register( + "istio/config/v1alpha2/legacy/cloudwatches", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Denier = b.Register( + "istio/config/v1alpha2/legacy/deniers", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Dogstatsd = b.Register( + "istio/config/v1alpha2/legacy/dogstatsds", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Edge = b.Register( + "istio/config/v1alpha2/legacy/edges", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Fluentd = b.Register( + "istio/config/v1alpha2/legacy/fluentds", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Kubernetesenv = b.Register( + "istio/config/v1alpha2/legacy/kubernetesenvs", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Kubernetes = b.Register( + "istio/config/v1alpha2/legacy/kuberneteses", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Listchecker = b.Register( + "istio/config/v1alpha2/legacy/listcheckers", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Listentry = b.Register( + "istio/config/v1alpha2/legacy/listentries", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Logentry = b.Register( + "istio/config/v1alpha2/legacy/logentries", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Memquota = b.Register( + "istio/config/v1alpha2/legacy/memquotas", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Metric = b.Register( + "istio/config/v1alpha2/legacy/metrics", + 
"type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Noop = b.Register( + "istio/config/v1alpha2/legacy/noops", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Opa = b.Register( + "istio/config/v1alpha2/legacy/opas", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Prometheus = b.Register( + "istio/config/v1alpha2/legacy/prometheuses", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Quota = b.Register( + "istio/config/v1alpha2/legacy/quotas", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Rbac = b.Register( + "istio/config/v1alpha2/legacy/rbacs", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Redisquota = b.Register( + "istio/config/v1alpha2/legacy/redisquotas", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Reportnothing = b.Register( + "istio/config/v1alpha2/legacy/reportnothings", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Servicecontrolreport = b.Register( + "istio/config/v1alpha2/legacy/servicecontrolreports", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Servicecontrol = b.Register( + "istio/config/v1alpha2/legacy/servicecontrols", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Signalfx = b.Register( + "istio/config/v1alpha2/legacy/signalfxs", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Solarwinds = b.Register( + "istio/config/v1alpha2/legacy/solarwindses", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Stackdriver = b.Register( + "istio/config/v1alpha2/legacy/stackdrivers", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Statsd = b.Register( + "istio/config/v1alpha2/legacy/statsds", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Stdio = b.Register( + "istio/config/v1alpha2/legacy/stdios", + 
"type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Tracespan = b.Register( + "istio/config/v1alpha2/legacy/tracespans", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + Template = b.Register( + "istio/config/v1alpha2/templates", + "type.googleapis.com/type.googleapis.com/google.protobuf.Struct") + QuotaSpecBinding = b.Register( + "istio/mixer/v1/config/client/quotaspecbindings", + "type.googleapis.com/istio.mixer.v1.config.client.QuotaSpecBinding") + QuotaSpec = b.Register( + "istio/mixer/v1/config/client/quotaspecs", + "type.googleapis.com/istio.mixer.v1.config.client.QuotaSpec") + DestinationRule = b.Register( + "istio/networking/v1alpha3/destinationrules", + "type.googleapis.com/istio.networking.v1alpha3.DestinationRule") + EnvoyFilter = b.Register( + "istio/networking/v1alpha3/envoyfilters", + "type.googleapis.com/istio.networking.v1alpha3.EnvoyFilter") + Gateway = b.Register( + "istio/networking/v1alpha3/gateways", + "type.googleapis.com/istio.networking.v1alpha3.Gateway") + ServiceEntry = b.Register( + "istio/networking/v1alpha3/serviceentries", + "type.googleapis.com/istio.networking.v1alpha3.ServiceEntry") + Sidecar = b.Register( + "istio/networking/v1alpha3/sidecars", + "type.googleapis.com/istio.networking.v1alpha3.Sidecar") + VirtualService = b.Register( + "istio/networking/v1alpha3/virtualservices", + "type.googleapis.com/istio.networking.v1alpha3.VirtualService") + Attributemanifest = b.Register( + "istio/policy/v1beta1/attributemanifests", + "type.googleapis.com/istio.policy.v1beta1.AttributeManifest") + Handler = b.Register( + "istio/policy/v1beta1/handlers", + "type.googleapis.com/istio.policy.v1beta1.Handler") + Instance = b.Register( + "istio/policy/v1beta1/instances", + "type.googleapis.com/istio.policy.v1beta1.Instance") + Rule = b.Register( + "istio/policy/v1beta1/rules", + "type.googleapis.com/istio.policy.v1beta1.Rule") + ClusterRbacConfig = b.Register( + "istio/rbac/v1alpha1/clusterrbacconfigs", + 
"type.googleapis.com/istio.rbac.v1alpha1.RbacConfig") + RbacConfig = b.Register( + "istio/rbac/v1alpha1/rbacconfigs", + "type.googleapis.com/istio.rbac.v1alpha1.RbacConfig") + ServiceRoleBinding = b.Register( + "istio/rbac/v1alpha1/servicerolebindings", + "type.googleapis.com/istio.rbac.v1alpha1.ServiceRoleBinding") + ServiceRole = b.Register( + "istio/rbac/v1alpha1/serviceroles", + "type.googleapis.com/istio.rbac.v1alpha1.ServiceRole") + Node = b.Register( + "k8s/core/v1/nodes", + "type.googleapis.com/k8s.io.api.core.v1.NodeSpec") + Pod = b.Register( + "k8s/core/v1/pods", + "type.googleapis.com/k8s.io.api.core.v1.PodSpec") + Service = b.Register( + "k8s/core/v1/services", + "type.googleapis.com/k8s.io.api.core.v1.ServiceSpec") + Ingress = b.Register( + "k8s/extensions/v1beta1/ingresses", + "type.googleapis.com/k8s.io.api.extensions.v1beta1.IngressSpec") Types = b.Build() } diff --git a/galley/pkg/metadata/types_test.go b/galley/pkg/metadata/types_test.go index 5328c1418f59..51beaca03092 100644 --- a/galley/pkg/metadata/types_test.go +++ b/galley/pkg/metadata/types_test.go @@ -22,9 +22,9 @@ import ( func TestTypes_Info(t *testing.T) { for _, info := range Types.All() { - i, found := Types.Lookup(info.TypeURL.String()) + i, found := Types.Lookup(info.Collection.String()) if !found { - t.Fatalf("Unable to find by lookup: %q", info.TypeURL.String()) + t.Fatalf("Unable to find by lookup: %q", info.Collection.String()) } if i != info { t.Fatalf("Lookup mismatch. 
Expected:%v, Actual:%v", info, i) @@ -37,14 +37,14 @@ func TestTypes_NewProtoInstance(t *testing.T) { p := info.NewProtoInstance() name := pgogo.MessageName(p) if name != info.TypeURL.MessageName() { - t.Fatalf("Name/TypeURL mismatch: TypeURL:%v, Name:%v", info.TypeURL, name) + t.Fatalf("Name/TypeURL mismatch: Collection:%v, Name:%v", info.Collection, name) } } } -func TestTypes_LookupByTypeURL(t *testing.T) { +func TestTypes_LookupByCollection(t *testing.T) { for _, info := range Types.All() { - i, found := Types.Lookup(info.TypeURL.String()) + i, found := Types.Lookup(info.Collection.String()) if !found { t.Fatalf("Expected info not found: %v", info) @@ -56,8 +56,8 @@ func TestTypes_LookupByTypeURL(t *testing.T) { } } -func TestTypes_TypeURLs(t *testing.T) { - for _, url := range Types.TypeURLs() { +func TestTypes_Collections(t *testing.T) { + for _, url := range Types.Collections() { _, found := Types.Lookup(url) if !found { @@ -68,8 +68,8 @@ func TestTypes_TypeURLs(t *testing.T) { func TestTypes_Lookup(t *testing.T) { for _, info := range Types.All() { - if _, found := Types.Lookup(info.TypeURL.String()); !found { - t.Fatalf("expected info not found for: %s", info.TypeURL.String()) + if _, found := Types.Lookup(info.Collection.String()); !found { + t.Fatalf("expected info not found for: %s", info.Collection.String()) } } } diff --git a/galley/pkg/runtime/conversions/ingress.go b/galley/pkg/runtime/conversions/ingress.go index e5baae7ad827..cdf2a5a8fee5 100644 --- a/galley/pkg/runtime/conversions/ingress.go +++ b/galley/pkg/runtime/conversions/ingress.go @@ -20,8 +20,6 @@ import ( "strings" "github.com/gogo/protobuf/proto" - ingress "k8s.io/api/extensions/v1beta1" - "k8s.io/apimachinery/pkg/util/intstr" mcp "istio.io/api/mcp/v1alpha1" "istio.io/api/networking/v1alpha3" @@ -29,21 +27,24 @@ import ( "istio.io/istio/galley/pkg/runtime/resource" "istio.io/istio/pilot/pkg/model" "istio.io/istio/pkg/log" + + ingress "k8s.io/api/extensions/v1beta1" + 
"k8s.io/apimachinery/pkg/util/intstr" ) var scope = log.RegisterScope("conversions", "proto converters for runtime state", 0) -// ToIngressSpec unwraps an enveloped proto -func ToIngressSpec(e *mcp.Envelope) (*ingress.IngressSpec, error) { +// ToIngressSpec unwraps an MCP resource proto +func ToIngressSpec(e *mcp.Resource) (*ingress.IngressSpec, error) { - p := metadata.IngressSpec.NewProtoInstance() + p := metadata.Ingress.NewProtoInstance() i, ok := p.(*ingress.IngressSpec) if !ok { // Shouldn't happen return nil, fmt.Errorf("unable to convert proto to Ingress: %v", p) } - if err := proto.Unmarshal(e.Resource.Value, p); err != nil { + if err := proto.Unmarshal(e.Body.Value, p); err != nil { // Shouldn't happen return nil, fmt.Errorf("unable to unmarshal Ingress during projection: %v", err) } @@ -52,7 +53,8 @@ func ToIngressSpec(e *mcp.Envelope) (*ingress.IngressSpec, error) { } // IngressToVirtualService converts from ingress spec to Istio VirtualServices -func IngressToVirtualService(key resource.VersionedKey, i *ingress.IngressSpec, domainSuffix string, ingressByHost map[string]resource.Entry) { +func IngressToVirtualService(key resource.VersionedKey, meta resource.Metadata, i *ingress.IngressSpec, + domainSuffix string, ingressByHost map[string]resource.Entry) { // Ingress allows a single host - if missing '*' is assumed // We need to merge all rules with a particular host across // all ingresses, and return a separate VirtualService for each @@ -103,13 +105,13 @@ func IngressToVirtualService(key resource.VersionedKey, i *ingress.IngressSpec, ingressByHost[host] = resource.Entry{ ID: resource.VersionedKey{ Key: resource.Key{ - FullName: resource.FullNameFromNamespaceAndName(newNamespace, newName), - TypeURL: metadata.VirtualService.TypeURL, + FullName: resource.FullNameFromNamespaceAndName(newNamespace, newName), + Collection: metadata.VirtualService.Collection, }, - Version: key.Version, - CreateTime: key.CreateTime, + Version: key.Version, }, - Item: 
virtualService,
+		Metadata: meta,
+		Item:     virtualService,
 		}
 	}
 }
@@ -179,7 +181,7 @@ func ingressBackendToHTTPRoute(backend *ingress.IngressBackend, namespace string
 }
 
 // IngressToGateway converts from ingress spec to Istio Gateway
-func IngressToGateway(key resource.VersionedKey, i *ingress.IngressSpec) resource.Entry {
+func IngressToGateway(key resource.VersionedKey, meta resource.Metadata, i *ingress.IngressSpec) resource.Entry {
 	namespace, name := key.FullName.InterpretAsNamespaceAndName()
 
 	gateway := &v1alpha3.Gateway{
@@ -230,13 +232,13 @@ func IngressToGateway(key resource.VersionedKey, i *ingress.IngressSpec) resourc
 	gw := resource.Entry{
 		ID: resource.VersionedKey{
 			Key: resource.Key{
-				FullName: resource.FullNameFromNamespaceAndName(newNamespace, newName),
-				TypeURL:  metadata.VirtualService.TypeURL,
+				FullName:   resource.FullNameFromNamespaceAndName(newNamespace, newName),
+				Collection: metadata.VirtualService.Collection,
 			},
-			Version:    key.Version,
-			CreateTime: key.CreateTime,
+			Version: key.Version,
 		},
-		Item: gateway,
+		Metadata: meta,
+		Item:     gateway,
 	}
 
 	return gw
diff --git a/galley/pkg/runtime/conversions/ingress_test.go b/galley/pkg/runtime/conversions/ingress_test.go
index 5845d2693f61..09b422a37c52 100644
--- a/galley/pkg/runtime/conversions/ingress_test.go
+++ b/galley/pkg/runtime/conversions/ingress_test.go
@@ -80,8 +80,8 @@ func TestIngressConversion(t *testing.T) {
 	}
 	key := resource.VersionedKey{
 		Key: resource.Key{
-			TypeURL:  metadata.IngressSpec.TypeURL,
-			FullName: resource.FullNameFromNamespaceAndName("mock", "i1"),
+			Collection: metadata.Ingress.Collection,
+			FullName:   resource.FullNameFromNamespaceAndName("mock", "i1"),
 		},
 	}
 
@@ -107,14 +107,14 @@ func TestIngressConversion(t *testing.T) {
 	}
 	key2 := resource.VersionedKey{
 		Key: resource.Key{
-			TypeURL:  metadata.IngressSpec.TypeURL,
-			FullName: resource.FullNameFromNamespaceAndName("mock", "i1"),
+			Collection: metadata.Ingress.Collection,
+			FullName:   resource.FullNameFromNamespaceAndName("mock", "i1"),
 		},
 	}
 
 	cfgs := map[string]resource.Entry{}
-	IngressToVirtualService(key, &ingress, "mydomain", cfgs)
-	IngressToVirtualService(key2, &ingress2, "mydomain", cfgs)
+	IngressToVirtualService(key, resource.Metadata{}, &ingress, "mydomain", cfgs)
+	IngressToVirtualService(key2, resource.Metadata{}, &ingress2, "mydomain", cfgs)
 
 	if len(cfgs) != 3 {
 		t.Error("VirtualServices, expected 3 got ", len(cfgs))
diff --git a/galley/pkg/runtime/monitoring.go b/galley/pkg/runtime/monitoring.go
index d53565ce16ad..11fe729b0afe 100644
--- a/galley/pkg/runtime/monitoring.go
+++ b/galley/pkg/runtime/monitoring.go
@@ -23,10 +23,10 @@ import (
 	"go.opencensus.io/tag"
 )
 
-const typeURL = "typeURL"
+const collection = "collection"
 
-// TypeURLTag holds the type URL for the context.
-var TypeURLTag tag.Key
+// CollectionTag holds the collection for the context.
+var CollectionTag tag.Key
 
 var (
 	strategyOnChangeTotal = stats.Int64(
@@ -101,8 +101,8 @@ func recordProcessorSnapshotPublished(events int64, snapshotSpan time.Duration)
 		processorSnapshotLifetimesMs.M(snapshotSpan.Nanoseconds()/1e6))
 }
 
-func recordStateTypeCount(typeURL string, count int) {
-	ctx, err := tag.New(context.Background(), tag.Insert(TypeURLTag, typeURL))
+func recordStateTypeCount(collection string, count int) {
+	ctx, err := tag.New(context.Background(), tag.Insert(CollectionTag, collection))
 	if err != nil {
 		scope.Errorf("Error creating monitoring context for counting state: %v", err)
 	} else {
@@ -122,12 +122,12 @@ func newView(measure stats.Measure, keys []tag.Key, aggregation *view.Aggregatio
 func init() {
 	var err error
-	if TypeURLTag, err = tag.NewKey(typeURL); err != nil {
+	if CollectionTag, err = tag.NewKey(collection); err != nil {
 		panic(err)
 	}
 
 	var noKeys []tag.Key
-	typeURLKeys := []tag.Key{TypeURLTag}
+	collectionKeys := []tag.Key{CollectionTag}
 
 	err = view.Register(
 		newView(strategyOnTimerResetTotal, noKeys, view.Count()),
@@ -138,7 +138,7 @@ func init() {
 		newView(processorEventsProcessed, noKeys, view.Count()),
 		newView(processorSnapshotsPublished, noKeys, view.Count()),
 		newView(processorEventsPerSnapshot, noKeys, view.Distribution(0, 1, 2, 4, 8, 16, 32, 64, 128, 256)),
-		newView(stateTypeInstancesTotal, typeURLKeys, view.LastValue()),
+		newView(stateTypeInstancesTotal, collectionKeys, view.LastValue()),
 		newView(processorSnapshotLifetimesMs, noKeys, durationDistributionMs),
 	)
diff --git a/galley/pkg/runtime/processor_test.go b/galley/pkg/runtime/processor_test.go
index 71bf7da5b240..50f6cd7169f8 100644
--- a/galley/pkg/runtime/processor_test.go
+++ b/galley/pkg/runtime/processor_test.go
@@ -30,14 +30,14 @@ import (
 
 var testSchema = func() *resource.Schema {
 	b := resource.NewSchemaBuilder()
-	b.Register("type.googleapis.com/google.protobuf.Empty")
-	b.Register("type.googleapis.com/google.protobuf.Struct")
+	b.Register("empty", "type.googleapis.com/google.protobuf.Empty")
+	b.Register("struct", "type.googleapis.com/google.protobuf.Struct")
 	return b.Build()
}()
 
 var (
-	emptyInfo  = testSchema.Get("type.googleapis.com/google.protobuf.Empty")
-	structInfo = testSchema.Get("type.googleapis.com/google.protobuf.Struct")
+	emptyInfo  = testSchema.Get("empty")
+	structInfo = testSchema.Get("struct")
 )
 
 func TestProcessor_Start(t *testing.T) {
@@ -108,8 +108,8 @@ func TestProcessor_EventAccumulation(t *testing.T) {
 		t.Fatalf("unexpected error: %v", err)
 	}
 
-	k1 := resource.Key{TypeURL: emptyInfo.TypeURL, FullName: resource.FullNameFromNamespaceAndName("", "r1")}
-	src.Set(k1, &types.Empty{})
+	k1 := resource.Key{Collection: emptyInfo.Collection, FullName: resource.FullNameFromNamespaceAndName("", "r1")}
+	src.Set(k1, resource.Metadata{}, &types.Empty{})
 
 	// Wait "long enough"
 	time.Sleep(time.Millisecond * 10)
@@ -134,8 +134,8 @@ func TestProcessor_EventAccumulation_WithFullSync(t *testing.T) {
 		t.Fatalf("unexpected error: %v", err)
 	}
 
-	k1 := resource.Key{TypeURL: info.TypeURL, FullName: resource.FullNameFromNamespaceAndName("", "r1")}
-	src.Set(k1, &types.Empty{})
+	k1 := resource.Key{Collection: info.Collection, FullName: resource.FullNameFromNamespaceAndName("", "r1")}
+	src.Set(k1, resource.Metadata{}, &types.Empty{})
 
 	// Wait "long enough"
 	time.Sleep(time.Millisecond * 10)
@@ -165,8 +165,8 @@ func TestProcessor_Publishing(t *testing.T) {
 		t.Fatalf("unexpected error: %v", err)
 	}
 
-	k1 := resource.Key{TypeURL: info.TypeURL, FullName: resource.FullNameFromNamespaceAndName("", "r1")}
-	src.Set(k1, &types.Empty{})
+	k1 := resource.Key{Collection: info.Collection, FullName: resource.FullNameFromNamespaceAndName("", "r1")}
+	src.Set(k1, resource.Metadata{}, &types.Empty{})
 
 	processCallCount.Wait()
diff --git a/galley/pkg/runtime/resource/resource.go b/galley/pkg/runtime/resource/resource.go
index a0ddea6d933e..84d04806f1a3 100644
--- a/galley/pkg/runtime/resource/resource.go
+++ b/galley/pkg/runtime/resource/resource.go
@@ -25,46 +25,63 @@ import (
 	"github.com/gogo/protobuf/proto"
 )
 
+// Collection of the resource.
+type Collection struct{ string }
+
 // TypeURL of the resource.
 type TypeURL struct{ string }
 
 // Version is the version identifier of a resource.
 type Version string
 
-// FullName of the resource. It is unique within a given set of resource of the same TypeUrl.
+// FullName of the resource. It is unique within a given set of resource of the same collection.
 type FullName struct {
 	string
 }
 
 // Key uniquely identifies a (mutable) config resource in the config space.
 type Key struct {
-	// TypeURL of the resource.
-	TypeURL TypeURL
+	// Collection of the resource.
+	Collection Collection
 
 	// Fully qualified name of the resource.
 	FullName FullName
 }
 
-// VersionedKey uniquely identifies a snapshot of a config resource in the config space, at a given
-// time.
+// VersionedKey uniquely identifies a snapshot of a config resource in the config space.
 type VersionedKey struct {
 	Key
-	Version    Version
-	CreateTime time.Time
+	Version Version
+}
+
+// Labels are a map of string keys and values that can be used to organize and categorize
+// resources within a collection.
+type Labels map[string]string
+
+// Annotations are a map of string keys and values that can be used by source and sink to communicate
+// arbitrary metadata about this resource.
+type Annotations map[string]string
+
+type Metadata struct {
+	CreateTime  time.Time
+	Labels      Labels
+	Annotations Annotations
 }
 
 // Entry is the abstract representation of a versioned config resource in Istio.
 type Entry struct {
-	ID   VersionedKey
-	Item proto.Message
+	Metadata Metadata
+	ID       VersionedKey
+	Item     proto.Message
 }
 
 // Info is the type metadata for an Entry.
 type Info struct {
-	// TypeURL of the resource that this info is about
-	TypeURL TypeURL
+	// Collection of the resource that this info is about
+	Collection Collection
 
-	goType reflect.Type
+	goType  reflect.Type
+	TypeURL TypeURL
 }
 
 // newTypeURL validates the passed in url as a type url, and returns a strongly typed version.
@@ -97,6 +114,16 @@ func (t TypeURL) String() string {
 	return t.string
 }
 
+// newCollection returns a strongly typed collection.
+func newCollection(collection string) Collection {
+	return Collection{collection}
+}
+
+// String interface method implementation.
+func (t Collection) String() string {
+	return t.string
+}
+
 // FullNameFromNamespaceAndName returns a FullName from namespace and name.
 func FullNameFromNamespaceAndName(namespace, name string) FullName {
 	if namespace == "" {
@@ -123,12 +150,12 @@ func (n FullName) InterpretAsNamespaceAndName() (string, string) {
 
 // String interface method implementation.
 func (k Key) String() string {
-	return fmt.Sprintf("[Key](%s:%s)", k.TypeURL, k.FullName)
+	return fmt.Sprintf("[Key](%s:%s)", k.Collection, k.FullName)
 }
 
 // String interface method implementation.
 func (k VersionedKey) String() string {
-	return fmt.Sprintf("[VKey](%s:%s @%s)", k.TypeURL, k.FullName, k.Version)
+	return fmt.Sprintf("[VKey](%s:%s @%s)", k.Collection, k.FullName, k.Version)
 }
 
 // IsEmpty returns true if the resource Entry.Item is nil.
@@ -138,7 +165,7 @@ func (r *Entry) IsEmpty() bool {
 
 // String interface method implementation.
 func (i *Info) String() string {
-	return fmt.Sprintf("[Info](%s,%s)", i.TypeURL, i.TypeURL)
+	return fmt.Sprintf("[Info](%s,%s)", i.Collection, i.Collection)
 }
 
 // NewProtoInstance returns a new instance of the underlying proto for this resource.
@@ -149,7 +176,7 @@ func (i *Info) NewProtoInstance() proto.Message {
 	if p, ok := instance.(proto.Message); !ok {
 		panic(fmt.Sprintf(
 			"NewProtoInstance: message is not an instance of proto.Message. kind:%s, type:%v, value:%v",
-			i.TypeURL, i.goType, instance))
+			i.Collection, i.goType, instance))
 	} else {
 		return p
 	}
diff --git a/galley/pkg/runtime/resource/resource_test.go b/galley/pkg/runtime/resource/resource_test.go
index 3740d869726f..39c31b360ba2 100644
--- a/galley/pkg/runtime/resource/resource_test.go
+++ b/galley/pkg/runtime/resource/resource_test.go
@@ -15,24 +15,25 @@
 package resource
 
 import (
+	"fmt"
 	"reflect"
 	"testing"
 
 	"github.com/gogo/protobuf/types"
 )
 
-func TestTypeURL_Equality_True(t *testing.T) {
-	k1 := TypeURL{"a"}
-	k2 := TypeURL{"a"}
+func TestCollection_Equality_True(t *testing.T) {
+	k1 := Collection{"a"}
+	k2 := Collection{"a"}
 
 	if k1 != k2 {
 		t.Fatalf("Expected to be equal: %v == %v", k1, k2)
 	}
 }
 
-func TestTypeURL_Equality_False(t *testing.T) {
-	k1 := TypeURL{"a"}
-	k2 := TypeURL{"v"}
+func TestCollection_Equality_False(t *testing.T) {
+	k1 := Collection{"a"}
+	k2 := Collection{"v"}
 
 	if k1 == k2 {
 		t.Fatalf("Expected to be not equal: %v == %v", k1, k2)
@@ -57,17 +58,17 @@ func TestVersion_Equality_False(t *testing.T) {
 	}
 }
 
 func TestKey_Equality_True(t *testing.T) {
-	k1 := Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}
-	k2 := Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}
+	k1 := Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}
+	k2 := Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}
 
 	if k1 != k2 {
 		t.Fatalf("Expected to be equal: %v == %v", k1, k2)
 	}
 }
 
-func TestKey_Equality_False_DifferentTypeURL(t *testing.T) {
-	k1 := Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}
-	k2 := Key{TypeURL: TypeURL{"b"}, FullName: FullName{"ks"}}
+func TestKey_Equality_False_DifferentCollection(t *testing.T) {
+	k1 := Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}
+	k2 := Key{Collection: Collection{"b"}, FullName: FullName{"ks"}}
 
 	if k1 == k2 {
 		t.Fatalf("Expected to be not equal: %v == %v", k1, k2)
@@ -75,8 +76,8 @@ func TestKey_Equality_False_DifferentTypeURL(t *testing.T) {
 }
 
 func TestKey_Equality_False_DifferentName(t *testing.T) {
-	k1 := Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}
-	k2 := Key{TypeURL: TypeURL{"a"}, FullName: FullName{"otherks"}}
+	k1 := Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}
+	k2 := Key{Collection: Collection{"a"}, FullName: FullName{"otherks"}}
 
 	if k1 == k2 {
 		t.Fatalf("Expected to be not equal: %v == %v", k1, k2)
@@ -84,27 +85,27 @@ func TestKey_Equality_False_DifferentName(t *testing.T) {
 }
 
 func TestKey_String(t *testing.T) {
-	k1 := Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}
+	k1 := Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}
 
 	// Ensure that it doesn't crash
 	_ = k1.String()
 }
 
 func TestVersionedKey_Equality_True(t *testing.T) {
 	k1 := VersionedKey{
-		Key: Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
+		Key: Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
 	k2 := VersionedKey{
-		Key: Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
+		Key: Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
 
 	if k1 != k2 {
 		t.Fatalf("Expected to be equal: %v == %v", k1, k2)
 	}
 }
 
-func TestVersionedKey_Equality_False_DifferentTypeURL(t *testing.T) {
+func TestVersionedKey_Equality_False_DifferentCollection(t *testing.T) {
 	k1 := VersionedKey{
-		Key: Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
+		Key: Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
 	k2 := VersionedKey{
-		Key: Key{TypeURL: TypeURL{"b"}, FullName: FullName{"ks"}}, Version: Version("v1")}
+		Key: Key{Collection: Collection{"b"}, FullName: FullName{"ks"}}, Version: Version("v1")}
 
 	if k1 == k2 {
 		t.Fatalf("Expected to be not equal: %v == %v", k1, k2)
@@ -113,9 +114,9 @@ func TestVersionedKey_Equality_False_DifferentTypeURL(t *testing.T) {
 
 func TestVersionedKey_Equality_False_DifferentName(t *testing.T) {
 	k1 := VersionedKey{
-		Key: Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
+		Key: Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
 	k2 := VersionedKey{
-		Key: Key{TypeURL: TypeURL{"a"}, FullName: FullName{"otherks"}}, Version: Version("v1")}
+		Key: Key{Collection: Collection{"a"}, FullName: FullName{"otherks"}}, Version: Version("v1")}
 
 	if k1 == k2 {
 		t.Fatalf("Expected to be not equal: %v == %v", k1, k2)
@@ -124,9 +125,9 @@ func TestVersionedKey_Equality_False_DifferentName(t *testing.T) {
 
 func TestVersionedKey_Equality_False_DifferentVersion(t *testing.T) {
 	k1 := VersionedKey{
-		Key: Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
+		Key: Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
 	k2 := VersionedKey{
-		Key: Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}, Version: Version("v2")}
+		Key: Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}, Version: Version("v2")}
 
 	if k1 == k2 {
 		t.Fatalf("Expected to be not equal: %v == %v", k1, k2)
@@ -135,49 +136,11 @@ func TestVersionedKey_Equality_False_DifferentVersion(t *testing.T) {
 
 func TestVersionedKey_String(t *testing.T) {
 	k1 := VersionedKey{
-		Key: Key{TypeURL: TypeURL{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
+		Key: Key{Collection: Collection{"a"}, FullName: FullName{"ks"}}, Version: Version("v1")}
 
 	// Ensure that it doesn't crash
 	_ = k1.String()
 }
 
-func TestNewTypeURL(t *testing.T) {
-	goodurls := []string{
-		"type.googleapis.com/a.b.c",
-		"type.googleapis.com/a",
-		"type.googleapis.com/foo/a.b.c",
-		"zoo.com/a.b.c",
-		"zoo.com/bar/a.b.c",
-		"http://type.googleapis.com/foo/a.b.c",
-		"https://type.googleapis.com/foo/a.b.c",
-	}
-
-	for _, g := range goodurls {
-		t.Run(g, func(t *testing.T) {
-			_, err := newTypeURL(g)
-			if err != nil {
-				t.Fatalf("Unexpected error: %v", err)
-			}
-		})
-	}
-
-	badurls := []string{
-		"ftp://type.googleapis.com/a.b.c",
-		"type.googleapis.com/a.b.c/",
-		"type.googleapis.com/",
-		"type.googleapis.com",
-		":zoo:bar/doo",
-	}
-
-	for _, g := range badurls {
-		t.Run(g, func(t *testing.T) {
-			_, err := newTypeURL(g)
-			if err == nil {
-				t.Fatal("expected error not found")
-			}
-		})
-	}
-}
-
 func TestResource_IsEmpty(t *testing.T) {
 	r := Entry{}
 	if !r.IsEmpty() {
@@ -229,8 +192,80 @@ func TestInfo_newProtoInstance_PanicAtNonProto(t *testing.T) {
 
 func TestInfo_String(t *testing.T) {
 	i := Info{
-		TypeURL: TypeURL{"http://foo.bar.com/foo"},
+		Collection: Collection{"a"},
 	}
 
 	// Ensure that it doesn't crash
 	_ = i.String()
 }
+
+func TestFullNameFromNamespaceAndName(t *testing.T) {
+	cases := []struct {
+		namespace string
+		name      string
+		want      FullName
+	}{
+		{
+			namespace: "default",
+			name:      "foo",
+			want:      FullName{string: "default/foo"},
+		},
+		{
+			namespace: "",
+			name:      "foo",
+			want:      FullName{string: "foo"},
+		},
+	}
+
+	for i, c := range cases {
+		t.Run(fmt.Sprintf("[%v]%s", i, c.want), func(tt *testing.T) {
+			if got := FullNameFromNamespaceAndName(c.namespace, c.name); got != c.want {
+				tt.Errorf("wrong FullName: got: %v want %v", got, c.want)
+			}
+			gotNamespace, gotName := c.want.InterpretAsNamespaceAndName()
+			if gotNamespace != c.namespace {
+				tt.Errorf("wrong namespace: got %v want %v", gotNamespace, c.namespace)
+			}
+			if gotName != c.name {
+				tt.Errorf("wrong name: got %v want %v", gotName, c.name)
+			}
+		})
+	}
+}
+
+func TestNewTypeURL(t *testing.T) {
+	goodurls := []string{
+		"type.googleapis.com/a.b.c",
+		"type.googleapis.com/a",
+		"type.googleapis.com/foo/a.b.c",
+		"zoo.com/a.b.c",
+		"zoo.com/bar/a.b.c",
+		"http://type.googleapis.com/foo/a.b.c",
+		"https://type.googleapis.com/foo/a.b.c",
+	}
+
+	for _, g := range goodurls {
+		t.Run(g, func(t *testing.T) {
+			_, err := newTypeURL(g)
+			if err != nil {
+				t.Fatalf("Unexpected error: %v", err)
+			}
+		})
+	}
+
+	badurls := []string{
+		"ftp://type.googleapis.com/a.b.c",
+		"type.googleapis.com/a.b.c/",
+		"type.googleapis.com/",
+		"type.googleapis.com",
+		":zoo:bar/doo",
+	}
+
+	for _, g := range badurls {
+		t.Run(g, func(t *testing.T) {
+			_, err := newTypeURL(g)
+			if err == nil {
+				t.Fatal("expected error not found")
+			}
+		})
+	}
+}
diff --git a/galley/pkg/runtime/resource/schema.go b/galley/pkg/runtime/resource/schema.go
index 929c459bb788..1e8f72ea5d9a 100644
--- a/galley/pkg/runtime/resource/schema.go
+++ b/galley/pkg/runtime/resource/schema.go
@@ -24,7 +24,7 @@ type messageTypeFn func(name string) reflect.Type
 
 // Schema contains metadata about configuration resources.
 type Schema struct {
-	byURL         map[string]Info
+	byCollection  map[string]Info
 	messageTypeFn messageTypeFn
 }
 
@@ -42,7 +42,7 @@ func NewSchemaBuilder() *SchemaBuilder {
 
 // newSchemaBuilder returns a new instance of SchemaBuilder.
 func newSchemaBuilder(messageTypeFn messageTypeFn) *SchemaBuilder {
 	s := &Schema{
-		byURL:         make(map[string]Info),
+		byCollection:  make(map[string]Info),
 		messageTypeFn: messageTypeFn,
 	}
 
@@ -52,28 +52,28 @@ }
 
 // Register a proto into the schema.
-func (b *SchemaBuilder) Register(typeURL string) Info {
-	if _, found := b.schema.byURL[typeURL]; found {
-		panic(fmt.Sprintf("schema.Register: Proto type is registered multiple times: %q", typeURL))
+func (b *SchemaBuilder) Register(rawCollection, rawTypeURL string) Info {
+	if _, found := b.schema.byCollection[rawCollection]; found {
+		panic(fmt.Sprintf("schema.Register: collection is registered multiple times: %q", rawCollection))
 	}
 
-	// Before registering, ensure that the proto type is actually reachable.
-	url, err := newTypeURL(typeURL)
+	typeURL, err := newTypeURL(rawTypeURL)
 	if err != nil {
 		panic(err)
 	}
 
-	goType := b.schema.messageTypeFn(url.MessageName())
+	goType := b.schema.messageTypeFn(typeURL.MessageName())
 	if goType == nil {
-		panic(fmt.Sprintf("schema.Register: Proto type not found: %q", url.MessageName()))
+		panic(fmt.Sprintf("schema.Register: Proto type not found: %q", typeURL.MessageName()))
 	}
 
 	info := Info{
-		TypeURL: url,
-		goType:  goType,
+		Collection: newCollection(rawCollection),
+		goType:     goType,
+		TypeURL:    typeURL,
 	}
 
-	b.schema.byURL[info.TypeURL.String()] = info
+	b.schema.byCollection[info.Collection.String()] = info
 
 	return info
 }
 
@@ -88,38 +88,38 @@ func (b *SchemaBuilder) Build() *Schema {
 	return s
 }
 
-// Lookup looks up a resource.Info by its type url.
-func (s *Schema) Lookup(url string) (Info, bool) {
-	i, ok := s.byURL[url]
+// Lookup looks up a resource.Info by its collection.
+func (s *Schema) Lookup(collection string) (Info, bool) {
+	i, ok := s.byCollection[collection]
 	return i, ok
 }
 
-// Get looks up a resource.Info by its type Url. Panics if it is not found.
-func (s *Schema) Get(url string) Info {
-	i, ok := s.Lookup(url)
+// Get looks up a resource.Info by its collection. Panics if it is not found.
+func (s *Schema) Get(collection string) Info {
+	i, ok := s.Lookup(collection)
 	if !ok {
-		panic(fmt.Sprintf("schema.Get: matching entry not found for url: %q", url))
+		panic(fmt.Sprintf("schema.Get: matching entry not found for collection: %q", collection))
 	}
 	return i
 }
 
 // All returns all known info objects
 func (s *Schema) All() []Info {
-	result := make([]Info, 0, len(s.byURL))
+	result := make([]Info, 0, len(s.byCollection))
 
-	for _, info := range s.byURL {
+	for _, info := range s.byCollection {
 		result = append(result, info)
 	}
 
 	return result
 }
 
-// TypeURLs returns all known type URLs.
-func (s *Schema) TypeURLs() []string {
-	result := make([]string, 0, len(s.byURL))
+// Collections returns all known collections.
+func (s *Schema) Collections() []string {
+	result := make([]string, 0, len(s.byCollection))
 
-	for _, info := range s.byURL {
-		result = append(result, info.TypeURL.string)
+	for _, info := range s.byCollection {
+		result = append(result, info.Collection.string)
 	}
 
 	return result
diff --git a/galley/pkg/runtime/resource/schema_test.go b/galley/pkg/runtime/resource/schema_test.go
index c77f74a5b76c..8df41578312e 100644
--- a/galley/pkg/runtime/resource/schema_test.go
+++ b/galley/pkg/runtime/resource/schema_test.go
@@ -27,25 +27,25 @@ import (
 
 func TestSchema_All(t *testing.T) {
 	// Test schema.All in isolation, as the rest of the tests depend on it.
 	s := Schema{
-		byURL: make(map[string]Info),
+		byCollection: make(map[string]Info),
 	}
 
-	foo := Info{TypeURL: TypeURL{"zoo.tar.com/foo"}}
-	bar := Info{TypeURL: TypeURL{"zoo.tar.com/bar"}}
-	s.byURL[foo.TypeURL.String()] = foo
-	s.byURL[bar.TypeURL.String()] = bar
+	foo := Info{Collection: Collection{"zoo/tar/com/foo"}}
+	bar := Info{Collection: Collection{"zoo/tar/com/bar"}}
+	s.byCollection[foo.Collection.String()] = foo
+	s.byCollection[bar.Collection.String()] = bar
 
 	infos := s.All()
 	sort.Slice(infos, func(i, j int) bool {
-		return strings.Compare(infos[i].TypeURL.String(), infos[j].TypeURL.String()) < 0
+		return strings.Compare(infos[i].Collection.String(), infos[j].Collection.String()) < 0
 	})
 
 	expected := []Info{
 		{
-			TypeURL: TypeURL{"zoo.tar.com/bar"},
+			Collection: bar.Collection,
 		},
 		{
-			TypeURL: TypeURL{"zoo.tar.com/foo"},
+			Collection: foo.Collection,
 		},
 	}
 
@@ -56,10 +56,10 @@ func TestSchema_All(t *testing.T) {
 
 func TestSchemaBuilder_Register_Success(t *testing.T) {
 	b := NewSchemaBuilder()
-	b.Register("type.googleapis.com/google.protobuf.Empty")
+	b.Register("foo", "type.googleapis.com/google.protobuf.Empty")
 	s := b.Build()
 
-	if _, found := s.byURL["type.googleapis.com/google.protobuf.Empty"]; !found {
+	if _, found := s.byCollection["foo"]; !found {
 		t.Fatalf("Empty type should have been registered")
 	}
 }
 
@@ -72,8 +72,8 @@ func TestRegister_DoubleRegistrationPanic(t *testing.T) {
 	}()
 
 	b := NewSchemaBuilder()
-	b.Register("type.googleapis.com/google.protobuf.Empty")
-	b.Register("type.googleapis.com/google.protobuf.Empty")
+	b.Register("foo", "type.googleapis.com/google.protobuf.Empty")
+	b.Register("foo", "type.googleapis.com/google.protobuf.Empty")
 }
 
 func TestRegister_UnknownProto_Panic(t *testing.T) {
@@ -84,10 +84,10 @@ func TestRegister_UnknownProto_Panic(t *testing.T) {
 	}()
 
 	b := NewSchemaBuilder()
-	b.Register("type.googleapis.com/unknown")
+	b.Register("unknown", "type.googleapis.com/unknown")
 }
 
-func TestRegister_BadTypeURL(t *testing.T) {
+func TestRegister_BadCollection(t *testing.T) {
 	defer func() {
 		if r := recover(); r == nil {
 			t.Fatal("should have panicked")
@@ -95,25 +95,25 @@
 		}
 	}()
 
 	b := NewSchemaBuilder()
-	b.Register("ftp://type.googleapis.com/google.protobuf.Empty")
+	b.Register("badCollection", "ftp://type.googleapis.com/google.protobuf.Empty")
 }
 
 func TestSchema_Lookup(t *testing.T) {
 	b := NewSchemaBuilder()
-	b.Register("type.googleapis.com/google.protobuf.Empty")
+	b.Register("foo", "type.googleapis.com/google.protobuf.Empty")
 	s := b.Build()
 
-	_, ok := s.Lookup("type.googleapis.com/google.protobuf.Empty")
+	_, ok := s.Lookup("foo")
 	if !ok {
 		t.Fatal("Should have found the info")
 	}
 
-	_, ok = s.Lookup("type.googleapis.com/Foo")
+	_, ok = s.Lookup("bar")
 	if ok {
 		t.Fatal("Shouldn't have found the info")
 	}
 
-	if _, found := s.byURL["type.googleapis.com/google.protobuf.Empty"]; !found {
+	if _, found := s.byCollection["foo"]; !found {
 		t.Fatalf("Empty type should have been registered")
 	}
 }
 
@@ -126,11 +126,11 @@ func TestSchema_Get_Success(t *testing.T) {
 	}()
 
 	b := NewSchemaBuilder()
-	b.Register("type.googleapis.com/google.protobuf.Empty")
+	b.Register("foo", "type.googleapis.com/google.protobuf.Empty")
 	s := b.Build()
 
-	i := s.Get("type.googleapis.com/google.protobuf.Empty")
-	if i.TypeURL.String() != "type.googleapis.com/google.protobuf.Empty" {
+	i := s.Get("foo")
+	if i.Collection.String() != "foo" {
 		t.Fatalf("Unexpected info: %v", i)
 	}
 }
 
@@ -143,62 +143,27 @@ func TestSchema_Get_Panic(t *testing.T) {
 	}()
 
 	b := NewSchemaBuilder()
-	b.Register("type.googleapis.com/google.protobuf.Empty")
+	b.Register("panic", "type.googleapis.com/google.protobuf.Empty")
 	s := b.Build()
 
 	_ = s.Get("type.googleapis.com/foo")
 }
 
-func TestSchema_TypeURLs(t *testing.T) {
+func TestSchema_Collections(t *testing.T) {
 	b := NewSchemaBuilder()
-	b.Register("type.googleapis.com/google.protobuf.Empty")
-	b.Register("type.googleapis.com/google.protobuf.Struct")
+	b.Register("foo", "type.googleapis.com/google.protobuf.Empty")
+	b.Register("bar", "type.googleapis.com/google.protobuf.Struct")
 	s := b.Build()
 
-	actual := s.TypeURLs()
+	actual := s.Collections()
 	sort.Strings(actual)
 
 	expected := []string{
-		"type.googleapis.com/google.protobuf.Empty",
-		"type.googleapis.com/google.protobuf.Struct",
+		"bar",
+		"foo",
 	}
 
 	if !reflect.DeepEqual(actual, expected) {
 		t.Fatalf("Mismatch\nGot:\n%v\nWanted:\n%v\n", actual, expected)
 	}
 }
-
-//
-//func TestSchema_NewProtoInstance(t *testing.T) {
-//	for _, info := range Types.All() {
-//		p := info.NewProtoInstance()
-//		name := plang.MessageName(p)
-//		if name != info.TypeURL.MessageName() {
-//			t.Fatalf("Name/TypeURL mismatch: TypeURL:%v, Name:%v", info.TypeURL, name)
-//		}
-//	}
-//}
-//
-//func TestSchema_LookupByTypeURL(t *testing.T) {
-//	for _, info := range Types.All() {
-//		i, found := Types.Lookup(info.TypeURL.string)
-//
-//		if !found {
-//			t.Fatalf("Expected info not found: %v", info)
-//		}
-//
-//		if i != info {
-//			t.Fatalf("Lookup mismatch. Expected:%v, Actual:%v", info, i)
-//		}
-//	}
-//}
-//
-//func TestSchema_TypeURLs(t *testing.T) {
-//	for _, url := range Types.TypeURLs() {
-//		_, found := Types.Lookup(url)
-//
-//		if !found {
-//			t.Fatalf("Expected info not found: %v", url)
-//		}
-//	}
-//}
diff --git a/galley/pkg/runtime/source.go b/galley/pkg/runtime/source.go
index a77a3739a611..585d70decd63 100644
--- a/galley/pkg/runtime/source.go
+++ b/galley/pkg/runtime/source.go
@@ -68,7 +68,11 @@ func (s *InMemorySource) Start() (chan resource.Event, error) {
 
 	// publish current items
 	for _, item := range s.items {
-		s.ch <- resource.Event{Kind: resource.Added, Entry: resource.Entry{ID: item.ID, Item: item.Item}}
+		s.ch <- resource.Event{Kind: resource.Added, Entry: resource.Entry{
+			ID:       item.ID,
+			Metadata: item.Metadata,
+			Item:     item.Item,
+		}}
 	}
 
 	s.ch <- resource.Event{Kind: resource.FullSync}
@@ -89,7 +93,7 @@ func (s *InMemorySource) Stop() {
 }
 
 // Set the value in the in-memory store.
-func (s *InMemorySource) Set(k resource.Key, item proto.Message) {
+func (s *InMemorySource) Set(k resource.Key, metadata resource.Metadata, item proto.Message) {
 	s.stateLock.Lock()
 	defer s.stateLock.Unlock()
 
@@ -104,7 +108,11 @@ func (s *InMemorySource) Set(k resource.Key, item proto.Message) {
 	}
 
 	if s.ch != nil {
-		s.ch <- resource.Event{Kind: kind, Entry: resource.Entry{ID: resource.VersionedKey{Key: k, Version: v}, Item: item}}
+		s.ch <- resource.Event{Kind: kind, Entry: resource.Entry{
+			ID:       resource.VersionedKey{Key: k, Version: v},
+			Metadata: metadata,
+			Item:     item,
+		}}
 	}
 }
diff --git a/galley/pkg/runtime/source_test.go b/galley/pkg/runtime/source_test.go
index efb8234c787e..38dfba7cb1ed 100644
--- a/galley/pkg/runtime/source_test.go
+++ b/galley/pkg/runtime/source_test.go
@@ -43,7 +43,7 @@ func TestInMemory_Start_Empty(t *testing.T) {
 func TestInMemory_Start_WithItem(t *testing.T) {
 	i := NewInMemorySource()
 	fn := resource.FullNameFromNamespaceAndName("n1", "f1")
-	i.Set(resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}, &types.Empty{})
+	i.Set(resource.Key{Collection: emptyInfo.Collection, FullName: fn}, resource.Metadata{}, &types.Empty{})
 
 	ch, err := i.Start()
 	if err != nil {
@@ -52,7 +52,7 @@ func TestInMemory_Start_WithItem(t *testing.T) {
 
 	actual := captureChannelOutput(t, ch, 2)
 	expected := strings.TrimSpace(`
-[Event](Added: [VKey](type.googleapis.com/google.protobuf.Empty:n1/f1 @v1))
+[Event](Added: [VKey](empty:n1/f1 @v1))
 [Event](FullSync)
 `)
 	if actual != expected {
@@ -86,14 +86,14 @@ func TestInMemory_Set(t *testing.T) {
 
 	// One Register one update
 	fn := resource.FullNameFromNamespaceAndName("n1", "f1")
-	i.Set(resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}, &types.Empty{})
-	i.Set(resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}, &types.Empty{})
+	i.Set(resource.Key{Collection: emptyInfo.Collection, FullName: fn}, resource.Metadata{}, &types.Empty{})
+	i.Set(resource.Key{Collection: emptyInfo.Collection, FullName: fn}, resource.Metadata{}, &types.Empty{})
 
 	actual := captureChannelOutput(t, ch, 3)
 	expected := strings.TrimSpace(`
 [Event](FullSync)
-[Event](Added: [VKey](type.googleapis.com/google.protobuf.Empty:n1/f1 @v1))
-[Event](Updated: [VKey](type.googleapis.com/google.protobuf.Empty:n1/f1 @v2))
+[Event](Added: [VKey](empty:n1/f1 @v1))
+[Event](Updated: [VKey](empty:n1/f1 @v2))
 `)
 	if actual != expected {
 		t.Fatalf("Channel mismatch:\nActual:\n%v\nExpected:\n%v\n", actual, expected)
@@ -108,16 +108,16 @@ func TestInMemory_Delete(t *testing.T) {
 	}
 
 	fn := resource.FullNameFromNamespaceAndName("n1", "f1")
-	i.Set(resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}, &types.Empty{})
+	i.Set(resource.Key{Collection: emptyInfo.Collection, FullName: fn}, resource.Metadata{}, &types.Empty{})
 
 	// Two deletes
-	i.Delete(resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn})
-	i.Delete(resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn})
+	i.Delete(resource.Key{Collection: emptyInfo.Collection, FullName: fn})
+	i.Delete(resource.Key{Collection: emptyInfo.Collection, FullName: fn})
 
 	actual := captureChannelOutput(t, ch, 3)
 	expected := strings.TrimSpace(`
 [Event](FullSync)
-[Event](Added: [VKey](type.googleapis.com/google.protobuf.Empty:n1/f1 @v1))
-[Event](Deleted: [VKey](type.googleapis.com/google.protobuf.Empty:n1/f1 @v2))
+[Event](Added: [VKey](empty:n1/f1 @v1))
+[Event](Deleted: [VKey](empty:n1/f1 @v2))
 `)
 	if actual != expected {
 		t.Fatalf("Channel mismatch:\nActual:\n%v\nExpected:\n%v\n", actual, expected)
@@ -128,14 +128,14 @@ func TestInMemory_Get(t *testing.T) {
 	fn := resource.FullNameFromNamespaceAndName("n1", "f1")
 
 	i := NewInMemorySource()
-	i.Set(resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}, &types.Empty{})
+	i.Set(resource.Key{Collection: emptyInfo.Collection, FullName: fn}, resource.Metadata{}, &types.Empty{})
 
-	r, _ := i.Get(resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn})
+	r, _ := i.Get(resource.Key{Collection: emptyInfo.Collection, FullName: fn})
 	if r.IsEmpty() {
 		t.Fatal("Get should have been non empty")
 	}
 
-	r, _ = i.Get(resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn2})
+	r, _ = i.Get(resource.Key{Collection: emptyInfo.Collection, FullName: fn2})
 	if !r.IsEmpty() {
 		t.Fatalf("Get should have been empty: %v", r)
 	}
diff --git a/galley/pkg/runtime/state.go b/galley/pkg/runtime/state.go
index 18c8f0545192..43926bd19093 100644
--- a/galley/pkg/runtime/state.go
+++ b/galley/pkg/runtime/state.go
@@ -17,9 +17,10 @@ package runtime
 import (
 	"bytes"
 	"fmt"
+	"sort"
+	"strings"
 	"sync"
 
-	"github.com/gogo/protobuf/proto"
 	"github.com/gogo/protobuf/types"
 
 	mcp "istio.io/api/mcp/v1alpha1"
@@ -40,7 +41,12 @@ type State struct {
 
 	// entries for per-message-type State.
 	entriesLock sync.Mutex
-	entries     map[resource.TypeURL]*resourceTypeState
+	entries     map[resource.Collection]*resourceTypeState
+
+	// Virtual version numbers for Gateways & VirtualServices for Ingress projected ones
+	ingressGWVersion   int64
+	ingressVSVersion   int64
+	lastIngressVersion int64
 }
 
 // per-resource-type State.
@@ -48,7 +54,7 @@ type resourceTypeState struct {
 	// The version number for the current State of the object. Every time entries or versions change,
 	// the version number also change
 	version  int64
-	entries  map[resource.FullName]*mcp.Envelope
+	entries  map[resource.FullName]*mcp.Resource
 	versions map[resource.FullName]resource.Version
 }
 
 func newState(schema *resource.Schema, cfg *Config) *State {
 	s := &State{
 		schema:  schema,
 		config:  cfg,
-		entries: make(map[resource.TypeURL]*resourceTypeState),
+		entries: make(map[resource.Collection]*resourceTypeState),
 	}
 
 	// pre-populate state for all known types so that built snapshots
 	// includes valid default version for empty resource collections.
for _, info := range schema.All() { - s.entries[info.TypeURL] = &resourceTypeState{ - entries: make(map[resource.FullName]*mcp.Envelope), + s.entries[info.Collection] = &resourceTypeState{ + entries: make(map[resource.FullName]*mcp.Resource), versions: make(map[resource.FullName]resource.Version), } } @@ -72,7 +78,7 @@ func newState(schema *resource.Schema, cfg *Config) *State { } func (s *State) apply(event resource.Event) bool { - pks, found := s.getResourceTypeState(event.Entry.ID.TypeURL) + pks, found := s.getResourceTypeState(event.Entry.ID.Collection) if !found { return false } @@ -88,19 +94,19 @@ func (s *State) apply(event resource.Event) bool { // TODO: Check for content-wise equality - entry, ok := s.envelopeResource(event.Entry) + entry, ok := s.toResource(event.Entry) if !ok { return false } pks.entries[event.Entry.ID.FullName] = entry pks.versions[event.Entry.ID.FullName] = event.Entry.ID.Version - recordStateTypeCount(event.Entry.ID.TypeURL.String(), len(pks.entries)) + recordStateTypeCount(event.Entry.ID.Collection.String(), len(pks.entries)) case resource.Deleted: delete(pks.entries, event.Entry.ID.FullName) delete(pks.versions, event.Entry.ID.FullName) - recordStateTypeCount(event.Entry.ID.TypeURL.String(), len(pks.entries)) + recordStateTypeCount(event.Entry.ID.Collection.String(), len(pks.entries)) default: scope.Errorf("Unknown event kind: %v", event.Kind) @@ -115,7 +121,7 @@ func (s *State) apply(event resource.Event) bool { return true } -func (s *State) getResourceTypeState(name resource.TypeURL) (*resourceTypeState, bool) { +func (s *State) getResourceTypeState(name resource.Collection) (*resourceTypeState, bool) { s.entriesLock.Lock() defer s.entriesLock.Unlock() @@ -129,13 +135,13 @@ func (s *State) buildSnapshot() snapshot.Snapshot { b := snapshot.NewInMemoryBuilder() - for typeURL, state := range s.entries { - entries := make([]*mcp.Envelope, 0, len(state.entries)) + for collection, state := range s.entries { + entries := 
make([]*mcp.Resource, 0, len(state.entries)) for _, entry := range state.entries { entries = append(entries, entry) } version := fmt.Sprintf("%d", state.version) - b.Set(typeURL.String(), version, entries) + b.Set(collection.String(), version, entries) } // Build entities that are derived from existing ones. @@ -152,28 +158,61 @@ func (s *State) buildIngressProjectionResources(b *snapshot.InMemoryBuilder) { ingressByHost := make(map[string]resource.Entry) // Build ingress projections - state := s.entries[metadata.IngressSpec.TypeURL] - if state == nil { + state := s.entries[metadata.Ingress.Collection] + if state == nil || len(state.entries) == 0 { return } - for name, entry := range state.entries { + if s.lastIngressVersion != state.version { + // Ingresses has changed + s.versionCounter++ + s.ingressGWVersion = s.versionCounter + s.versionCounter++ + s.ingressVSVersion = s.versionCounter + s.lastIngressVersion = state.version + } + + versionStr := fmt.Sprintf("%d_%d", + s.entries[metadata.Gateway.Collection].version, s.ingressGWVersion) + b.SetVersion(metadata.Gateway.Collection.String(), versionStr) + + versionStr = fmt.Sprintf("%d_%d", + s.entries[metadata.VirtualService.Collection].version, s.ingressVSVersion) + b.SetVersion(metadata.VirtualService.Collection.String(), versionStr) + + // Order names for stable generation. 
+ var orderedNames []resource.FullName + for name := range state.entries { + orderedNames = append(orderedNames, name) + } + sort.Slice(orderedNames, func(i, j int) bool { + return strings.Compare(orderedNames[i].String(), orderedNames[j].String()) < 0 + }) + + for _, name := range orderedNames { + entry := state.entries[name] + ingress, err := conversions.ToIngressSpec(entry) - key := extractKey(name, entry, state.versions[name]) if err != nil { // Shouldn't happen scope.Errorf("error during ingress projection: %v", err) continue } - conversions.IngressToVirtualService(key, ingress, s.config.DomainSuffix, ingressByHost) - gw := conversions.IngressToGateway(key, ingress) + key := extractKey(name, state.versions[name]) + meta := extractMetadata(entry) + + conversions.IngressToVirtualService(key, meta, ingress, s.config.DomainSuffix, ingressByHost) + + gw := conversions.IngressToGateway(key, meta, ingress) err = b.SetEntry( - metadata.Gateway.TypeURL.String(), + metadata.Gateway.Collection.String(), gw.ID.FullName.String(), string(gw.ID.Version), - gw.ID.CreateTime, + gw.Metadata.CreateTime, + nil, + nil, gw.Item) if err != nil { scope.Errorf("Unable to set gateway entry: %v", err) @@ -182,10 +221,12 @@ func (s *State) buildIngressProjectionResources(b *snapshot.InMemoryBuilder) { for _, e := range ingressByHost { err := b.SetEntry( - metadata.VirtualService.TypeURL.String(), + metadata.VirtualService.Collection.String(), e.ID.FullName.String(), string(e.ID.Version), - e.ID.CreateTime, + e.Metadata.CreateTime, + nil, + nil, e.Item) if err != nil { scope.Errorf("Unable to set virtualservice entry: %v", err) @@ -193,46 +234,52 @@ func (s *State) buildIngressProjectionResources(b *snapshot.InMemoryBuilder) { } } -func extractKey(name resource.FullName, entry *mcp.Envelope, version resource.Version) resource.VersionedKey { +func extractKey(name resource.FullName, version resource.Version) resource.VersionedKey { + return resource.VersionedKey{ + Key: resource.Key{ + 
Collection: metadata.Ingress.Collection, + FullName: name, + }, + Version: version, + } +} + +func extractMetadata(entry *mcp.Resource) resource.Metadata { ts, err := types.TimestampFromProto(entry.Metadata.CreateTime) if err != nil { // It is an invalid timestamp. This shouldn't happen. scope.Errorf("Error converting proto timestamp to time.Time: %v", err) } - return resource.VersionedKey{ - Key: resource.Key{ - TypeURL: metadata.IngressSpec.TypeURL, - FullName: name, - }, - Version: version, - CreateTime: ts, + return resource.Metadata{ + CreateTime: ts, + Labels: entry.Metadata.GetLabels(), + Annotations: entry.Metadata.GetAnnotations(), } } -func (s *State) envelopeResource(e resource.Entry) (*mcp.Envelope, bool) { - serialized, err := proto.Marshal(e.Item) +func (s *State) toResource(e resource.Entry) (*mcp.Resource, bool) { + body, err := types.MarshalAny(e.Item) if err != nil { scope.Errorf("Error serializing proto from source e: %v:", e) return nil, false } - createTime, err := types.TimestampProto(e.ID.CreateTime) + createTime, err := types.TimestampProto(e.Metadata.CreateTime) if err != nil { scope.Errorf("Error parsing resource create_time for event (%v): %v", e, err) return nil, false } - entry := &mcp.Envelope{ + entry := &mcp.Resource{ Metadata: &mcp.Metadata{ - Name: e.ID.FullName.String(), - CreateTime: createTime, - Version: string(e.ID.Version), - }, - Resource: &types.Any{ - TypeUrl: e.ID.TypeURL.String(), - Value: serialized, + Name: e.ID.FullName.String(), + CreateTime: createTime, + Version: string(e.ID.Version), + Labels: e.Metadata.Labels, + Annotations: e.Metadata.Annotations, }, + Body: body, } return entry, true @@ -242,10 +289,10 @@ func (s *State) envelopeResource(e resource.Entry) (*mcp.Envelope, bool) { func (s *State) String() string { var b bytes.Buffer - fmt.Fprintf(&b, "[State @%v]\n", s.versionCounter) + _, _ = fmt.Fprintf(&b, "[State @%v]\n", s.versionCounter) sn := s.buildSnapshot().(*snapshot.InMemory) - fmt.Fprintf(&b, "%v", 
sn) + _, _ = fmt.Fprintf(&b, "%v", sn) return b.String() } diff --git a/galley/pkg/runtime/state_test.go b/galley/pkg/runtime/state_test.go index ec274ac570f6..bdd0a0653d49 100644 --- a/galley/pkg/runtime/state_test.go +++ b/galley/pkg/runtime/state_test.go @@ -48,7 +48,7 @@ func init() { } } -func checkCreateTime(e *mcp.Envelope, want time.Time) error { +func checkCreateTime(e *mcp.Resource, want time.Time) error { got, err := types.TimestampFromProto(e.Metadata.CreateTime) if err != nil { return fmt.Errorf("failed to decode: %v", err) @@ -63,12 +63,12 @@ func TestState_DefaultSnapshot(t *testing.T) { s := newState(testSchema, cfg) sn := s.buildSnapshot() - for _, typeURL := range []string{emptyInfo.TypeURL.String(), structInfo.TypeURL.String()} { - if r := sn.Resources(typeURL); len(r) != 0 { - t.Fatalf("%s entry should have been registered in snapshot", typeURL) + for _, collection := range []string{emptyInfo.Collection.String(), structInfo.Collection.String()} { + if r := sn.Resources(collection); len(r) != 0 { + t.Fatalf("%s entry should have been registered in snapshot", collection) } - if v := sn.Version(typeURL); v == "" { - t.Fatalf("%s version should have been available", typeURL) + if v := sn.Version(collection); v == "" { + t.Fatalf("%s version should have been available", collection) } } @@ -76,8 +76,10 @@ func TestState_DefaultSnapshot(t *testing.T) { Kind: resource.Added, Entry: resource.Entry{ ID: resource.VersionedKey{ - Version: "v1", - Key: resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}, + Version: "v1", + Key: resource.Key{Collection: emptyInfo.Collection, FullName: fn}, + }, + Metadata: resource.Metadata{ CreateTime: fakeCreateTime0, }, Item: &types.Any{}, @@ -90,14 +92,14 @@ func TestState_DefaultSnapshot(t *testing.T) { } sn = s.buildSnapshot() - r := sn.Resources(emptyInfo.TypeURL.String()) + r := sn.Resources(emptyInfo.Collection.String()) if len(r) != 1 { t.Fatal("Entry should have been registered in snapshot") } if err := 
checkCreateTime(r[0], fakeCreateTime0); err != nil { t.Fatalf("Bad create time: %v", err) } - v := sn.Version(emptyInfo.TypeURL.String()) + v := sn.Version(emptyInfo.Collection.String()) if v == "" { t.Fatal("Version should have been available") } @@ -110,8 +112,10 @@ func TestState_Apply_Update(t *testing.T) { Kind: resource.Added, Entry: resource.Entry{ ID: resource.VersionedKey{ - Version: "v1", - Key: resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}, + Version: "v1", + Key: resource.Key{Collection: emptyInfo.Collection, FullName: fn}, + }, + Metadata: resource.Metadata{ CreateTime: fakeCreateTime0, }, Item: &types.Any{}, @@ -127,8 +131,10 @@ func TestState_Apply_Update(t *testing.T) { Kind: resource.Updated, Entry: resource.Entry{ ID: resource.VersionedKey{ - Version: "v2", - Key: resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}, + Version: "v2", + Key: resource.Key{Collection: emptyInfo.Collection, FullName: fn}, + }, + Metadata: resource.Metadata{ CreateTime: fakeCreateTime1, }, Item: &types.Any{}, @@ -140,14 +146,14 @@ func TestState_Apply_Update(t *testing.T) { } sn := s.buildSnapshot() - r := sn.Resources(emptyInfo.TypeURL.String()) + r := sn.Resources(emptyInfo.Collection.String()) if len(r) != 1 { t.Fatal("Entry should have been registered in snapshot") } if err := checkCreateTime(r[0], fakeCreateTime1); err != nil { t.Fatalf("Bad create time: %v", err) } - v := sn.Version(emptyInfo.TypeURL.String()) + v := sn.Version(emptyInfo.Collection.String()) if v == "" { t.Fatal("Version should have been available") } @@ -160,8 +166,10 @@ func TestState_Apply_Update_SameVersion(t *testing.T) { Kind: resource.Added, Entry: resource.Entry{ ID: resource.VersionedKey{ - Version: "v1", - Key: resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}, + Version: "v1", + Key: resource.Key{Collection: emptyInfo.Collection, FullName: fn}, + }, + Metadata: resource.Metadata{ CreateTime: fakeCreateTime0, }, Item: &types.Any{}, @@ -177,8 +185,10 @@ func 
TestState_Apply_Update_SameVersion(t *testing.T) { Kind: resource.Updated, Entry: resource.Entry{ ID: resource.VersionedKey{ - Version: "v1", - Key: resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}, + Version: "v1", + Key: resource.Key{Collection: emptyInfo.Collection, FullName: fn}, + }, + Metadata: resource.Metadata{ CreateTime: fakeCreateTime1, }, Item: &types.Any{}, @@ -198,7 +208,7 @@ func TestState_Apply_Delete(t *testing.T) { e := resource.Event{ Kind: resource.Added, Entry: resource.Entry{ - ID: resource.VersionedKey{Version: "v1", Key: resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}}, + ID: resource.VersionedKey{Version: "v1", Key: resource.Key{Collection: emptyInfo.Collection, FullName: fn}}, Item: &types.Any{}, }, } @@ -211,7 +221,7 @@ func TestState_Apply_Delete(t *testing.T) { e = resource.Event{ Kind: resource.Deleted, Entry: resource.Entry{ - ID: resource.VersionedKey{Version: "v2", Key: resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}}, + ID: resource.VersionedKey{Version: "v2", Key: resource.Key{Collection: emptyInfo.Collection, FullName: fn}}, }, } s.apply(e) @@ -234,7 +244,7 @@ func TestState_Apply_UnknownEventKind(t *testing.T) { e := resource.Event{ Kind: resource.EventKind(42), Entry: resource.Entry{ - ID: resource.VersionedKey{Version: "v1", Key: resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}}, + ID: resource.VersionedKey{Version: "v1", Key: resource.Key{Collection: emptyInfo.Collection, FullName: fn}}, Item: &types.Any{}, }, } @@ -256,7 +266,7 @@ func TestState_Apply_BrokenProto(t *testing.T) { e := resource.Event{ Kind: resource.Added, Entry: resource.Entry{ - ID: resource.VersionedKey{Version: "v1", Key: resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}}, + ID: resource.VersionedKey{Version: "v1", Key: resource.Key{Collection: emptyInfo.Collection, FullName: fn}}, Item: nil, }, } @@ -266,7 +276,7 @@ func TestState_Apply_BrokenProto(t *testing.T) { } sn := s.buildSnapshot() - r := 
sn.Resources(emptyInfo.TypeURL.String()) + r := sn.Resources(emptyInfo.Collection.String()) if len(r) != 0 { t.Fatal("Entry should have not been in snapshot") } @@ -278,7 +288,7 @@ func TestState_String(t *testing.T) { e := resource.Event{ Kind: resource.Added, Entry: resource.Entry{ - ID: resource.VersionedKey{Version: "v1", Key: resource.Key{TypeURL: emptyInfo.TypeURL, FullName: fn}}, + ID: resource.VersionedKey{Version: "v1", Key: resource.Key{Collection: emptyInfo.Collection, FullName: fn}}, Item: nil, }, } diff --git a/galley/pkg/server/args.go b/galley/pkg/server/args.go index 2c490ce27a73..775d0b469203 100644 --- a/galley/pkg/server/args.go +++ b/galley/pkg/server/args.go @@ -20,7 +20,6 @@ import ( "time" "istio.io/istio/pkg/ctrlz" - "istio.io/istio/pkg/log" "istio.io/istio/pkg/mcp/creds" ) @@ -58,9 +57,6 @@ type Args struct { // The credential options to use for MCP. CredentialOptions *creds.Options - // The logging options to use - LoggingOptions *log.Options - // The introspection options to use IntrospectionOptions *ctrlz.Options @@ -91,7 +87,6 @@ func DefaultArgs() *Args { APIAddress: "tcp://0.0.0.0:9901", MaxReceivedMessageSize: 1024 * 1024, MaxConcurrentStreams: 1024, - LoggingOptions: log.DefaultOptions(), IntrospectionOptions: ctrlz.DefaultOptions(), Insecure: false, AccessListFile: defaultAccessListFile, @@ -114,7 +109,6 @@ func (a *Args) String() string { fmt.Fprintf(buf, "EnableGrpcTracing: %v\n", a.EnableGRPCTracing) fmt.Fprintf(buf, "MaxReceivedMessageSize: %d\n", a.MaxReceivedMessageSize) fmt.Fprintf(buf, "MaxConcurrentStreams: %d\n", a.MaxConcurrentStreams) - fmt.Fprintf(buf, "LoggingOptions: %#v\n", *a.LoggingOptions) fmt.Fprintf(buf, "IntrospectionOptions: %+v\n", *a.IntrospectionOptions) fmt.Fprintf(buf, "Insecure: %v\n", a.Insecure) fmt.Fprintf(buf, "AccessListFile: %s\n", a.AccessListFile) diff --git a/galley/pkg/server/configmap.go b/galley/pkg/server/configmap.go index 25d94e66b217..cfab5b5c93d9 100644 --- 
a/galley/pkg/server/configmap.go +++ b/galley/pkg/server/configmap.go @@ -18,11 +18,13 @@ import ( "fmt" "io/ioutil" "os" + "time" "github.com/fsnotify/fsnotify" yaml "gopkg.in/yaml.v2" "istio.io/istio/pkg/filewatcher" + "istio.io/istio/pkg/mcp/env" "istio.io/istio/pkg/mcp/server" ) @@ -37,6 +39,16 @@ var ( watchEventHandledProbe func() ) +var ( + // For the purposes of logging rate limiting authz failures, this controls how + // many authz failures are logs as a burst every AUTHZ_FAILURE_LOG_FREQ. + authzFailureLogBurstSize = env.Integer("AUTHZ_FAILURE_LOG_BURST_SIZE", 1) + + // For the purposes of logging rate limiting authz failures, this controls how + // frequently bursts of authz failures are logged. + authzFailureLogFreq = env.Duration("AUTHZ_FAILURE_LOG_FREQ", time.Minute) +) + func watchAccessList(stopCh <-chan struct{}, accessListFile string) (*server.ListAuthChecker, error) { // Do the initial read. list, err := readAccessList(accessListFile) @@ -44,12 +56,15 @@ func watchAccessList(stopCh <-chan struct{}, accessListFile string) (*server.Lis return nil, err } - checker := server.NewListAuthChecker() + options := server.DefaultListAuthCheckerOptions() + options.AuthzFailureLogBurstSize = authzFailureLogBurstSize + options.AuthzFailureLogFreq = authzFailureLogFreq if list.IsBlackList { - checker.SetMode(server.AuthBlackList) + options.AuthMode = server.AuthBlackList } else { - checker.SetMode(server.AuthWhiteList) + options.AuthMode = server.AuthWhiteList } + checker := server.NewListAuthChecker(options) checker.Set(list.Allowed...) 
watcher := newFileWatcher() diff --git a/galley/pkg/server/server.go b/galley/pkg/server/server.go index 2b4b6a2eb207..9f838f0ab90c 100644 --- a/galley/pkg/server/server.go +++ b/galley/pkg/server/server.go @@ -28,20 +28,23 @@ import ( "google.golang.org/grpc" mcp "istio.io/api/mcp/v1alpha1" - "istio.io/istio/galley/cmd/shared" - "istio.io/istio/galley/pkg/fs" - "istio.io/istio/galley/pkg/kube" - "istio.io/istio/galley/pkg/kube/converter" - "istio.io/istio/galley/pkg/kube/source" "istio.io/istio/galley/pkg/meshconfig" "istio.io/istio/galley/pkg/metadata" - kube_meta "istio.io/istio/galley/pkg/metadata/kube" + kubeMeta "istio.io/istio/galley/pkg/metadata/kube" "istio.io/istio/galley/pkg/runtime" + "istio.io/istio/galley/pkg/source/fs" + "istio.io/istio/galley/pkg/source/kube/client" + "istio.io/istio/galley/pkg/source/kube/dynamic" + kubeConverter "istio.io/istio/galley/pkg/source/kube/dynamic/converter" + "istio.io/istio/galley/pkg/source/kube/schema" + "istio.io/istio/galley/pkg/source/kube/schema/check" "istio.io/istio/pkg/ctrlz" "istio.io/istio/pkg/log" "istio.io/istio/pkg/mcp/creds" + "istio.io/istio/pkg/mcp/monitoring" "istio.io/istio/pkg/mcp/server" "istio.io/istio/pkg/mcp/snapshot" + "istio.io/istio/pkg/mcp/source" "istio.io/istio/pkg/probe" "istio.io/istio/pkg/version" ) @@ -54,31 +57,29 @@ type Server struct { grpcServer *grpc.Server processor *runtime.Processor mcp *server.Server - reporter server.Reporter + reporter monitoring.Reporter listener net.Listener controlZ *ctrlz.Server stopCh chan struct{} } type patchTable struct { - logConfigure func(*log.Options) error - newKubeFromConfigFile func(string) (kube.Interfaces, error) - verifyResourceTypesPresence func(kube.Interfaces) error - newSource func(kube.Interfaces, time.Duration, *kube.Schema, *converter.Config) (runtime.Source, error) + newKubeFromConfigFile func(string) (client.Interfaces, error) + verifyResourceTypesPresence func(client.Interfaces) error + newSource func(client.Interfaces, 
time.Duration, *schema.Instance, *kubeConverter.Config) (runtime.Source, error) netListen func(network, address string) (net.Listener, error) newMeshConfigCache func(path string) (meshconfig.Cache, error) - mcpMetricReporter func(string) server.Reporter - fsNew func(string, *kube.Schema, *converter.Config) (runtime.Source, error) + mcpMetricReporter func(string) monitoring.Reporter + fsNew func(string, *schema.Instance, *kubeConverter.Config) (runtime.Source, error) } func defaultPatchTable() patchTable { return patchTable{ - logConfigure: log.Configure, - newKubeFromConfigFile: kube.NewKubeFromConfigFile, - verifyResourceTypesPresence: source.VerifyResourceTypesPresence, - newSource: source.New, + newKubeFromConfigFile: client.NewKubeFromConfigFile, + verifyResourceTypesPresence: check.VerifyResourceTypesPresence, + newSource: dynamic.New, netListen: net.Listen, - mcpMetricReporter: func(prefix string) server.Reporter { return server.NewStatsContext(prefix) }, + mcpMetricReporter: func(prefix string) monitoring.Reporter { return monitoring.NewStatsContext(prefix) }, newMeshConfigCache: func(path string) (meshconfig.Cache, error) { return meshconfig.NewCacheFromFile(path) }, fsNew: fs.New, } @@ -108,23 +109,21 @@ func newServer(a *Args, p patchTable, convertK8SService bool) (*Server, error) { } }() - if err = p.logConfigure(a.LoggingOptions); err != nil { - return nil, err - } - mesh, err := p.newMeshConfigCache(a.MeshConfigFile) if err != nil { return nil, err } - converterCfg := &converter.Config{ + converterCfg := &kubeConverter.Config{ Mesh: mesh, DomainSuffix: a.DomainSuffix, } - specs := kube_meta.Types.All() + specs := kubeMeta.Types.All() if !convertK8SService { - var filtered []kube.ResourceSpec + var filtered []schema.ResourceSpec for _, t := range specs { - if t.Kind != "Service" { + // TODO(nmittler): Temporarily filter Node and Pod until custom sources land. + // Pod yaml cannot be parsed currently. 
See: https://github.com/istio/istio/issues/10891 + if t.Kind != "Service" && t.Kind != "Node" && t.Kind != "Pod" { filtered = append(filtered, t) } } @@ -133,15 +132,15 @@ func newServer(a *Args, p patchTable, convertK8SService bool) (*Server, error) { sort.Slice(specs, func(i, j int) bool { return strings.Compare(specs[i].CanonicalResourceName(), specs[j].CanonicalResourceName()) < 0 }) - sb := kube.NewSchemaBuilder() + sb := schema.NewBuilder() for _, s := range specs { sb.Add(s) } - schema := sb.Build() + kubeSchema := sb.Build() var src runtime.Source if a.ConfigPath != "" { - src, err = p.fsNew(a.ConfigPath, schema, converterCfg) + src, err = p.fsNew(a.ConfigPath, kubeSchema, converterCfg) if err != nil { return nil, err } @@ -155,7 +154,7 @@ func newServer(a *Args, p patchTable, convertK8SService bool) (*Server, error) { return nil, err } } - src, err = p.newSource(k, a.ResyncPeriod, schema, converterCfg) + src, err = p.newSource(k, a.ResyncPeriod, kubeSchema, converterCfg) if err != nil { return nil, err } @@ -173,7 +172,7 @@ func newServer(a *Args, p patchTable, convertK8SService bool) (*Server, error) { grpcOptions = append(grpcOptions, grpc.MaxRecvMsgSize(int(a.MaxReceivedMessageSize))) s.stopCh = make(chan struct{}) - checker := server.NewAllowAllChecker() + var checker server.AuthChecker = server.NewAllowAllChecker() if !a.Insecure { checker, err = watchAccessList(s.stopCh, a.AccessListFile) if err != nil { @@ -192,7 +191,14 @@ func newServer(a *Args, p patchTable, convertK8SService bool) (*Server, error) { s.grpcServer = grpc.NewServer(grpcOptions...) 
s.reporter = p.mcpMetricReporter("galley/") - s.mcp = server.New(distributor, metadata.Types.TypeURLs(), checker, s.reporter) + + options := &source.Options{ + Watcher: distributor, + Reporter: s.reporter, + CollectionsOptions: source.CollectionOptionsFromSlice(metadata.Types.Collections()), + } + + s.mcp = server.New(options, checker) // get the network stuff setup network := "tcp" @@ -270,15 +276,14 @@ func (s *Server) Close() error { } //RunServer start Galley Server mode -func RunServer(sa *Args, printf, fatalf shared.FormatFn, livenessProbeController, +func RunServer(sa *Args, livenessProbeController, readinessProbeController probe.Controller) { - printf("Galley started with\n%s", sa) + log.Infof("Galley started with %s", sa) s, err := New(sa) if err != nil { - fatalf("Unable to initialize Galley Server: %v", err) + log.Fatalf("Unable to initialize Galley Server: %v", err) } - printf("Istio Galley: %s", version.Info) - printf("Starting gRPC server on %v", sa.APIAddress) + log.Infof("Istio Galley: %s\nStarting gRPC server on %v", version.Info, sa.APIAddress) s.Run() if livenessProbeController != nil { serverLivenessProbe := probe.NewProbe() diff --git a/galley/pkg/server/server_test.go b/galley/pkg/server/server_test.go index 20581cb256d0..608fb225a1e5 100644 --- a/galley/pkg/server/server_test.go +++ b/galley/pkg/server/server_test.go @@ -21,14 +21,14 @@ import ( "testing" "time" - "istio.io/istio/galley/pkg/kube" - "istio.io/istio/galley/pkg/kube/converter" "istio.io/istio/galley/pkg/meshconfig" kube_meta "istio.io/istio/galley/pkg/metadata/kube" "istio.io/istio/galley/pkg/runtime" + "istio.io/istio/galley/pkg/source/kube/client" + "istio.io/istio/galley/pkg/source/kube/dynamic/converter" + "istio.io/istio/galley/pkg/source/kube/schema" "istio.io/istio/galley/pkg/testing/mock" - "istio.io/istio/pkg/log" - "istio.io/istio/pkg/mcp/server" + "istio.io/istio/pkg/mcp/monitoring" mcptestmon "istio.io/istio/pkg/mcp/testing/monitoring" ) @@ -38,15 +38,15 @@ loop: for 
i := 0; ; i++ { p := defaultPatchTable() mk := mock.NewKube() - p.newKubeFromConfigFile = func(string) (kube.Interfaces, error) { return mk, nil } - p.newSource = func(kube.Interfaces, time.Duration, *kube.Schema, *converter.Config) (runtime.Source, error) { + p.newKubeFromConfigFile = func(string) (client.Interfaces, error) { return mk, nil } + p.newSource = func(client.Interfaces, time.Duration, *schema.Instance, *converter.Config) (runtime.Source, error) { return runtime.NewInMemorySource(), nil } p.newMeshConfigCache = func(path string) (meshconfig.Cache, error) { return meshconfig.NewInMemory(), nil } - p.fsNew = func(string, *kube.Schema, *converter.Config) (runtime.Source, error) { + p.fsNew = func(string, *schema.Instance, *converter.Config) (runtime.Source, error) { return runtime.NewInMemorySource(), nil } - p.mcpMetricReporter = func(string) server.Reporter { + p.mcpMetricReporter = func(string) monitoring.Reporter { return nil } @@ -58,20 +58,18 @@ loop: switch i { case 0: - p.logConfigure = func(*log.Options) error { return e } + p.newKubeFromConfigFile = func(string) (client.Interfaces, error) { return nil, e } case 1: - p.newKubeFromConfigFile = func(string) (kube.Interfaces, error) { return nil, e } - case 2: - p.newSource = func(kube.Interfaces, time.Duration, *kube.Schema, *converter.Config) (runtime.Source, error) { + p.newSource = func(client.Interfaces, time.Duration, *schema.Instance, *converter.Config) (runtime.Source, error) { return nil, e } - case 3: + case 2: p.netListen = func(network, address string) (net.Listener, error) { return nil, e } - case 4: + case 3: p.newMeshConfigCache = func(path string) (meshconfig.Cache, error) { return nil, e } - case 5: + case 4: args.ConfigPath = "aaa" - p.fsNew = func(string, *kube.Schema, *converter.Config) (runtime.Source, error) { return nil, e } + p.fsNew = func(string, *schema.Instance, *converter.Config) (runtime.Source, error) { return nil, e } default: break loop } @@ -86,18 +84,18 @@ loop: 
func TestNewServer(t *testing.T) { p := defaultPatchTable() mk := mock.NewKube() - p.newKubeFromConfigFile = func(string) (kube.Interfaces, error) { return mk, nil } - p.newSource = func(kube.Interfaces, time.Duration, *kube.Schema, *converter.Config) (runtime.Source, error) { + p.newKubeFromConfigFile = func(string) (client.Interfaces, error) { return mk, nil } + p.newSource = func(client.Interfaces, time.Duration, *schema.Instance, *converter.Config) (runtime.Source, error) { return runtime.NewInMemorySource(), nil } - p.mcpMetricReporter = func(s string) server.Reporter { - return mcptestmon.NewInMemoryServerStatsContext() + p.mcpMetricReporter = func(s string) monitoring.Reporter { + return mcptestmon.NewInMemoryStatsContext() } p.newMeshConfigCache = func(path string) (meshconfig.Cache, error) { return meshconfig.NewInMemory(), nil } - p.fsNew = func(string, *kube.Schema, *converter.Config) (runtime.Source, error) { + p.fsNew = func(string, *schema.Instance, *converter.Config) (runtime.Source, error) { return runtime.NewInMemorySource(), nil } - p.verifyResourceTypesPresence = func(kube.Interfaces) error { + p.verifyResourceTypesPresence = func(client.Interfaces) error { return nil } @@ -134,15 +132,15 @@ func TestNewServer(t *testing.T) { func TestServer_Basic(t *testing.T) { p := defaultPatchTable() mk := mock.NewKube() - p.newKubeFromConfigFile = func(string) (kube.Interfaces, error) { return mk, nil } - p.newSource = func(kube.Interfaces, time.Duration, *kube.Schema, *converter.Config) (runtime.Source, error) { + p.newKubeFromConfigFile = func(string) (client.Interfaces, error) { return mk, nil } + p.newSource = func(client.Interfaces, time.Duration, *schema.Instance, *converter.Config) (runtime.Source, error) { return runtime.NewInMemorySource(), nil } - p.mcpMetricReporter = func(s string) server.Reporter { - return mcptestmon.NewInMemoryServerStatsContext() + p.mcpMetricReporter = func(s string) monitoring.Reporter { + return 
mcptestmon.NewInMemoryStatsContext() } p.newMeshConfigCache = func(path string) (meshconfig.Cache, error) { return meshconfig.NewInMemory(), nil } - p.verifyResourceTypesPresence = func(kube.Interfaces) error { + p.verifyResourceTypesPresence = func(client.Interfaces) error { return nil } diff --git a/galley/pkg/fs/fssource.go b/galley/pkg/source/fs/fssource.go similarity index 92% rename from galley/pkg/fs/fssource.go rename to galley/pkg/source/fs/fssource.go index a112f82638c0..452a4047c39f 100644 --- a/galley/pkg/fs/fssource.go +++ b/galley/pkg/source/fs/fssource.go @@ -17,24 +17,22 @@ package fs import ( "crypto/sha1" "fmt" - "io/ioutil" "os" "path/filepath" "sync" "github.com/howeyc/fsnotify" - "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" - - "istio.io/istio/galley/pkg/kube/converter" - - "istio.io/istio/pkg/log" - "istio.io/istio/galley/pkg/kube" - "istio.io/istio/galley/pkg/kube/source" - kube_meta "istio.io/istio/galley/pkg/metadata/kube" + kubeMeta "istio.io/istio/galley/pkg/metadata/kube" "istio.io/istio/galley/pkg/runtime" "istio.io/istio/galley/pkg/runtime/resource" + "istio.io/istio/galley/pkg/source/kube/dynamic" + "istio.io/istio/galley/pkg/source/kube/dynamic/converter" + "istio.io/istio/galley/pkg/source/kube/schema" + "istio.io/istio/pkg/log" + + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" ) var supportedExtensions = map[string]bool{ @@ -87,7 +85,7 @@ func (s *fsSource) readFiles(root string) map[resource.FullName]*istioResource { } //add watcher for sub folders if info.Mode().IsDir() { - s.watcher.Watch(path) + _ = s.watcher.Watch(path) } return nil }) @@ -214,12 +212,12 @@ func (s *fsSource) initialCheck() { // Stop implements runtime.Source func (s *fsSource) Stop() { close(s.donec) - s.watcher.Close() + _ = s.watcher.Close() } func (s *fsSource) process(eventKind resource.EventKind, key resource.FullName, resourceKind string, r *istioResource) { var u *unstructured.Unstructured - var spec kube.ResourceSpec + var spec 
schema.ResourceSpec var kind string // no need to care about real data when deleting resources if eventKind == resource.Deleted { @@ -229,14 +227,14 @@ func (s *fsSource) process(eventKind resource.EventKind, key resource.FullName, u = r.u kind = r.u.GetKind() } - for _, v := range kube_meta.Types.All() { + for _, v := range kubeMeta.Types.All() { if v.Kind == kind { spec = v break } } - source.ProcessEvent(s.config, spec, eventKind, key, fmt.Sprintf("v%d", s.version), u, s.ch) + dynamic.ProcessEvent(s.config, spec, eventKind, key, fmt.Sprintf("v%d", s.version), u, s.ch) } // Start implements runtime.Source @@ -247,7 +245,7 @@ func (s *fsSource) Start() (chan resource.Event, error) { return nil, err } s.watcher = watcher - s.watcher.Watch(s.root) + _ = s.watcher.Watch(s.root) s.initialCheck() go func() { for { @@ -266,7 +264,7 @@ func (s *fsSource) Start() (chan resource.Event, error) { } else { if fi.Mode().IsDir() { scope.Debugf("add watcher for new folder %s", ev.Name) - s.watcher.Watch(ev.Name) + _ = s.watcher.Watch(ev.Name) } else { newData := s.readFile(ev.Name, fi, true) if newData != nil && len(newData) != 0 { @@ -280,7 +278,7 @@ func (s *fsSource) Start() (chan resource.Event, error) { scope.Warnf("error occurs for watching %s", ev.Name) } else { if fi.Mode().IsDir() { - s.watcher.RemoveWatch(ev.Name) + _ = s.watcher.RemoveWatch(ev.Name) } else { newData := s.readFile(ev.Name, fi, false) if newData != nil && len(newData) != 0 { @@ -301,7 +299,7 @@ func (s *fsSource) Start() (chan resource.Event, error) { } // New returns a File System implementation of runtime.Source. 
-func New(root string, schema *kube.Schema, config *converter.Config) (runtime.Source, error) { +func New(root string, schema *schema.Instance, config *converter.Config) (runtime.Source, error) { fs := &fsSource{ config: config, root: root, diff --git a/galley/pkg/fs/fssource_test.go b/galley/pkg/source/fs/fssource_test.go similarity index 91% rename from galley/pkg/fs/fssource_test.go rename to galley/pkg/source/fs/fssource_test.go index 516efd3fe65e..ace5c033a4c5 100644 --- a/galley/pkg/fs/fssource_test.go +++ b/galley/pkg/source/fs/fssource_test.go @@ -23,11 +23,11 @@ import ( "testing" "time" - "istio.io/istio/galley/pkg/kube/converter" "istio.io/istio/galley/pkg/meshconfig" - kube_meta "istio.io/istio/galley/pkg/metadata/kube" + kubeMeta "istio.io/istio/galley/pkg/metadata/kube" "istio.io/istio/galley/pkg/runtime" "istio.io/istio/galley/pkg/runtime/resource" + "istio.io/istio/galley/pkg/source/kube/dynamic/converter" sn "istio.io/istio/pkg/mcp/snapshot" ) @@ -133,6 +133,8 @@ type scenario struct { } var checkResult = func(ch chan resource.Event, expected string, t *testing.T, expectedSequence int) { + t.Helper() + log := logChannelOutput(ch, expectedSequence) if log != expected { t.Fatalf("Event mismatch:\nActual:\n%s\nExpected:\n%s\n", log, expected) @@ -140,6 +142,8 @@ var checkResult = func(ch chan resource.Event, expected string, t *testing.T, ex } func (fst *fsTestSourceState) testSetup(t *testing.T) { + t.Helper() + var err error fst.rootPath, err = ioutil.TempDir("", "configPath") @@ -195,7 +199,7 @@ func TestFsSource(t *testing.T) { initFileName: "virtual_service.yml", expectedSequence: 2, expectedResult: strings.TrimSpace(` - [Event](Added: [VKey](type.googleapis.com/istio.networking.v1alpha3.VirtualService:route-for-myapp @v0))`), + [Event](Added: [VKey](istio/networking/v1alpha3/virtualservices:route-for-myapp @v0))`), fileAction: func(_ chan resource.Event, _ runtime.Source) {}, checkResult: checkResult}, "FsSource_AddFile": { @@ -203,7 +207,7 @@ 
func TestFsSource(t *testing.T) { initFileName: "", expectedSequence: 2, expectedResult: strings.TrimSpace(` - [Event](Added: [VKey](type.googleapis.com/istio.networking.v1alpha3.VirtualService:route-for-myapp @v1))`), + [Event](Added: [VKey](istio/networking/v1alpha3/virtualservices:route-for-myapp @v1))`), fileAction: func(_ chan resource.Event, _ runtime.Source) { fst.configFiles["virtual_service.yml"] = []byte(virtualServiceYAML) err := fst.writeFile() @@ -218,7 +222,7 @@ func TestFsSource(t *testing.T) { initFileName: "virtual_service.yml", expectedSequence: 3, expectedResult: strings.TrimSpace(` - [Event](Deleted: [VKey](type.googleapis.com/istio.networking.v1alpha3.VirtualService:route-for-myapp @v0))`), + [Event](Deleted: [VKey](istio/networking/v1alpha3/virtualservices:route-for-myapp @v0))`), fileAction: func(_ chan resource.Event, _ runtime.Source) { err := fst.deleteFile() if err != nil { @@ -232,7 +236,7 @@ func TestFsSource(t *testing.T) { initFileName: "virtual_service.yml", expectedSequence: 4, expectedResult: strings.TrimSpace(` - [Event](Added: [VKey](type.googleapis.com/istio.networking.v1alpha3.VirtualService:route-for-myapp-changed @v1))`), + [Event](Added: [VKey](istio/networking/v1alpha3/virtualservices:route-for-myapp-changed @v1))`), fileAction: func(_ chan resource.Event, _ runtime.Source) { err := ioutil.WriteFile(filepath.Join(fst.rootPath, "virtual_service.yml"), []byte(virtualServiceChangedYAML), 0600) if err != nil { @@ -251,7 +255,7 @@ func TestFsSource(t *testing.T) { t.Fatalf("Unexpected error: %v", err) } donec := make(chan bool) - expected := "[Event](Deleted: [VKey](type.googleapis.com/istio.policy.v1beta1.Rule:some.mixer.rule @v0))" + expected := "[Event](Deleted: [VKey](istio/policy/v1beta1/rules:some.mixer.rule @v0))" go checkEventOccurs(expected, ch, donec) select { case <-time.After(5 * time.Second): @@ -310,7 +314,7 @@ func runTestCode(t *testing.T, test scenario) { } fst.testSetup(t) defer fst.testTeardown(t) - s, err := 
New(fst.rootPath, kube_meta.Types, &converter.Config{Mesh: meshconfig.NewInMemory()}) + s, err := New(fst.rootPath, kubeMeta.Types, &converter.Config{Mesh: meshconfig.NewInMemory()}) if err != nil { t.Fatalf("Unexpected error found: %v", err) } diff --git a/galley/pkg/fs/fsutilities.go b/galley/pkg/source/fs/fsutilities.go similarity index 100% rename from galley/pkg/fs/fsutilities.go rename to galley/pkg/source/fs/fsutilities.go diff --git a/galley/pkg/kube/interfaces.go b/galley/pkg/source/kube/client/interfaces.go similarity index 99% rename from galley/pkg/kube/interfaces.go rename to galley/pkg/source/kube/client/interfaces.go index 3bb630b8a243..1a926d30254a 100644 --- a/galley/pkg/kube/interfaces.go +++ b/galley/pkg/source/kube/client/interfaces.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. -package kube +package client import ( "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset" diff --git a/galley/pkg/kube/interfaces_test.go b/galley/pkg/source/kube/client/interfaces_test.go similarity index 98% rename from galley/pkg/kube/interfaces_test.go rename to galley/pkg/source/kube/client/interfaces_test.go index 268580f189bc..bab458fe67de 100644 --- a/galley/pkg/kube/interfaces_test.go +++ b/galley/pkg/source/kube/client/interfaces_test.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. 
-package kube +package client import ( "testing" diff --git a/galley/pkg/kube/converter/config.go b/galley/pkg/source/kube/dynamic/converter/config.go similarity index 100% rename from galley/pkg/kube/converter/config.go rename to galley/pkg/source/kube/dynamic/converter/config.go diff --git a/galley/pkg/kube/converter/converter.go b/galley/pkg/source/kube/dynamic/converter/converter.go similarity index 89% rename from galley/pkg/kube/converter/converter.go rename to galley/pkg/source/kube/dynamic/converter/converter.go index 56fdd89d1354..77e58db1eef0 100644 --- a/galley/pkg/kube/converter/converter.go +++ b/galley/pkg/source/kube/dynamic/converter/converter.go @@ -20,7 +20,6 @@ import ( "time" "github.com/gogo/protobuf/proto" - "github.com/gogo/protobuf/types" corev1 "k8s.io/api/core/v1" extensions "k8s.io/api/extensions/v1beta1" "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" @@ -28,7 +27,6 @@ import ( authn "istio.io/api/authentication/v1alpha1" meshconfig "istio.io/api/mesh/v1alpha1" networking "istio.io/api/networking/v1alpha3" - "istio.io/istio/galley/pkg/kube/converter/legacy" "istio.io/istio/galley/pkg/runtime/resource" "istio.io/istio/pilot/pkg/serviceregistry/kube" "istio.io/istio/pkg/log" @@ -42,6 +40,7 @@ type Fn func(cfg *Config, destination resource.Info, name resource.FullName, kin // Entry is a single converted entry. type Entry struct { Key resource.FullName + Metadata resource.Metadata CreationTime time.Time Resource proto.Message } @@ -53,7 +52,6 @@ var converters = func() map[string]Fn { // is acceptable to the lint check-in gate, and nolint annotations do not work. 
m["identity"] = identity m["nil"] = nilConverter - m["legacy-mixer-resource"] = legacyMixerResource m["auth-policy-resource"] = authPolicyResource m["kube-ingress-resource"] = kubeIngressResource m["kube-service-resource"] = kubeServiceResource @@ -84,17 +82,21 @@ func convertJSON(from, to interface{}) error { func identity(_ *Config, destination resource.Info, name resource.FullName, _ string, u *unstructured.Unstructured) ([]Entry, error) { var p proto.Message creationTime := time.Time{} + var metadata resource.Metadata if u != nil { var err error if p, err = toProto(destination, u.Object["spec"]); err != nil { return nil, err } creationTime = u.GetCreationTimestamp().Time + metadata.Labels = u.GetLabels() + metadata.Annotations = u.GetAnnotations() } e := Entry{ Key: name, CreationTime: creationTime, + Metadata: metadata, Resource: p, } @@ -105,49 +107,22 @@ func nilConverter(_ *Config, _ resource.Info, _ resource.FullName, _ string, _ * return nil, nil } -func legacyMixerResource(_ *Config, _ resource.Info, name resource.FullName, kind string, u *unstructured.Unstructured) ([]Entry, error) { - s := &types.Struct{} - creationTime := time.Time{} - var res *legacy.LegacyMixerResource - - if u != nil { - spec := u.Object["spec"] - if err := toproto(s, spec); err != nil { - return nil, err - } - creationTime = u.GetCreationTimestamp().Time - res = &legacy.LegacyMixerResource{ - Name: name.String(), - Kind: kind, - Contents: s, - } - - } - - newName := resource.FullNameFromNamespaceAndName(kind, name.String()) - - e := Entry{ - Key: newName, - CreationTime: creationTime, - Resource: res, - } - - return []Entry{e}, nil -} - func authPolicyResource(_ *Config, destination resource.Info, name resource.FullName, _ string, u *unstructured.Unstructured) ([]Entry, error) { var p proto.Message creationTime := time.Time{} + var metadata resource.Metadata if u != nil { var err error if p, err = toProto(destination, u.Object["spec"]); err != nil { return nil, err } creationTime 
= u.GetCreationTimestamp().Time + metadata.Labels = u.GetLabels() + metadata.Annotations = u.GetAnnotations() policy, ok := p.(*authn.Policy) if !ok { - return nil, fmt.Errorf("object is not of type %v", destination.TypeURL) + return nil, fmt.Errorf("object is not of type %v", destination.Collection) } // The pilot authentication plugin's config handling allows the mtls @@ -186,6 +161,7 @@ func authPolicyResource(_ *Config, destination resource.Info, name resource.Full e := Entry{ Key: name, CreationTime: creationTime, + Metadata: metadata, Resource: p, } @@ -194,6 +170,7 @@ func authPolicyResource(_ *Config, destination resource.Info, name resource.Full func kubeIngressResource(cfg *Config, _ resource.Info, name resource.FullName, _ string, u *unstructured.Unstructured) ([]Entry, error) { creationTime := time.Time{} + var metadata resource.Metadata var p *extensions.IngressSpec if u != nil { ing := &extensions.Ingress{} @@ -202,6 +179,8 @@ func kubeIngressResource(cfg *Config, _ resource.Info, name resource.FullName, _ } creationTime = u.GetCreationTimestamp().Time + metadata.Labels = u.GetLabels() + metadata.Annotations = u.GetAnnotations() if !shouldProcessIngress(cfg, ing) { return nil, nil @@ -213,6 +192,7 @@ func kubeIngressResource(cfg *Config, _ resource.Info, name resource.FullName, _ e := Entry{ Key: name, CreationTime: creationTime, + Metadata: metadata, Resource: p, } @@ -240,7 +220,11 @@ func kubeServiceResource(cfg *Config, _ resource.Info, name resource.FullName, _ return []Entry{{ Key: name, CreationTime: service.CreationTimestamp.Time, - Resource: &se, + Metadata: resource.Metadata{ + Labels: service.Labels, + Annotations: service.Annotations, + }, + Resource: &se, }}, nil } diff --git a/galley/pkg/kube/converter/converter_test.go b/galley/pkg/source/kube/dynamic/converter/converter_test.go similarity index 63% rename from galley/pkg/kube/converter/converter_test.go rename to galley/pkg/source/kube/dynamic/converter/converter_test.go index 
f9c64358343e..74eb85466320 100644 --- a/galley/pkg/kube/converter/converter_test.go +++ b/galley/pkg/source/kube/dynamic/converter/converter_test.go @@ -32,7 +32,6 @@ import ( authn "istio.io/api/authentication/v1alpha1" meshcfg "istio.io/api/mesh/v1alpha1" networking "istio.io/api/networking/v1alpha3" - "istio.io/istio/galley/pkg/kube/converter/legacy" "istio.io/istio/galley/pkg/meshconfig" "istio.io/istio/galley/pkg/runtime/resource" "istio.io/istio/pilot/pkg/model" @@ -71,15 +70,23 @@ func TestNilConverter(t *testing.T) { func TestIdentity(t *testing.T) { b := resource.NewSchemaBuilder() - b.Register("type.googleapis.com/google.protobuf.Struct") + b.Register("foo", "type.googleapis.com/google.protobuf.Struct") s := b.Build() - info := s.Get("type.googleapis.com/google.protobuf.Struct") + info := s.Get("foo") u := &unstructured.Unstructured{ Object: map[string]interface{}{ "metadata": map[string]interface{}{ "creationTimestamp": fakeCreateTime.Format(time.RFC3339), + "annotations": map[string]interface{}{ + "a1_key": "a1_value", + "a2_key": "a2_value", + }, + "labels": map[string]interface{}{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, }, "spec": map[string]interface{}{ "foo": "bar", @@ -107,16 +114,27 @@ func TestIdentity(t *testing.T) { entries[0].CreationTime, fakeCreateTime) } - actual, ok := entries[0].Resource.(*types.Struct) - if !ok { - t.Fatalf("Unable to convert to struct: %v", entries[0].Resource) - } + actual := entries[0] - expected := &types.Struct{ - Fields: map[string]*types.Value{ - "foo": { - Kind: &types.Value_StringValue{ - StringValue: "bar", + expected := Entry{ + Key: key, + CreationTime: fakeCreateTime.Local(), + Metadata: resource.Metadata{ + Annotations: map[string]string{ + "a1_key": "a1_value", + "a2_key": "a2_value", + }, + Labels: map[string]string{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, + }, + Resource: &types.Struct{ + Fields: map[string]*types.Value{ + "foo": { + Kind: &types.Value_StringValue{ + 
StringValue: "bar", + }, }, }, }, @@ -129,10 +147,10 @@ func TestIdentity(t *testing.T) { func TestIdentity_Error(t *testing.T) { b := resource.NewSchemaBuilder() - b.Register("type.googleapis.com/google.protobuf.Empty") + b.Register("foo", "type.googleapis.com/google.protobuf.Empty") s := b.Build() - info := s.Get("type.googleapis.com/google.protobuf.Empty") + info := s.Get("foo") u := &unstructured.Unstructured{ Object: map[string]interface{}{ @@ -153,10 +171,10 @@ func TestIdentity_Error(t *testing.T) { func TestIdentity_NilResource(t *testing.T) { b := resource.NewSchemaBuilder() - b.Register("type.googleapis.com/google.protobuf.Struct") + b.Register("foo", "type.googleapis.com/google.protobuf.Struct") s := b.Build() - info := s.Get("type.googleapis.com/google.protobuf.Struct") + info := s.Get("foo") key := resource.FullNameFromNamespaceAndName("foo", "Key") @@ -174,128 +192,20 @@ func TestIdentity_NilResource(t *testing.T) { } } -func TestLegacyMixerResource(t *testing.T) { - b := resource.NewSchemaBuilder() - b.Register("type.googleapis.com/google.protobuf.Struct") - s := b.Build() - - info := s.Get("type.googleapis.com/google.protobuf.Struct") - - u := &unstructured.Unstructured{ - Object: map[string]interface{}{ - "kind": "k1", - "metadata": map[string]interface{}{ - "creationTimestamp": fakeCreateTime.Format(time.RFC3339), - }, - "spec": map[string]interface{}{ - "foo": "bar", - }, - }, - } - - key := resource.FullNameFromNamespaceAndName("", "Key") - - entries, err := legacyMixerResource(nil, info, key, "k1", u) - if err != nil { - t.Fatalf("Unexpected error: %v", err) - } - - if len(entries) != 1 { - t.Fatalf("Expected one entry: %v", entries) - } - - expectedKey := "k1/" + key.String() - if entries[0].Key.String() != expectedKey { - t.Fatalf("Keys mismatch. 
Wanted=%s, Got=%s", expectedKey, entries[0].Key) - } - - if !entries[0].CreationTime.Equal(fakeCreateTime) { - t.Fatalf("createTime mismatch: got %q want %q", - entries[0].CreationTime, fakeCreateTime) - } - - actual, ok := entries[0].Resource.(*legacy.LegacyMixerResource) - if !ok { - t.Fatalf("Unable to convert to legacy: %v", entries[0].Resource) - } - - expected := &legacy.LegacyMixerResource{ - Name: "Key", - Kind: "k1", - Contents: &types.Struct{ - Fields: map[string]*types.Value{ - "foo": { - Kind: &types.Value_StringValue{ - StringValue: "bar", - }, - }, - }, - }, - } - - if !reflect.DeepEqual(actual, expected) { - t.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", actual, expected) - } -} - -func TestLegacyMixerResource_NilResource(t *testing.T) { - b := resource.NewSchemaBuilder() - b.Register("type.googleapis.com/google.protobuf.Struct") - s := b.Build() - - info := s.Get("type.googleapis.com/google.protobuf.Struct") - - key := resource.FullNameFromNamespaceAndName("ns1", "Key") - - entries, err := legacyMixerResource(nil, info, key, "k1", nil) - if err != nil { - t.Fatalf("Unexpected error: %v", err) - } - - if len(entries) != 1 { - t.Fatalf("Expected one entry: %v", entries) - } - - expectedKey := "k1/" + key.String() - if entries[0].Key.String() != expectedKey { - t.Fatalf("Keys mismatch. 
Wanted=%s, Got=%s", expectedKey, entries[0].Key) - } -} - -func TestLegacyMixerResource_Error(t *testing.T) { - b := resource.NewSchemaBuilder() - b.Register("type.googleapis.com/google.protobuf.Any") - s := b.Build() - - info := s.Get("type.googleapis.com/google.protobuf.Any") - - u := &unstructured.Unstructured{ - Object: map[string]interface{}{ - "kind": "k1", - "spec": 23, - }, - } - - key := resource.FullNameFromNamespaceAndName("", "Key") - - _, err := legacyMixerResource(nil, info, key, "", u) - if err == nil { - t.Fatalf("expected error not found") - } -} - func TestAuthPolicyResource(t *testing.T) { typeURL := fmt.Sprintf("type.googleapis.com/" + proto.MessageName((*authn.Policy)(nil))) + collection := "test/collection/authpolicy" + b := resource.NewSchemaBuilder() - b.Register(typeURL) + b.Register(collection, typeURL) s := b.Build() - info := s.Get(typeURL) + info := s.Get(collection) cases := []struct { - name string - in *unstructured.Unstructured - wantProto *authn.Policy + name string + in *unstructured.Unstructured + want Entry }{ { name: "no-op", @@ -306,6 +216,14 @@ func TestAuthPolicyResource(t *testing.T) { "creationTimestamp": fakeCreateTime.Format(time.RFC3339), "name": "foo", "namespace": "default", + "annotations": map[string]interface{}{ + "a1_key": "a1_value", + "a2_key": "a2_value", + }, + "labels": map[string]interface{}{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, }, "spec": map[string]interface{}{ "targets": []interface{}{ @@ -321,13 +239,27 @@ func TestAuthPolicyResource(t *testing.T) { }, }, }, - wantProto: &authn.Policy{ - Targets: []*authn.TargetSelector{{ - Name: "foo", - }}, - Peers: []*authn.PeerAuthenticationMethod{{ - &authn.PeerAuthenticationMethod_Mtls{Mtls: &authn.MutualTls{}}, - }}, + want: Entry{ + Key: resource.FullNameFromNamespaceAndName("default", "foo"), + CreationTime: fakeCreateTime.Local(), + Metadata: resource.Metadata{ + Annotations: map[string]string{ + "a1_key": "a1_value", + "a2_key": "a2_value", + 
}, + Labels: map[string]string{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, + }, + Resource: &authn.Policy{ + Targets: []*authn.TargetSelector{{ + Name: "foo", + }}, + Peers: []*authn.PeerAuthenticationMethod{{ + &authn.PeerAuthenticationMethod_Mtls{Mtls: &authn.MutualTls{}}, + }}, + }, }, }, { @@ -339,6 +271,14 @@ func TestAuthPolicyResource(t *testing.T) { "creationTimestamp": fakeCreateTime.Format(time.RFC3339), "name": "foo", "namespace": "default", + "annotations": map[string]interface{}{ + "a1_key": "a1_value", + "a2_key": "a2_value", + }, + "labels": map[string]interface{}{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, }, "spec": map[string]interface{}{ "targets": []interface{}{ @@ -354,19 +294,36 @@ func TestAuthPolicyResource(t *testing.T) { }, }, }, - wantProto: &authn.Policy{ - Targets: []*authn.TargetSelector{{ - Name: "foo", - }}, - Peers: []*authn.PeerAuthenticationMethod{{ - &authn.PeerAuthenticationMethod_Mtls{Mtls: &authn.MutualTls{}}, - }}, + want: Entry{ + Key: resource.FullNameFromNamespaceAndName("default", "foo"), + CreationTime: fakeCreateTime.Local(), + Metadata: resource.Metadata{ + Annotations: map[string]string{ + "a1_key": "a1_value", + "a2_key": "a2_value", + }, + Labels: map[string]string{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, + }, + Resource: &authn.Policy{ + Targets: []*authn.TargetSelector{{ + Name: "foo", + }}, + Peers: []*authn.PeerAuthenticationMethod{{ + &authn.PeerAuthenticationMethod_Mtls{Mtls: &authn.MutualTls{}}, + }}, + }, }, }, { - name: "nil resource", - in: nil, - wantProto: nil, + name: "nil resource", + in: nil, + want: Entry{ + Key: resource.FullNameFromNamespaceAndName("ns1", "res1"), + Resource: nil, + }, }, } @@ -385,30 +342,10 @@ func TestAuthPolicyResource(t *testing.T) { tt.Fatalf("Expected one entry: %v", entries) } - gotKey := entries[0].Key - createTime := entries[0].CreationTime - pb := entries[0].Resource + got := entries[0] - if entries[0].Key != wantKey { - tt.Fatalf("Keys 
mismatch. got=%s, want=%s", gotKey, wantKey) - } - - if c.in == nil { - return - } - - if !createTime.Equal(fakeCreateTime) { - tt.Fatalf("createTime mismatch: got %q want %q", - createTime, fakeCreateTime) - } - - gotProto, ok := pb.(*authn.Policy) - if !ok { - tt.Fatalf("Unable to convert to authn.Policy: %v", pb) - } - - if !reflect.DeepEqual(gotProto, c.wantProto) { - tt.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", gotProto, c.wantProto) + if !reflect.DeepEqual(got, c.want) { + tt.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", got, c.want) } }) } @@ -416,11 +353,13 @@ func TestAuthPolicyResource(t *testing.T) { func TestKubeIngressResource(t *testing.T) { typeURL := fmt.Sprintf("type.googleapis.com/" + proto.MessageName((*extensions.IngressSpec)(nil))) + collection := "test/collection/ingress" + b := resource.NewSchemaBuilder() - b.Register(typeURL) + b.Register(collection, typeURL) s := b.Build() - info := s.Get(typeURL) + info := s.Get(collection) meshCfgOff := meshconfig.NewInMemory() meshCfgStrict := meshconfig.NewInMemory() @@ -434,14 +373,17 @@ func TestKubeIngressResource(t *testing.T) { IngressControllerMode: meshcfg.MeshConfig_DEFAULT, }) + var nilIngress *extensions.IngressSpec cases := []struct { - name string - in *unstructured.Unstructured - wantProto *extensions.IngressSpec - cfg *Config + name string + shouldConvert bool + in *unstructured.Unstructured + want Entry + cfg *Config }{ { - name: "no-conversion", + name: "no-conversion", + shouldConvert: false, in: &unstructured.Unstructured{ Object: map[string]interface{}{ "kind": "Ingress", @@ -449,6 +391,14 @@ func TestKubeIngressResource(t *testing.T) { "creationTimestamp": fakeCreateTime.Format(time.RFC3339), "name": "foo", "namespace": "default", + "annotations": map[string]interface{}{ + "a1_key": "a1_value", + "a2_key": "a2_value", + }, + "labels": map[string]interface{}{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, }, "spec": map[string]interface{}{ "backend": map[string]interface{}{ @@ 
-462,10 +412,25 @@ func TestKubeIngressResource(t *testing.T) { Mesh: meshCfgOff, }, - wantProto: nil, + want: Entry{ + Key: resource.FullNameFromNamespaceAndName("default", "foo"), + CreationTime: fakeCreateTime.Local(), + Metadata: resource.Metadata{ + Annotations: map[string]string{ + "a1_key": "a1_value", + "a2_key": "a2_value", + }, + Labels: map[string]string{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, + }, + Resource: nil, + }, }, { - name: "strict", + name: "strict", + shouldConvert: true, in: &unstructured.Unstructured{ Object: map[string]interface{}{ "kind": "Ingress", @@ -476,6 +441,10 @@ func TestKubeIngressResource(t *testing.T) { "annotations": map[string]interface{}{ "kubernetes.io/ingress.class": "cls", }, + "labels": map[string]interface{}{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, }, "spec": map[string]interface{}{ "backend": map[string]interface{}{ @@ -489,21 +458,38 @@ func TestKubeIngressResource(t *testing.T) { Mesh: meshCfgStrict, }, - wantProto: &extensions.IngressSpec{ - Backend: &extensions.IngressBackend{ - ServiceName: "testsvc", - ServicePort: intstr.IntOrString{Type: intstr.String, StrVal: "80"}, + want: Entry{ + Key: resource.FullNameFromNamespaceAndName("default", "foo"), + CreationTime: fakeCreateTime.Local(), + Metadata: resource.Metadata{ + Annotations: map[string]string{ + "kubernetes.io/ingress.class": "cls", + }, + Labels: map[string]string{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, + }, + Resource: &extensions.IngressSpec{ + Backend: &extensions.IngressBackend{ + ServiceName: "testsvc", + ServicePort: intstr.IntOrString{Type: intstr.String, StrVal: "80"}, + }, }, }, }, { - name: "nil", - in: nil, + name: "nil", + shouldConvert: true, + in: nil, cfg: &Config{ Mesh: meshCfgDefault, }, - wantProto: &extensions.IngressSpec{}, + want: Entry{ + Key: resource.FullNameFromNamespaceAndName("ns1", "res1"), + Resource: nilIngress, + }, }, } @@ -518,8 +504,7 @@ func TestKubeIngressResource(t *testing.T) { 
tt.Fatalf("Unexpected error: %v", err) } - if c.wantProto == nil { - + if !c.shouldConvert { if len(entries) != 0 { tt.Fatalf("Expected zero entries: %v", entries) } @@ -530,30 +515,10 @@ func TestKubeIngressResource(t *testing.T) { tt.Fatalf("Expected one entry: %v", entries) } - gotKey := entries[0].Key - createTime := entries[0].CreationTime - pb := entries[0].Resource + got := entries[0] - if entries[0].Key != wantKey { - tt.Fatalf("Keys mismatch. got=%s, want=%s", gotKey, wantKey) - } - - if c.in == nil { - return - } - - if !createTime.Equal(fakeCreateTime) { - tt.Fatalf("createTime mismatch: got %q want %q", - createTime, fakeCreateTime) - } - - gotProto, ok := pb.(*extensions.IngressSpec) - if !ok { - tt.Fatalf("Unable to convert to ingress spec: %v", pb) - } - - if !reflect.DeepEqual(gotProto, c.wantProto) { - tt.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", gotProto, c.wantProto) + if !reflect.DeepEqual(got.Resource, c.want.Resource) { + tt.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", got.Resource, c.want.Resource) } }) } @@ -612,7 +577,7 @@ func TestKubeServiceResource(t *testing.T) { cases := []struct { name string from corev1.Service - want proto.Message + want Entry }{ { name: "Simple", @@ -621,6 +586,14 @@ func TestKubeServiceResource(t *testing.T) { Name: "reviews", Namespace: "default", CreationTimestamp: meta_v1.Time{Time: fakeCreateTime}, + Annotations: map[string]string{ + "a1_key": "a1_value", + "a2_key": "a2_value", + }, + Labels: map[string]string{ + "l1_key": "l1_value", + "l2_key": "l2_value", + }, }, Spec: corev1.ServiceSpec{ ClusterIP: "10.39.241.161", @@ -646,26 +619,40 @@ func TestKubeServiceResource(t *testing.T) { }, }, }, - want: &networking.ServiceEntry{ - Hosts: []string{"reviews.default.svc.cluster.local"}, - Addresses: []string{"10.39.241.161"}, - Resolution: networking.ServiceEntry_STATIC, - Location: networking.ServiceEntry_MESH_INTERNAL, - Ports: []*networking.Port{ - { - Name: "http", - Number: 9080, - Protocol: "HTTP", + 
want: Entry{ + Key: resource.FullNameFromNamespaceAndName("default", "reviews"), + CreationTime: fakeCreateTime.Local(), + Metadata: resource.Metadata{ + Annotations: map[string]string{ + "a1_key": "a1_value", + "a2_key": "a2_value", }, - { - Name: "https-web", - Number: 9081, - Protocol: "HTTPS", + Labels: map[string]string{ + "l1_key": "l1_value", + "l2_key": "l2_value", }, - { - Name: "ssh", - Number: 9082, - Protocol: "TCP", + }, + Resource: &networking.ServiceEntry{ + Hosts: []string{"reviews.default.svc.cluster.local"}, + Addresses: []string{"10.39.241.161"}, + Resolution: networking.ServiceEntry_STATIC, + Location: networking.ServiceEntry_MESH_INTERNAL, + Ports: []*networking.Port{ + { + Name: "http", + Number: 9080, + Protocol: "HTTP", + }, + { + Name: "https-web", + Number: 9081, + Protocol: "HTTPS", + }, + { + Name: "ssh", + Number: 9082, + Protocol: "TCP", + }, }, }, }, @@ -679,17 +666,18 @@ func TestKubeServiceResource(t *testing.T) { if err := convertJSON(&c.from, &u.Object); err != nil { t.Fatalf("Internal test error: %v", err) } - want := []Entry{{ - Key: resource.FullNameFromNamespaceAndName(c.from.Namespace, c.from.Name), - CreationTime: fakeCreateTime.Local(), - Resource: c.want, - }} - got, err := kubeServiceResource(&Config{DomainSuffix: "cluster.local"}, resource.Info{}, want[0].Key, "kind", &u) + entries, err := kubeServiceResource(&Config{DomainSuffix: "cluster.local"}, resource.Info{}, c.want.Key, "kind", &u) if err != nil { - t.Fatalf("kubeServiceResource: %v", err) + t.Fatalf("Unexpected error: %v", err) + } + if len(entries) != 1 { + t.Fatalf("Expected one entry: %v", entries) } - if !reflect.DeepEqual(got, want) { - t.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", got, want) + + got := entries[0] + + if !reflect.DeepEqual(got, c.want) { + t.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", got, c.want) } }) } diff --git a/galley/pkg/kube/converter/proto.go b/galley/pkg/source/kube/dynamic/converter/proto.go similarity index 100% rename from 
galley/pkg/kube/converter/proto.go rename to galley/pkg/source/kube/dynamic/converter/proto.go diff --git a/galley/pkg/kube/converter/proto_test.go b/galley/pkg/source/kube/dynamic/converter/proto_test.go similarity index 85% rename from galley/pkg/kube/converter/proto_test.go rename to galley/pkg/source/kube/dynamic/converter/proto_test.go index 93ad5fd7735d..da8914ac709b 100644 --- a/galley/pkg/kube/converter/proto_test.go +++ b/galley/pkg/source/kube/dynamic/converter/proto_test.go @@ -27,9 +27,9 @@ func TestToProto_Success(t *testing.T) { spec := map[string]interface{}{} b := resource.NewSchemaBuilder() - b.Register("type.googleapis.com/google.protobuf.Empty") + b.Register("foo", "type.googleapis.com/google.protobuf.Empty") s := b.Build() - i := s.Get("type.googleapis.com/google.protobuf.Empty") + i := s.Get("foo") p, err := toProto(i, spec) if err != nil { @@ -48,9 +48,9 @@ func TestToProto_Error(t *testing.T) { } b := resource.NewSchemaBuilder() - b.Register("type.googleapis.com/google.protobuf.Any") + b.Register("foo", "type.googleapis.com/google.protobuf.Any") s := b.Build() - i := s.Get("type.googleapis.com/google.protobuf.Any") + i := s.Get("foo") _, err := toProto(i, spec) if err == nil { diff --git a/galley/pkg/kube/source/listener.go b/galley/pkg/source/kube/dynamic/listener.go similarity index 76% rename from galley/pkg/kube/source/listener.go rename to galley/pkg/source/kube/dynamic/listener.go index 48faea3f20d5..65e8992009da 100644 --- a/galley/pkg/kube/source/listener.go +++ b/galley/pkg/source/kube/dynamic/listener.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. 
-package source +package dynamic import ( "fmt" @@ -20,16 +20,19 @@ import ( "sync" "time" + "istio.io/istio/galley/pkg/runtime/resource" + "istio.io/istio/galley/pkg/source/kube/client" + "istio.io/istio/galley/pkg/source/kube/log" + "istio.io/istio/galley/pkg/source/kube/schema" + "istio.io/istio/galley/pkg/source/kube/stats" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" "k8s.io/apimachinery/pkg/runtime" - "k8s.io/apimachinery/pkg/runtime/schema" + runtimeSchema "k8s.io/apimachinery/pkg/runtime/schema" "k8s.io/apimachinery/pkg/watch" "k8s.io/client-go/dynamic" "k8s.io/client-go/tools/cache" - - "istio.io/istio/galley/pkg/kube" - "istio.io/istio/galley/pkg/runtime/resource" ) // processorFn is a callback function that will receive change events back from listener. @@ -41,7 +44,7 @@ type listener struct { // Lock for changing the running state of the listener stateLock sync.Mutex - spec kube.ResourceSpec + spec schema.ResourceSpec resyncPeriod time.Duration @@ -60,19 +63,19 @@ type listener struct { // newListener returns a new instance of an listener. 
func newListener( - kubeInterface kube.Interfaces, resyncPeriod time.Duration, spec kube.ResourceSpec, processor processorFn) (*listener, error) { + kubeInterface client.Interfaces, resyncPeriod time.Duration, spec schema.ResourceSpec, processor processorFn) (*listener, error) { - if scope.DebugEnabled() { - scope.Debugf("Creating a new resource listener for: name='%s', gv:'%v'", spec.Singular, spec.GroupVersion()) + if log.Scope.DebugEnabled() { + log.Scope.Debugf("Creating a new resource listener for: name='%s', gv:'%v'", spec.Singular, spec.GroupVersion()) } - client, err := kubeInterface.DynamicInterface() + c, err := kubeInterface.DynamicInterface() if err != nil { - scope.Debugf("Error creating dynamic interface: %s: %v", spec.CanonicalResourceName(), err) + log.Scope.Debugf("Error creating dynamic interface: %s: %v", spec.CanonicalResourceName(), err) return nil, err } - resourceClient := client.Resource(spec.GroupVersion().WithResource(spec.Plural)) + resourceClient := c.Resource(spec.GroupVersion().WithResource(spec.Plural)) return &listener{ spec: spec, @@ -88,11 +91,11 @@ func (l *listener) start() { defer l.stateLock.Unlock() if l.stopCh != nil { - scope.Errorf("already synchronizing resources: name='%s', gv='%v'", l.spec.Singular, l.spec.GroupVersion()) + log.Scope.Errorf("already synchronizing resources: name='%s', gv='%v'", l.spec.Singular, l.spec.GroupVersion()) return } - scope.Debugf("Starting listener for %s(%v)", l.spec.Singular, l.spec.GroupVersion()) + log.Scope.Debugf("Starting listener for %s(%v)", l.spec.Singular, l.spec.GroupVersion()) l.stopCh = make(chan struct{}) @@ -140,7 +143,7 @@ func (l *listener) stop() { defer l.stateLock.Unlock() if l.stopCh == nil { - scope.Errorf("already stopped") + log.Scope.Errorf("already stopped") return } @@ -154,17 +157,17 @@ func (l *listener) handleEvent(c resource.EventKind, obj interface{}) { var tombstone cache.DeletedFinalStateUnknown if tombstone, ok = obj.(cache.DeletedFinalStateUnknown); !ok { 
msg := fmt.Sprintf("error decoding object, invalid type: %v", reflect.TypeOf(obj)) - scope.Error(msg) - recordHandleEventError(msg) + log.Scope.Error(msg) + stats.RecordHandleEventError(msg) return } if object, ok = tombstone.Obj.(metav1.Object); !ok { msg := fmt.Sprintf("error decoding object tombstone, invalid type: %v", reflect.TypeOf(tombstone.Obj)) - scope.Error(msg) - recordHandleEventError(msg) + log.Scope.Error(msg) + stats.RecordHandleEventError(msg) return } - scope.Infof("Recovered deleted object '%s' from tombstone", object.GetName()) + log.Scope.Infof("Recovered deleted object '%s' from tombstone", object.GetName()) } key := resource.FullNameFromNamespaceAndName(object.GetNamespace(), object.GetName()) @@ -177,16 +180,16 @@ func (l *listener) handleEvent(c resource.EventKind, obj interface{}) { // https://github.com/kubernetes/kubernetes/pull/63972 // k8s machinery does not always preserve TypeMeta in list operations. Restore it // using aprior knowledge of the GVK for this listener. 
- u.SetGroupVersionKind(schema.GroupVersionKind{ + u.SetGroupVersionKind(runtimeSchema.GroupVersionKind{ Group: l.spec.Group, Version: l.spec.Version, Kind: l.spec.Kind, }) } - if scope.DebugEnabled() { - scope.Debugf("Sending event: [%v] from: %s", c, l.spec.CanonicalResourceName()) + if log.Scope.DebugEnabled() { + log.Scope.Debugf("Sending event: [%v] from: %s", c, l.spec.CanonicalResourceName()) } l.processor(l, c, key, object.GetResourceVersion(), u) - recordHandleEventSuccess() + stats.RecordHandleEventSuccess() } diff --git a/galley/pkg/kube/source/listener_test.go b/galley/pkg/source/kube/dynamic/listener_test.go similarity index 97% rename from galley/pkg/kube/source/listener_test.go rename to galley/pkg/source/kube/dynamic/listener_test.go index 1071d704f480..621d39941fac 100644 --- a/galley/pkg/kube/source/listener_test.go +++ b/galley/pkg/source/kube/dynamic/listener_test.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. 
-package source +package dynamic import ( "errors" @@ -21,21 +21,22 @@ import ( "sync" "testing" + "istio.io/istio/galley/pkg/runtime/resource" + kubeLog "istio.io/istio/galley/pkg/source/kube/log" + "istio.io/istio/galley/pkg/source/kube/schema" + "istio.io/istio/galley/pkg/testing/common" + "istio.io/istio/galley/pkg/testing/mock" + "istio.io/istio/pkg/log" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/watch" "k8s.io/client-go/dynamic/fake" dtesting "k8s.io/client-go/testing" "k8s.io/client-go/tools/cache" - - "istio.io/istio/galley/pkg/kube" - "istio.io/istio/galley/pkg/runtime/resource" - "istio.io/istio/galley/pkg/testing/common" - "istio.io/istio/galley/pkg/testing/mock" - "istio.io/istio/pkg/log" ) -var info = kube.ResourceSpec{ +var info = schema.ResourceSpec{ Kind: "kind", ListKind: "listkind", Group: "group", @@ -73,9 +74,9 @@ func TestListener_NewClient_Debug(t *testing.T) { processorFn := func(l *listener, eventKind resource.EventKind, key resource.FullName, version string, u *unstructured.Unstructured) { } - old := scope.GetOutputLevel() - defer scope.SetOutputLevel(old) - scope.SetOutputLevel(log.DebugLevel) + old := kubeLog.Scope.GetOutputLevel() + defer kubeLog.Scope.SetOutputLevel(old) + kubeLog.Scope.SetOutputLevel(log.DebugLevel) _, _ = newListener(k, 0, info, processorFn) // should not crash } @@ -495,9 +496,9 @@ func TestListener_Tombstone_ObjDecodeError(t *testing.T) { a.start() - old := scope.GetOutputLevel() - defer scope.SetOutputLevel(old) - scope.SetOutputLevel(log.DebugLevel) + old := kubeLog.Scope.GetOutputLevel() + defer kubeLog.Scope.SetOutputLevel(old) + kubeLog.Scope.SetOutputLevel(log.DebugLevel) item := cache.DeletedFinalStateUnknown{Key: "foo", Obj: struct{}{}} a.handleEvent(resource.Deleted, item) diff --git a/galley/pkg/kube/source/source.go b/galley/pkg/source/kube/dynamic/source.go similarity index 64% rename from galley/pkg/kube/source/source.go rename to 
galley/pkg/source/kube/dynamic/source.go index 963685c6b5f2..67b93e53b0c5 100644 --- a/galley/pkg/kube/source/source.go +++ b/galley/pkg/source/kube/dynamic/source.go @@ -12,26 +12,26 @@ // See the License for the specific language governing permissions and // limitations under the License. -package source +package dynamic import ( "time" - "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" - - "istio.io/istio/galley/pkg/kube" - "istio.io/istio/galley/pkg/kube/converter" "istio.io/istio/galley/pkg/runtime" "istio.io/istio/galley/pkg/runtime/resource" - "istio.io/istio/pkg/log" -) + "istio.io/istio/galley/pkg/source/kube/client" + "istio.io/istio/galley/pkg/source/kube/dynamic/converter" + "istio.io/istio/galley/pkg/source/kube/log" + "istio.io/istio/galley/pkg/source/kube/schema" + "istio.io/istio/galley/pkg/source/kube/stats" -var scope = log.RegisterScope("kube", "kube-specific debugging", 0) + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" +) // source is an implementation of runtime.Source. type sourceImpl struct { cfg *converter.Config - ifaces kube.Interfaces + ifaces client.Interfaces ch chan resource.Event listeners []*listener @@ -40,22 +40,22 @@ type sourceImpl struct { var _ runtime.Source = &sourceImpl{} // New returns a Kubernetes implementation of runtime.Source. 
-func New(k kube.Interfaces, resyncPeriod time.Duration, schema *kube.Schema, cfg *converter.Config) (runtime.Source, error) { +func New(k client.Interfaces, resyncPeriod time.Duration, schema *schema.Instance, cfg *converter.Config) (runtime.Source, error) { s := &sourceImpl{ cfg: cfg, ifaces: k, ch: make(chan resource.Event, 1024), } - scope.Infof("Registering the following resources:") + log.Scope.Infof("Registering the following resources:") for i, spec := range schema.All() { - scope.Infof("[%d]", i) - scope.Infof(" Source: %s", spec.CanonicalResourceName()) - scope.Infof(" Type URL: %s", spec.Target.TypeURL) + log.Scope.Infof("[%d]", i) + log.Scope.Infof(" Source: %s", spec.CanonicalResourceName()) + log.Scope.Infof(" Type URL: %s", spec.Target.Collection) l, err := newListener(k, resyncPeriod, spec, s.process) if err != nil { - scope.Errorf("Error registering listener: %v", err) + log.Scope.Errorf("Error registering listener: %v", err) return nil, err } @@ -94,21 +94,21 @@ func (s *sourceImpl) process(l *listener, kind resource.EventKind, key resource. 
} // ProcessEvent processes the incoming message and converts it to an event -func ProcessEvent(cfg *converter.Config, spec kube.ResourceSpec, kind resource.EventKind, key resource.FullName, resourceVersion string, +func ProcessEvent(cfg *converter.Config, spec schema.ResourceSpec, kind resource.EventKind, key resource.FullName, resourceVersion string, u *unstructured.Unstructured, ch chan resource.Event) { var event resource.Event entries, err := spec.Converter(cfg, spec.Target, key, spec.Kind, u) if err != nil { - scope.Errorf("Unable to convert unstructured to proto: %s/%s: %v", key, resourceVersion, err) - recordConverterResult(false, spec.Version, spec.Group, spec.Kind) + log.Scope.Errorf("Unable to convert unstructured to proto: %s/%s: %v", key, resourceVersion, err) + stats.RecordConverterResult(false, spec.Version, spec.Group, spec.Kind) return } - recordConverterResult(true, spec.Version, spec.Group, spec.Kind) + stats.RecordConverterResult(true, spec.Version, spec.Group, spec.Kind) if len(entries) == 0 { - scope.Debugf("Did not receive any entries from converter: kind=%v, key=%v, rv=%s", kind, key, resourceVersion) + log.Scope.Debugf("Did not receive any entries from converter: kind=%v, key=%v, rv=%s", kind, key, resourceVersion) return } @@ -120,23 +120,23 @@ func ProcessEvent(cfg *converter.Config, spec kube.ResourceSpec, kind resource.E rid := resource.VersionedKey{ Key: resource.Key{ - TypeURL: spec.Target.TypeURL, - FullName: entries[0].Key, + Collection: spec.Target.Collection, + FullName: entries[0].Key, }, - Version: resource.Version(resourceVersion), - CreateTime: entries[0].CreationTime, + Version: resource.Version(resourceVersion), } event.Entry = resource.Entry{ - ID: rid, - Item: entries[0].Resource, + ID: rid, + Item: entries[0].Resource, + Metadata: entries[0].Metadata, } case resource.Deleted: rid := resource.VersionedKey{ Key: resource.Key{ - TypeURL: spec.Target.TypeURL, - FullName: entries[0].Key, + Collection: spec.Target.Collection, + 
FullName: entries[0].Key, }, Version: resource.Version(resourceVersion), } @@ -149,6 +149,6 @@ func ProcessEvent(cfg *converter.Config, spec kube.ResourceSpec, kind resource.E } } - scope.Debugf("Dispatching source event: %v", event) + log.Scope.Debugf("Dispatching source event: %v", event) ch <- event } diff --git a/galley/pkg/kube/source/source_test.go b/galley/pkg/source/kube/dynamic/source_test.go similarity index 88% rename from galley/pkg/kube/source/source_test.go rename to galley/pkg/source/kube/dynamic/source_test.go index 4a2546080eb9..ebeb078f4e99 100644 --- a/galley/pkg/kube/source/source_test.go +++ b/galley/pkg/source/kube/dynamic/source_test.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. -package source +package dynamic import ( "errors" @@ -21,31 +21,32 @@ import ( "testing" "github.com/gogo/protobuf/types" + + "istio.io/istio/galley/pkg/meshconfig" + kubeMeta "istio.io/istio/galley/pkg/metadata/kube" + "istio.io/istio/galley/pkg/runtime/resource" + "istio.io/istio/galley/pkg/source/kube/dynamic/converter" + "istio.io/istio/galley/pkg/source/kube/schema" + "istio.io/istio/galley/pkg/testing/mock" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/watch" "k8s.io/client-go/dynamic/fake" dtesting "k8s.io/client-go/testing" - - "istio.io/istio/galley/pkg/kube" - "istio.io/istio/galley/pkg/kube/converter" - "istio.io/istio/galley/pkg/meshconfig" - kube_meta "istio.io/istio/galley/pkg/metadata/kube" - "istio.io/istio/galley/pkg/runtime/resource" - "istio.io/istio/galley/pkg/testing/mock" ) var emptyInfo resource.Info func init() { b := resource.NewSchemaBuilder() - b.Register("type.googleapis.com/google.protobuf.Empty") - schema := b.Build() - emptyInfo, _ = schema.Lookup("type.googleapis.com/google.protobuf.Empty") + b.Register("empty", "type.googleapis.com/google.protobuf.Empty") + s := b.Build() + emptyInfo, _ = 
s.Lookup("empty") } -func schemaWithSpecs(specs []kube.ResourceSpec) *kube.Schema { - sb := kube.NewSchemaBuilder() +func schemaWithSpecs(specs []schema.ResourceSpec) *schema.Instance { + sb := schema.NewBuilder() for _, s := range specs { sb.Add(s) } @@ -62,7 +63,7 @@ func TestNewSource(t *testing.T) { cfg := converter.Config{ Mesh: meshconfig.NewInMemory(), } - p, err := New(k, 0, kube_meta.Types, &cfg) + p, err := New(k, 0, kubeMeta.Types, &cfg) if err != nil { t.Fatalf("Unexpected error found: %v", err) } @@ -79,7 +80,7 @@ func TestNewSource_Error(t *testing.T) { cfg := converter.Config{ Mesh: meshconfig.NewInMemory(), } - _, err := New(k, 0, kube_meta.Types, &cfg) + _, err := New(k, 0, kubeMeta.Types, &cfg) if err == nil || err.Error() != "newDynamicClient error" { t.Fatalf("Expected error not found: %v", err) } @@ -112,7 +113,7 @@ func TestSource_BasicEvents(t *testing.T) { return true, w, nil }) - schema := schemaWithSpecs([]kube.ResourceSpec{ + specs := schemaWithSpecs([]schema.ResourceSpec{ { Kind: "List", Singular: "List", @@ -125,7 +126,7 @@ func TestSource_BasicEvents(t *testing.T) { cfg := converter.Config{ Mesh: meshconfig.NewInMemory(), } - s, err := New(k, 0, schema, &cfg) + s, err := New(k, 0, specs, &cfg) if err != nil { t.Fatalf("Unexpected error: %v", err) } @@ -141,7 +142,7 @@ func TestSource_BasicEvents(t *testing.T) { log := logChannelOutput(ch, 2) expected := strings.TrimSpace(` -[Event](Added: [VKey](type.googleapis.com/google.protobuf.Empty:ns/f1 @rv1)) +[Event](Added: [VKey](empty:ns/f1 @rv1)) [Event](FullSync) `) if log != expected { @@ -151,7 +152,7 @@ func TestSource_BasicEvents(t *testing.T) { w.Send(watch.Event{Type: watch.Deleted, Object: &i1}) log = logChannelOutput(ch, 1) expected = strings.TrimSpace(` -[Event](Deleted: [VKey](type.googleapis.com/google.protobuf.Empty:ns/f1 @rv1)) +[Event](Deleted: [VKey](empty:ns/f1 @rv1)) `) if log != expected { t.Fatalf("Event mismatch:\nActual:\n%s\nExpected:\n%s\n", log, expected) @@ -187,7 
+188,7 @@ func TestSource_BasicEvents_NoConversion(t *testing.T) { return true, mock.NewWatch(), nil }) - schema := schemaWithSpecs([]kube.ResourceSpec{ + specs := schemaWithSpecs([]schema.ResourceSpec{ { Kind: "List", Singular: "List", @@ -200,7 +201,7 @@ func TestSource_BasicEvents_NoConversion(t *testing.T) { cfg := converter.Config{ Mesh: meshconfig.NewInMemory(), } - s, err := New(k, 0, schema, &cfg) + s, err := New(k, 0, specs, &cfg) if err != nil { t.Fatalf("Unexpected error: %v", err) } @@ -250,7 +251,7 @@ func TestSource_ProtoConversionError(t *testing.T) { return true, mock.NewWatch(), nil }) - schema := schemaWithSpecs([]kube.ResourceSpec{ + specs := schemaWithSpecs([]schema.ResourceSpec{ { Kind: "foo", Singular: "foo", @@ -265,7 +266,7 @@ func TestSource_ProtoConversionError(t *testing.T) { cfg := converter.Config{ Mesh: meshconfig.NewInMemory(), } - s, err := New(k, 0, schema, &cfg) + s, err := New(k, 0, specs, &cfg) if err != nil { t.Fatalf("Unexpected error: %v", err) } @@ -316,7 +317,7 @@ func TestSource_MangledNames(t *testing.T) { return true, mock.NewWatch(), nil }) - schema := schemaWithSpecs([]kube.ResourceSpec{ + specs := schemaWithSpecs([]schema.ResourceSpec{ { Kind: "foo", Singular: "foo", @@ -336,7 +337,7 @@ func TestSource_MangledNames(t *testing.T) { cfg := converter.Config{ Mesh: meshconfig.NewInMemory(), } - s, err := New(k, 0, schema, &cfg) + s, err := New(k, 0, specs, &cfg) if err != nil { t.Fatalf("Unexpected error: %v", err) } @@ -353,7 +354,7 @@ func TestSource_MangledNames(t *testing.T) { // The mangled name foo/ns/f1 should appear. 
log := logChannelOutput(ch, 1) expected := strings.TrimSpace(` -[Event](Added: [VKey](type.googleapis.com/google.protobuf.Empty:foo/ns/f1 @rv1)) +[Event](Added: [VKey](empty:foo/ns/f1 @rv1)) `) if log != expected { t.Fatalf("Event mismatch:\nActual:\n%s\nExpected:\n%s\n", log, expected) diff --git a/galley/pkg/kube/converter/legacy/gen.go b/galley/pkg/source/kube/log/scope.go similarity index 71% rename from galley/pkg/kube/converter/legacy/gen.go rename to galley/pkg/source/kube/log/scope.go index 1d8a4bcaac19..ac74e0f05c1d 100644 --- a/galley/pkg/kube/converter/legacy/gen.go +++ b/galley/pkg/source/kube/log/scope.go @@ -1,4 +1,4 @@ -// Copyright 2018 Istio Authors +// Copyright 2019 Istio Authors // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. @@ -12,6 +12,9 @@ // See the License for the specific language governing permissions and // limitations under the License. -// nolint: lll -//go:generate $GOPATH/src/istio.io/istio/bin/protoc.sh -I. legacymixer.proto --gogo_out=Mgoogle/protobuf/struct.proto=github.com/gogo/protobuf/types:${GOPATH}/src -package legacy +package log + +import "istio.io/istio/pkg/log" + +// Scope for kube sources. +var Scope = log.RegisterScope("kube", "kube-specific debugging", 0) diff --git a/galley/pkg/kube/source/init.go b/galley/pkg/source/kube/schema/check/check.go similarity index 77% rename from galley/pkg/kube/source/init.go rename to galley/pkg/source/kube/schema/check/check.go index c90cc2d53b74..0673e6d23f23 100644 --- a/galley/pkg/kube/source/init.go +++ b/galley/pkg/source/kube/schema/check/check.go @@ -12,24 +12,27 @@ // See the License for the specific language governing permissions and // limitations under the License. 
-package source +package check import ( "fmt" "time" multierror "github.com/hashicorp/go-multierror" - "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset" - "k8s.io/apimachinery/pkg/runtime/schema" - "k8s.io/apimachinery/pkg/util/wait" - "istio.io/istio/galley/pkg/kube" kube_meta "istio.io/istio/galley/pkg/metadata/kube" + "istio.io/istio/galley/pkg/source/kube/client" + "istio.io/istio/galley/pkg/source/kube/log" + kubeSchema "istio.io/istio/galley/pkg/source/kube/schema" + + "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset" + runtimeSchema "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/util/wait" ) // VerifyResourceTypesPresence verifies that all expected k8s resources types are // present in the k8s apiserver. -func VerifyResourceTypesPresence(k kube.Interfaces) error { +func VerifyResourceTypesPresence(k client.Interfaces) error { cs, err := k.APIExtensionsClientset() if err != nil { return err @@ -42,8 +45,8 @@ var ( pollTimeout = time.Minute ) -func verifyResourceTypesPresence(cs clientset.Interface, specs []kube.ResourceSpec) error { - search := make(map[string]*kube.ResourceSpec, len(specs)) +func verifyResourceTypesPresence(cs clientset.Interface, specs []kubeSchema.ResourceSpec) error { + search := make(map[string]*kubeSchema.ResourceSpec, len(specs)) for i, spec := range specs { search[spec.Plural] = &specs[i] } @@ -52,7 +55,7 @@ func verifyResourceTypesPresence(cs clientset.Interface, specs []kube.ResourceSp var errs error nextResource: for plural, spec := range search { - gv := schema.GroupVersion{Group: spec.Group, Version: spec.Version}.String() + gv := runtimeSchema.GroupVersion{Group: spec.Group, Version: spec.Version}.String() list, err := cs.Discovery().ServerResourcesForGroupVersion(gv) if err != nil { errs = multierror.Append(errs, fmt.Errorf("could not find %v: %v", gv, err)) @@ -67,7 +70,7 @@ func verifyResourceTypesPresence(cs clientset.Interface, specs []kube.ResourceSp } } if !found { - 
scope.Infof("%s resource type not found", spec.CanonicalResourceName()) + log.Scope.Infof("%s resource type not found", spec.CanonicalResourceName()) } } if len(search) == 0 { @@ -89,6 +92,6 @@ func verifyResourceTypesPresence(cs clientset.Interface, specs []kube.ResourceSp return fmt.Errorf("%v: the following resource type(s) were not found: %v", err, notFound) } - scope.Infof("Discovered all supported resources (# = %v)", len(specs)) + log.Scope.Infof("Discovered all supported resources (# = %v)", len(specs)) return nil } diff --git a/galley/pkg/kube/source/init_test.go b/galley/pkg/source/kube/schema/check/check_test.go similarity index 99% rename from galley/pkg/kube/source/init_test.go rename to galley/pkg/source/kube/schema/check/check_test.go index e1c2ceb6c257..f7f9eab6b66d 100644 --- a/galley/pkg/kube/source/init_test.go +++ b/galley/pkg/source/kube/schema/check/check_test.go @@ -12,17 +12,17 @@ // See the License for the specific language governing permissions and // limitations under the License. -package source +package check import ( "fmt" "testing" "time" + kube_meta "istio.io/istio/galley/pkg/metadata/kube" + extfake "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset/fake" meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - - kube_meta "istio.io/istio/galley/pkg/metadata/kube" ) func TestVerifyCRDPresence(t *testing.T) { diff --git a/galley/pkg/kube/schema.go b/galley/pkg/source/kube/schema/instance.go similarity index 66% rename from galley/pkg/kube/schema.go rename to galley/pkg/source/kube/schema/instance.go index 34591d4fe7e7..b24308601192 100644 --- a/galley/pkg/kube/schema.go +++ b/galley/pkg/source/kube/schema/instance.go @@ -12,32 +12,32 @@ // See the License for the specific language governing permissions and // limitations under the License. -package kube +package schema -// Schema represents a set of known Kubernetes resource types. -type Schema struct { +// Instance represents a set of known Kubernetes resource types. 
+type Instance struct { entries []ResourceSpec } -// SchemaBuilder is a builder for schema. -type SchemaBuilder struct { - schema *Schema +// Builder is a builder for schema. +type Builder struct { + schema *Instance } -// NewSchemaBuilder returns a new instance of a SchemaBuilder. -func NewSchemaBuilder() *SchemaBuilder { - return &SchemaBuilder{ - schema: &Schema{}, +// NewBuilder returns a new instance of a Builder. +func NewBuilder() *Builder { + return &Builder{ + schema: &Instance{}, } } // Add a new ResourceSpec to the schema. -func (b *SchemaBuilder) Add(entry ResourceSpec) { +func (b *Builder) Add(entry ResourceSpec) { b.schema.entries = append(b.schema.entries, entry) } // Build a new instance of schema. -func (b *SchemaBuilder) Build() *Schema { +func (b *Builder) Build() *Instance { s := b.schema // Avoid modify after Build. @@ -47,6 +47,6 @@ func (b *SchemaBuilder) Build() *Schema { } // All returns information about all known types. -func (e *Schema) All() []ResourceSpec { +func (e *Instance) All() []ResourceSpec { return e.entries } diff --git a/galley/pkg/kube/schema_test.go b/galley/pkg/source/kube/schema/instance_test.go similarity index 96% rename from galley/pkg/kube/schema_test.go rename to galley/pkg/source/kube/schema/instance_test.go index d2d38c590cae..93d724a5387e 100644 --- a/galley/pkg/kube/schema_test.go +++ b/galley/pkg/source/kube/schema/instance_test.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. 
-package kube +package schema import ( "reflect" @@ -29,7 +29,7 @@ func TestSchemaBuilder(t *testing.T) { Group: "groupd", } - b := NewSchemaBuilder() + b := NewBuilder() b.Add(spec) s := b.Build() diff --git a/galley/pkg/kube/resourcespec.go b/galley/pkg/source/kube/schema/resourcespec.go similarity index 96% rename from galley/pkg/kube/resourcespec.go rename to galley/pkg/source/kube/schema/resourcespec.go index 5a5d5f2ca0b0..9d64541d7da8 100644 --- a/galley/pkg/kube/resourcespec.go +++ b/galley/pkg/source/kube/schema/resourcespec.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. -package kube +package schema import ( "fmt" @@ -20,8 +20,8 @@ import ( v1 "k8s.io/apimachinery/pkg/apis/meta/v1" sc "k8s.io/apimachinery/pkg/runtime/schema" - "istio.io/istio/galley/pkg/kube/converter" "istio.io/istio/galley/pkg/runtime/resource" + "istio.io/istio/galley/pkg/source/kube/dynamic/converter" ) // ResourceSpec represents a known crd. It is used to drive the K8s-related machinery, and to map to diff --git a/galley/pkg/kube/resourcespec_test.go b/galley/pkg/source/kube/schema/resourcespec_test.go similarity index 99% rename from galley/pkg/kube/resourcespec_test.go rename to galley/pkg/source/kube/schema/resourcespec_test.go index 6830024efb9a..7401dd7edd0f 100644 --- a/galley/pkg/kube/resourcespec_test.go +++ b/galley/pkg/source/kube/schema/resourcespec_test.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. 
-package kube +package schema import ( "reflect" diff --git a/galley/pkg/kube/source/monitoring.go b/galley/pkg/source/kube/stats/stats.go similarity index 86% rename from galley/pkg/kube/source/monitoring.go rename to galley/pkg/source/kube/stats/stats.go index 4d32a14c2ea1..87d92a2f7798 100644 --- a/galley/pkg/kube/source/monitoring.go +++ b/galley/pkg/source/kube/stats/stats.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. -package source +package stats import ( "context" @@ -21,6 +21,8 @@ import ( "go.opencensus.io/stats" "go.opencensus.io/stats/view" "go.opencensus.io/tag" + + "istio.io/istio/galley/pkg/source/kube/log" ) const ( @@ -62,16 +64,18 @@ var ( stats.UnitDimensionless) ) -func recordHandleEventError(msg string) { +// RecordHandleEventError records an error handling a kube event. +func RecordHandleEventError(msg string) { ctx, ctxErr := tag.New(context.Background(), tag.Insert(ErrorTag, msg)) if ctxErr != nil { - scope.Errorf("error creating context to record handleEvent error") + log.Scope.Errorf("error creating context to record handleEvent error") } else { stats.Record(ctx, listenerHandleEventError.M(1)) } } -func recordHandleEventSuccess() { +// RecordHandleEventSuccess records successfully handling a kube event. +func RecordHandleEventSuccess() { stats.Record(context.Background(), listenerHandleEventSuccess.M(1)) } @@ -81,7 +85,8 @@ type contextKey struct { var ctxCache = sync.Map{} -func recordConverterResult(success bool, apiVersion, group, kind string) { +// RecordConverterResult records the result of a kube resource conversion from unstructured. 
+func RecordConverterResult(success bool, apiVersion, group, kind string) { var metric *stats.Int64Measure if success { metric = sourceConversionSuccess @@ -95,7 +100,7 @@ func recordConverterResult(success bool, apiVersion, group, kind string) { ctx, err = tag.New(context.Background(), tag.Insert(APIVersionTag, apiVersion), tag.Insert(GroupTag, group), tag.Insert(KindTag, kind)) if err != nil { - scope.Errorf("Error creating monitoring context for counting conversion result: %v", err) + log.Scope.Errorf("Error creating monitoring context for counting conversion result: %v", err) return } ctxCache.Store(key, ctx) diff --git a/galley/pkg/testing/testdata/dataset.gen.go b/galley/pkg/testing/testdata/dataset.gen.go index 3efd9bd1e830..ac9dff07bcbd 100644 --- a/galley/pkg/testing/testdata/dataset.gen.go +++ b/galley/pkg/testing/testdata/dataset.gen.go @@ -5,7 +5,6 @@ // dataset/extensions/v1beta1/ingress_basic.yaml // dataset/extensions/v1beta1/ingress_basic_expected.json // dataset/extensions/v1beta1/ingress_basic_meshconfig.yaml -// dataset/extensions/v1beta1/ingress_merge_0.skip // dataset/extensions/v1beta1/ingress_merge_0.yaml // dataset/extensions/v1beta1/ingress_merge_0_expected.json // dataset/extensions/v1beta1/ingress_merge_0_meshconfig.yaml @@ -15,6 +14,10 @@ // dataset/networking.istio.io/v1alpha3/destinationRule_expected.json // dataset/networking.istio.io/v1alpha3/gateway.yaml // dataset/networking.istio.io/v1alpha3/gateway_expected.json +// dataset/v1/node.yaml.skip +// dataset/v1/node_expected.json +// dataset/v1/pod.yaml.skip +// dataset/v1/pod_expected.json // DO NOT EDIT! 
package testdata @@ -27,7 +30,6 @@ import ( "strings" "time" ) - type asset struct { bytes []byte info os.FileInfo @@ -84,30 +86,26 @@ func datasetConfigIstioIoV1alpha2CirconusYaml() (*asset, error) { } var _datasetConfigIstioIoV1alpha2Circonus_expectedJson = []byte(`{ - "type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource": [ + "istio/config/v1alpha2/legacy/circonuses": [ { "Metadata": { - "name": "circonus/valid-circonus" + "name": "valid-circonus" }, - "Resource": { - "contents": { - "fields": { - "submission_interval": { - "Kind": { - "StringValue": "10s" - } - }, - "submission_url": { - "Kind": { - "StringValue": "https://trap.noit.circonus.net/module/httptrap/myuuid/mysecret" - } + "Body": { + "fields": { + "submission_interval": { + "Kind": { + "StringValue": "10s" + } + }, + "submission_url": { + "Kind": { + "StringValue": "https://trap.noit.circonus.net/module/httptrap/myuuid/mysecret" } } - }, - "kind": "circonus", - "name": "valid-circonus" + } }, - "TypeURL": "type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource" + "TypeURL": "type.googleapis.com/google.protobuf.Struct" } ] } @@ -157,13 +155,13 @@ func datasetExtensionsV1beta1Ingress_basicYaml() (*asset, error) { } var _datasetExtensionsV1beta1Ingress_basic_expectedJson = []byte(`{ - "type.googleapis.com/istio.networking.v1alpha3.Gateway": [ + "istio/networking/v1alpha3/gateways": [ { "TypeURL": "type.googleapis.com/istio.networking.v1alpha3.Gateway", "Metadata": { "name": "istio-system/foo-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "selector": { "istio": "ingress" }, @@ -222,23 +220,6 @@ func datasetExtensionsV1beta1Ingress_basic_meshconfigYaml() (*asset, error) { return a, nil } -var _datasetExtensionsV1beta1Ingress_merge_0Skip = []byte(``) - -func datasetExtensionsV1beta1Ingress_merge_0SkipBytes() ([]byte, error) { - return _datasetExtensionsV1beta1Ingress_merge_0Skip, nil -} - -func datasetExtensionsV1beta1Ingress_merge_0Skip() (*asset, error) 
{ - bytes, err := datasetExtensionsV1beta1Ingress_merge_0SkipBytes() - if err != nil { - return nil, err - } - - info := bindataFileInfo{name: "dataset/extensions/v1beta1/ingress_merge_0.skip", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)} - a := &asset{bytes: bytes, info: info} - return a, nil -} - var _datasetExtensionsV1beta1Ingress_merge_0Yaml = []byte(`apiVersion: extensions/v1beta1 kind: Ingress metadata: @@ -291,12 +272,12 @@ func datasetExtensionsV1beta1Ingress_merge_0Yaml() (*asset, error) { } var _datasetExtensionsV1beta1Ingress_merge_0_expectedJson = []byte(`{ - "type.googleapis.com/istio.networking.v1alpha3.Gateway": [ + "istio/networking/v1alpha3/gateways": [ { "Metadata": { "name": "istio-system/bar-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "selector": { "istio": "ingress" }, @@ -319,7 +300,7 @@ var _datasetExtensionsV1beta1Ingress_merge_0_expectedJson = []byte(`{ "Metadata": { "name": "istio-system/foo-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "selector": { "istio": "ingress" }, @@ -340,12 +321,12 @@ var _datasetExtensionsV1beta1Ingress_merge_0_expectedJson = []byte(`{ } ], - "type.googleapis.com/istio.networking.v1alpha3.VirtualService": [ + "istio/networking/v1alpha3/virtualservices": [ { "Metadata": { "name": "istio-system/foo-bar-com-bar-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "gateways": [ "istio-autogenerated-k8s-ingress" ], @@ -406,7 +387,8 @@ var _datasetExtensionsV1beta1Ingress_merge_0_expectedJson = []byte(`{ "TypeURL": "type.googleapis.com/istio.networking.v1alpha3.VirtualService" } ] -}`) +} +`) func datasetExtensionsV1beta1Ingress_merge_0_expectedJsonBytes() ([]byte, error) { return _datasetExtensionsV1beta1Ingress_merge_0_expectedJson, nil @@ -495,12 +477,12 @@ func datasetExtensionsV1beta1Ingress_merge_1Yaml() (*asset, error) { } var _datasetExtensionsV1beta1Ingress_merge_1_expectedJson = []byte(`{ - "type.googleapis.com/istio.networking.v1alpha3.Gateway": [ 
+ "istio/networking/v1alpha3/gateways": [ { "Metadata": { "name": "istio-system/bar-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "selector": { "istio": "ingress" }, @@ -523,7 +505,7 @@ var _datasetExtensionsV1beta1Ingress_merge_1_expectedJson = []byte(`{ "Metadata": { "name": "istio-system/foo-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "selector": { "istio": "ingress" }, @@ -544,12 +526,12 @@ var _datasetExtensionsV1beta1Ingress_merge_1_expectedJson = []byte(`{ } ], - "type.googleapis.com/istio.networking.v1alpha3.VirtualService": [ + "istio/networking/v1alpha3/virtualservices": [ { "Metadata": { "name": "istio-system/foo-bar-com-bar-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "gateways": [ "istio-autogenerated-k8s-ingress" ], @@ -610,7 +592,8 @@ var _datasetExtensionsV1beta1Ingress_merge_1_expectedJson = []byte(`{ "TypeURL": "type.googleapis.com/istio.networking.v1alpha3.VirtualService" } ] -}`) +} +`) func datasetExtensionsV1beta1Ingress_merge_1_expectedJsonBytes() ([]byte, error) { return _datasetExtensionsV1beta1Ingress_merge_1_expectedJson, nil @@ -658,13 +641,13 @@ func datasetNetworkingIstioIoV1alpha3DestinationruleYaml() (*asset, error) { } var _datasetNetworkingIstioIoV1alpha3Destinationrule_expectedJson = []byte(`{ - "type.googleapis.com/istio.networking.v1alpha3.DestinationRule": [ + "istio/networking/v1alpha3/destinationrules": [ { "TypeURL": "type.googleapis.com/istio.networking.v1alpha3.DestinationRule", "Metadata": { "name": "tcp-echo-destination" }, - "Resource": { + "Body": { "host": "tcp-echo", "subsets": [ { @@ -733,13 +716,13 @@ func datasetNetworkingIstioIoV1alpha3GatewayYaml() (*asset, error) { } var _datasetNetworkingIstioIoV1alpha3Gateway_expectedJson = []byte(`{ - "type.googleapis.com/istio.networking.v1alpha3.Gateway": [ + "istio/networking/v1alpha3/gateways": [ { "TypeURL": "type.googleapis.com/istio.networking.v1alpha3.Gateway", "Metadata": { "name": "helloworld-gateway" }, - 
"Resource": { + "Body": { "selector": { "istio": "ingressgateway" }, @@ -776,6 +759,853 @@ func datasetNetworkingIstioIoV1alpha3Gateway_expectedJson() (*asset, error) { return a, nil } +var _datasetV1NodeYamlSkip = []byte(`apiVersion: v1 +kind: Node +metadata: + annotations: + container.googleapis.com/instance_id: "2787417306096525587" + node.alpha.kubernetes.io/ttl: "0" + volumes.kubernetes.io/controller-managed-attach-detach: "true" + creationTimestamp: 2018-10-05T19:40:48Z + labels: + beta.kubernetes.io/arch: amd64 + beta.kubernetes.io/fluentd-ds-ready: "true" + beta.kubernetes.io/instance-type: n1-standard-4 + beta.kubernetes.io/os: linux + cloud.google.com/gke-nodepool: default-pool + cloud.google.com/gke-os-distribution: cos + failure-domain.beta.kubernetes.io/region: us-central1 + failure-domain.beta.kubernetes.io/zone: us-central1-a + kubernetes.io/hostname: gke-istio-test-default-pool-866a0405-420r + name: gke-istio-test-default-pool-866a0405-420r + resourceVersion: "59148590" + selfLink: /api/v1/nodes/gke-istio-test-default-pool-866a0405-420r + uid: 8f63dfef-c8d6-11e8-8901-42010a800278 +spec: + externalID: "1929748586650271976" + podCIDR: 10.40.0.0/24 + providerID: gce://nathanmittler-istio-test/us-central1-a/gke-istio-test-default-pool-866a0405-420r +status: + addresses: + - address: 10.128.0.4 + type: InternalIP + - address: 35.238.214.129 + type: ExternalIP + - address: gke-istio-test-default-pool-866a0405-420r + type: Hostname + allocatable: + cpu: 3920m + ephemeral-storage: "47093746742" + hugepages-2Mi: "0" + memory: 12699980Ki + pods: "110" + capacity: + cpu: "4" + ephemeral-storage: 98868448Ki + hugepages-2Mi: "0" + memory: 15399244Ki + pods: "110" + conditions: + - lastHeartbeatTime: 2019-01-08T18:09:08Z + lastTransitionTime: 2018-12-03T17:00:58Z + message: node is functioning properly + reason: UnregisterNetDevice + status: "False" + type: FrequentUnregisterNetDevice + - lastHeartbeatTime: 2019-01-08T18:09:08Z + lastTransitionTime: 
2018-12-03T16:55:56Z + message: kernel has no deadlock + reason: KernelHasNoDeadlock + status: "False" + type: KernelDeadlock + - lastHeartbeatTime: 2018-10-05T19:40:58Z + lastTransitionTime: 2018-10-05T19:40:58Z + message: RouteController created a route + reason: RouteCreated + status: "False" + type: NetworkUnavailable + - lastHeartbeatTime: 2019-01-08T18:09:39Z + lastTransitionTime: 2018-12-03T16:55:57Z + message: kubelet has sufficient disk space available + reason: KubeletHasSufficientDisk + status: "False" + type: OutOfDisk + - lastHeartbeatTime: 2019-01-08T18:09:39Z + lastTransitionTime: 2018-12-03T16:55:57Z + message: kubelet has sufficient memory available + reason: KubeletHasSufficientMemory + status: "False" + type: MemoryPressure + - lastHeartbeatTime: 2019-01-08T18:09:39Z + lastTransitionTime: 2018-12-03T16:55:57Z + message: kubelet has no disk pressure + reason: KubeletHasNoDiskPressure + status: "False" + type: DiskPressure + - lastHeartbeatTime: 2019-01-08T18:09:39Z + lastTransitionTime: 2018-10-05T19:40:48Z + message: kubelet has sufficient PID available + reason: KubeletHasSufficientPID + status: "False" + type: PIDPressure + - lastHeartbeatTime: 2019-01-08T18:09:39Z + lastTransitionTime: 2018-12-03T16:56:07Z + message: kubelet is posting ready status. 
AppArmor enabled + reason: KubeletReady + status: "True" + type: Ready + daemonEndpoints: + kubeletEndpoint: + Port: 10250 + images: + - names: + - gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:a33f69d0034fdce835a1eb7df8a051ea74323f3fc30d911bbd2e3f2aef09fc93 + - gcr.io/stackdriver-agents/stackdriver-logging-agent:0.3-1.5.34-1-k8s-1 + sizeBytes: 554981103 + - names: + - istio/examples-bookinfo-reviews-v3@sha256:8c0385f0ca799e655d8770b52cb4618ba54e8966a0734ab1aeb6e8b14e171a3b + - istio/examples-bookinfo-reviews-v3:1.9.0 + sizeBytes: 525074812 + - names: + - istio/examples-bookinfo-reviews-v2@sha256:d2483dcb235b27309680177726e4e86905d66e47facaf1d57ed590b2bf95c8ad + - istio/examples-bookinfo-reviews-v2:1.9.0 + sizeBytes: 525074812 + - names: + - istio/examples-bookinfo-reviews-v1@sha256:920d46b3c526376b28b90d0e895ca7682d36132e6338301fcbcd567ef81bde05 + - istio/examples-bookinfo-reviews-v1:1.9.0 + sizeBytes: 525074812 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:8cea2c055dd3d3ab78f99584256efcc1cff7d8ddbed11cded404e9d164235502 + sizeBytes: 448337138 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:9949bc22667ef88e54ae91700a64bf1459e8c14ed92b870b7ec2f630e14cf3c1 + sizeBytes: 446407220 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:fc1f957cfa26673768be8fa865066f730f22fde98a6e80654d00f755a643b507 + sizeBytes: 446407220 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:23a52850819d5196d66e8e20f4f63f314f779716f830e1d109ad0e24b1f0df43 + sizeBytes: 446407220 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:e338c2c5cbc379db24c5b2d67a4acc9cca9a069c2927217fca0ce7cbc582d312 + - gcr.io/nathanmittler-istio-test/proxyv2:latest + sizeBytes: 446398900 + - names: + - gcr.io/istio-release/proxyv2@sha256:dec972eab4f46c974feec1563ea484ad4995edf55ea91d42e148c5db04b3d4d2 + - gcr.io/istio-release/proxyv2:master-latest-daily + sizeBytes: 353271308 + - names: + - 
gcr.io/nathanmittler-istio-test/proxyv2@sha256:cb4a29362ff9014bf1d96e0ce2bb6337bf034908bb4a8d48af0628a4d8d64413 + sizeBytes: 344543156 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:3f4115cd8c26a17f6bf8ea49f1ff5b875382bda5a6d46281c70c886e802666b0 + sizeBytes: 344257154 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:cdd2f527b4bd392b533d2d0e62c257c19d5a35a6b5fc3512aa327c560866aec1 + sizeBytes: 344257154 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:6ec1dced4cee8569c77817927938fa4341f939e0dddab511bc3ee8724d652ae2 + sizeBytes: 344257154 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:9d502fd29961bc3464f7906ac0e86b07edf01cf4892352ef780e55b3525fb0b8 + sizeBytes: 344257154 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:4e75c42518bb46376cfe0b4fbaa3da1d8f1cea99f706736f1b0b04a3ac554db2 + sizeBytes: 344201616 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:58a7511f549448f6f86280559069bc57f5c754877ebec69da5bbc7ad55e42162 + sizeBytes: 344201616 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:7f60a750d15cda9918e9172e529270ce78c670751d4027f6adc6bdc84ec2d884 + sizeBytes: 344201436 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:6fc25c08212652c7539caaf0f6d913d929f84c54767f20066657ce0f4e6a51e0 + sizeBytes: 344193424 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:4e93825950c831ce6d2b65c9a80921c8860035e39a4b384d38d40f7d2cb2a4ee + sizeBytes: 344185232 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:842216399613774640a4605202d446cf61bd48ff20e12391a0239cbc6a8f2c77 + sizeBytes: 344185052 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:8ee2bb6fc5484373227b17e377fc226d8d19be11d38d6dbc304970bd46bc929b + sizeBytes: 344159662 + - names: + - gcr.io/nathanmittler-istio-test/pilot@sha256:111e432396950e01835c23471660968c37928c38bb08f2d53b820bbfe7cbd558 + sizeBytes: 307722351 + - names: + - 
gcr.io/nathanmittler-istio-test/pilot@sha256:2445d3c2839825be2decbafcd3f2668bdf148ba9acbbb855810006a58899f320 + sizeBytes: 307722351 + - names: + - gcr.io/nathanmittler-istio-test/pilot@sha256:b62e9f12609b89892bb38c858936f76d81aa3ccdc91a3961309f900c1c4f574b + sizeBytes: 307722351 + nodeInfo: + architecture: amd64 + bootID: 8f772c7c-09eb-41eb-8bb5-76ef214eaaa1 + containerRuntimeVersion: docker://17.3.2 + kernelVersion: 4.14.65+ + kubeProxyVersion: v1.11.3-gke.18 + kubeletVersion: v1.11.3-gke.18 + machineID: f325f89cd295bdcda652fd40f0049e32 + operatingSystem: linux + osImage: Container-Optimized OS from Google + systemUUID: F325F89C-D295-BDCD-A652-FD40F0049E32 +`) + +func datasetV1NodeYamlSkipBytes() ([]byte, error) { + return _datasetV1NodeYamlSkip, nil +} + +func datasetV1NodeYamlSkip() (*asset, error) { + bytes, err := datasetV1NodeYamlSkipBytes() + if err != nil { + return nil, err + } + + info := bindataFileInfo{name: "dataset/v1/node.yaml.skip", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)} + a := &asset{bytes: bytes, info: info} + return a, nil +} + +var _datasetV1Node_expectedJson = []byte(`{ + "k8s/core/v1/nodes": [ + { + "TypeURL": "type.googleapis.com/k8s.io.api.core.v1.NodeSpec", + "Metadata": { + "name": "gke-istio-test-default-pool-866a0405-420r", + "annotations": { + "container.googleapis.com/instance_id": "2787417306096525587", + "node.alpha.kubernetes.io/ttl": "0", + "volumes.kubernetes.io/controller-managed-attach-detach": "true" + }, + "labels": { + "beta.kubernetes.io/arch": "amd64", + "beta.kubernetes.io/fluentd-ds-ready": "true", + "beta.kubernetes.io/instance-type": "n1-standard-4", + "beta.kubernetes.io/os": "linux", + "cloud.google.com/gke-nodepool": "default-pool", + "cloud.google.com/gke-os-distribution": "cos", + "failure-domain.beta.kubernetes.io/region": "us-central1", + "failure-domain.beta.kubernetes.io/zone": "us-central1-a", + "kubernetes.io/hostname": "gke-istio-test-default-pool-866a0405-420r" + } + }, + "Body": { + 
"externalID": "1929748586650271976", + "podCIDR": "10.40.0.0/24", + "providerID": "gce://nathanmittler-istio-test/us-central1-a/gke-istio-test-default-pool-866a0405-420r" + } + } + ] +} +`) + +func datasetV1Node_expectedJsonBytes() ([]byte, error) { + return _datasetV1Node_expectedJson, nil +} + +func datasetV1Node_expectedJson() (*asset, error) { + bytes, err := datasetV1Node_expectedJsonBytes() + if err != nil { + return nil, err + } + + info := bindataFileInfo{name: "dataset/v1/node_expected.json", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)} + a := &asset{bytes: bytes, info: info} + return a, nil +} + +var _datasetV1PodYamlSkip = []byte(`apiVersion: v1 +kind: Pod +metadata: + annotations: + scheduler.alpha.kubernetes.io/critical-pod: "" + seccomp.security.alpha.kubernetes.io/pod: docker/default + creationTimestamp: "2018-12-03T16:59:57Z" + generateName: kube-dns-548976df6c- + labels: + k8s-app: kube-dns + pod-template-hash: "1045328927" + name: kube-dns-548976df6c-d9kkv + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: kube-dns-548976df6c + uid: b589a851-f71b-11e8-af4f-42010a800072 + resourceVersion: "50572715" + selfLink: /api/v1/namespaces/kube-system/pods/kube-dns-548976df6c-d9kkv + uid: dd4bbbd4-f71c-11e8-af4f-42010a800072 +spec: + containers: + - args: + - --domain=cluster.local. 
+ - --dns-port=10053 + - --config-dir=/kube-dns-config + - --v=2 + env: + - name: PROMETHEUS_PORT + value: "10055" + image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 + imagePullPolicy: IfNotPresent + livenessProbe: + failureThreshold: 5 + handler: + httpGet: + path: /healthcheck/kubedns + port: + intVal: 10054 + scheme: HTTP + initialDelaySeconds: 60 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + name: kubedns + ports: + - containerPort: 10053 + name: dns-local + protocol: UDP + - containerPort: 10053 + name: dns-tcp-local + protocol: TCP + - containerPort: 10055 + name: metrics + protocol: TCP + readinessProbe: + failureThreshold: 3 + handler: + httpGet: + path: /readiness + port: + intVal: 8081 + scheme: HTTP + initialDelaySeconds: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /kube-dns-config + name: kube-dns-config + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-dns-token-lwn8l + readOnly: true + - args: + - -v=2 + - -logtostderr + - -configDir=/etc/k8s/dns/dnsmasq-nanny + - -restartDnsmasq=true + - -- + - -k + - --cache-size=1000 + - --no-negcache + - --log-facility=- + - --server=/cluster.local/127.0.0.1#10053 + - --server=/in-addr.arpa/127.0.0.1#10053 + - --server=/ip6.arpa/127.0.0.1#10053 + image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 + imagePullPolicy: IfNotPresent + livenessProbe: + failureThreshold: 5 + handler: + httpGet: + path: /healthcheck/dnsmasq + port: + intVal: 10054 + scheme: HTTP + initialDelaySeconds: 60 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + name: dnsmasq + ports: + - containerPort: 53 + name: dns + protocol: UDP + - containerPort: 53 + name: dns-tcp + protocol: TCP + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /etc/k8s/dns/dnsmasq-nanny + name: kube-dns-config + - mountPath: 
/var/run/secrets/kubernetes.io/serviceaccount + name: kube-dns-token-lwn8l + readOnly: true + - args: + - --v=2 + - --logtostderr + - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV + - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV + image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 + imagePullPolicy: IfNotPresent + livenessProbe: + failureThreshold: 5 + handler: + httpGet: + path: /metrics + port: + intVal: 10054 + scheme: HTTP + initialDelaySeconds: 60 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + name: sidecar + ports: + - containerPort: 10054 + name: metrics + protocol: TCP + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-dns-token-lwn8l + readOnly: true + - command: + - /monitor + - --component=kubedns + - --target-port=10054 + - --stackdriver-prefix=container.googleapis.com/internal/addons + - --api-override=https://monitoring.googleapis.com/ + - --whitelisted-metrics=probe_kubedns_latency_ms,probe_kubedns_errors,dnsmasq_misses,dnsmasq_hits + - --pod-id=$(POD_NAME) + - --namespace-id=$(POD_NAMESPACE) + - --v=2 + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + image: gcr.io/google-containers/prometheus-to-sd:v0.2.3 + imagePullPolicy: IfNotPresent + name: prometheus-to-sd + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-dns-token-lwn8l + readOnly: true + dnsPolicy: Default + nodeName: gke-istio-test-default-pool-866a0405-ftch + priority: 2000000000 + priorityClassName: system-cluster-critical + restartPolicy: Always + schedulerName: default-scheduler + securityContext: {} + serviceAccount: kube-dns 
+ serviceAccountName: kube-dns + terminationGracePeriodSeconds: "30" + tolerations: + - key: CriticalAddonsOnly + operator: Exists + - effect: NoExecute + key: node.kubernetes.io/not-ready + operator: Exists + tolerationSeconds: "300" + - effect: NoExecute + key: node.kubernetes.io/unreachable + operator: Exists + tolerationSeconds: "300" + volumes: + - name: kube-dns-config + volumeSource: + configMap: + defaultMode: 420 + localObjectReference: + name: kube-dns + optional: true + - name: kube-dns-token-lwn8l + volumeSource: + secret: + defaultMode: 420 + secretName: kube-dns-token-lwn8l +status: + conditions: + - lastProbeTime: null + lastTransitionTime: "2018-12-03T17:00:00Z" + status: "True" + type: Initialized + - lastProbeTime: null + lastTransitionTime: "2018-12-03T17:00:20Z" + status: "True" + type: Ready + - lastProbeTime: null + lastTransitionTime: null + status: "True" + type: ContainersReady + - lastProbeTime: null + lastTransitionTime: "2018-12-03T16:59:57Z" + status: "True" + type: PodScheduled + containerStatuses: + - containerID: docker://676f6c98bfa136315c4ccf0fe40e7a56cbf9ac85128e94310eae82f191246b3e + image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 + imageID: docker-pullable://k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64@sha256:45df3e8e0c551bd0c79cdba48ae6677f817971dcbd1eeed7fd1f9a35118410e4 + lastState: {} + name: dnsmasq + ready: true + state: + running: + startedAt: "2018-12-03T17:00:14Z" + - containerID: docker://93fd0664e150982dad0481c5260183308a7035a2f938ec50509d586ed586a107 + image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 + imageID: docker-pullable://k8s.gcr.io/k8s-dns-kube-dns-amd64@sha256:618a82fa66cf0c75e4753369a6999032372be7308866fc9afb381789b1e5ad52 + lastState: {} + name: kubedns + ready: true + state: + running: + startedAt: "2018-12-03T17:00:10Z" + - containerID: docker://e823b79a0a48af75f2eebb1c89ba4c31e8c1ee67ee0d917ac7b4891b67d2cd0f + image: gcr.io/google-containers/prometheus-to-sd:v0.2.3 + imageID: 
docker-pullable://gcr.io/google-containers/prometheus-to-sd@sha256:be220ec4a66275442f11d420033c106bb3502a3217a99c806eef3cf9858788a2 + lastState: {} + name: prometheus-to-sd + ready: true + state: + running: + startedAt: "2018-12-03T17:00:18Z" + - containerID: docker://74223c401a8dac04b8bd29cdfedcb216881791b4e84bb80a15714991dd18735e + image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 + imageID: docker-pullable://k8s.gcr.io/k8s-dns-sidecar-amd64@sha256:cedc8fe2098dffc26d17f64061296b7aa54258a31513b6c52df271a98bb522b3 + lastState: {} + name: sidecar + ready: true + state: + running: + startedAt: "2018-12-03T17:00:16Z" + hostIP: 10.128.0.5 + phase: Running + podIP: 10.40.1.4 + qosClass: Burstable + startTime: "2018-12-03T17:00:00Z" +`) + +func datasetV1PodYamlSkipBytes() ([]byte, error) { + return _datasetV1PodYamlSkip, nil +} + +func datasetV1PodYamlSkip() (*asset, error) { + bytes, err := datasetV1PodYamlSkipBytes() + if err != nil { + return nil, err + } + + info := bindataFileInfo{name: "dataset/v1/pod.yaml.skip", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)} + a := &asset{bytes: bytes, info: info} + return a, nil +} + +var _datasetV1Pod_expectedJson = []byte(`{ + "k8s/core/v1/pods": [ + { + "TypeURL": "type.googleapis.com/k8s.io.api.core.v1.PodSpec", + "Metadata": { + "name": "kube-dns-548976df6c-d9kkv", + "annotations": { + "scheduler.alpha.kubernetes.io/critical-pod": "", + "seccomp.security.alpha.kubernetes.io/pod": "docker/default" + }, + "labels": { + "k8s-app": "kube-dns", + "pod-template-hash": "1045328927" + } + }, + "Body": { + "containers": [ + { + "args": [ + "--domain=cluster.local.", + "--dns-port=10053", + "--config-dir=/kube-dns-config", + "--v=2" + ], + "env": [ + { + "name": "PROMETHEUS_PORT", + "value": "10055" + } + ], + "image": "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13", + "imagePullPolicy": "IfNotPresent", + "livenessProbe": { + "failureThreshold": 5, + "httpGet": { + "path": "/healthcheck/kubedns", + "port": 10054, + "scheme": 
"HTTP" + }, + "initialDelaySeconds": 60, + "periodSeconds": 10, + "successThreshold": 1, + "timeoutSeconds": 5 + }, + "name": "kubedns", + "ports": [ + { + "containerPort": 10053, + "name": "dns-local", + "protocol": "UDP" + }, + { + "containerPort": 10053, + "name": "dns-tcp-local", + "protocol": "TCP" + }, + { + "containerPort": 10055, + "name": "metrics", + "protocol": "TCP" + } + ], + "readinessProbe": { + "failureThreshold": 3, + "httpGet": { + "path": "/readiness", + "port": 8081, + "scheme": "HTTP" + }, + "initialDelaySeconds": 3, + "periodSeconds": 10, + "successThreshold": 1, + "timeoutSeconds": 5 + }, + "resources": {}, + "terminationMessagePath": "/dev/termination-log", + "terminationMessagePolicy": "File", + "volumeMounts": [ + { + "mountPath": "/kube-dns-config", + "name": "kube-dns-config" + }, + { + "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", + "name": "kube-dns-token-lwn8l", + "readOnly": true + } + ] + }, + { + "args": [ + "-v=2", + "-logtostderr", + "-configDir=/etc/k8s/dns/dnsmasq-nanny", + "-restartDnsmasq=true", + "--", + "-k", + "--cache-size=1000", + "--no-negcache", + "--log-facility=-", + "--server=/cluster.local/127.0.0.1#10053", + "--server=/in-addr.arpa/127.0.0.1#10053", + "--server=/ip6.arpa/127.0.0.1#10053" + ], + "image": "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13", + "imagePullPolicy": "IfNotPresent", + "livenessProbe": { + "failureThreshold": 5, + "httpGet": { + "path": "/healthcheck/dnsmasq", + "port": 10054, + "scheme": "HTTP" + }, + "initialDelaySeconds": 60, + "periodSeconds": 10, + "successThreshold": 1, + "timeoutSeconds": 5 + }, + "name": "dnsmasq", + "ports": [ + { + "containerPort": 53, + "name": "dns", + "protocol": "UDP" + }, + { + "containerPort": 53, + "name": "dns-tcp", + "protocol": "TCP" + } + ], + "resources": {}, + "terminationMessagePath": "/dev/termination-log", + "terminationMessagePolicy": "File", + "volumeMounts": [ + { + "mountPath": "/etc/k8s/dns/dnsmasq-nanny", + "name": 
"kube-dns-config" + }, + { + "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", + "name": "kube-dns-token-lwn8l", + "readOnly": true + } + ] + }, + { + "args": [ + "--v=2", + "--logtostderr", + "--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV", + "--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV" + ], + "image": "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13", + "imagePullPolicy": "IfNotPresent", + "livenessProbe": { + "failureThreshold": 5, + "httpGet": { + "path": "/metrics", + "port": 10054, + "scheme": "HTTP" + }, + "initialDelaySeconds": 60, + "periodSeconds": 10, + "successThreshold": 1, + "timeoutSeconds": 5 + }, + "name": "sidecar", + "ports": [ + { + "containerPort": 10054, + "name": "metrics", + "protocol": "TCP" + } + ], + "resources": {}, + "terminationMessagePath": "/dev/termination-log", + "terminationMessagePolicy": "File", + "volumeMounts": [ + { + "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", + "name": "kube-dns-token-lwn8l", + "readOnly": true + } + ] + }, + { + "command": [ + "/monitor", + "--component=kubedns", + "--target-port=10054", + "--stackdriver-prefix=container.googleapis.com/internal/addons", + "--api-override=https://monitoring.googleapis.com/", + "--whitelisted-metrics=probe_kubedns_latency_ms,probe_kubedns_errors,dnsmasq_misses,dnsmasq_hits", + "--pod-id=$(POD_NAME)", + "--namespace-id=$(POD_NAMESPACE)", + "--v=2" + ], + "env": [ + { + "name": "POD_NAME", + "valueFrom": { + "fieldRef": { + "apiVersion": "v1", + "fieldPath": "metadata.name" + } + } + }, + { + "name": "POD_NAMESPACE", + "valueFrom": { + "fieldRef": { + "apiVersion": "v1", + "fieldPath": "metadata.namespace" + } + } + } + ], + "image": "gcr.io/google-containers/prometheus-to-sd:v0.2.3", + "imagePullPolicy": "IfNotPresent", + "name": "prometheus-to-sd", + "resources": {}, + "terminationMessagePath": "/dev/termination-log", + "terminationMessagePolicy": "File", + "volumeMounts": [ + { + "mountPath": 
"/var/run/secrets/kubernetes.io/serviceaccount", + "name": "kube-dns-token-lwn8l", + "readOnly": true + } + ] + } + ], + "dnsPolicy": "Default", + "nodeName": "gke-istio-test-default-pool-866a0405-ftch", + "priority": 2000000000, + "priorityClassName": "system-cluster-critical", + "restartPolicy": "Always", + "schedulerName": "default-scheduler", + "securityContext": {}, + "serviceAccount": "kube-dns", + "serviceAccountName": "kube-dns", + "terminationGracePeriodSeconds": 30, + "tolerations": [ + { + "key": "CriticalAddonsOnly", + "operator": "Exists" + }, + { + "effect": "NoExecute", + "key": "node.kubernetes.io/not-ready", + "operator": "Exists", + "tolerationSeconds": 300 + }, + { + "effect": "NoExecute", + "key": "node.kubernetes.io/unreachable", + "operator": "Exists", + "tolerationSeconds": 300 + } + ], + "volumes": [ + { + "configMap": { + "defaultMode": 420, + "name": "kube-dns", + "optional": true + }, + "name": "kube-dns-config" + }, + { + "name": "kube-dns-token-lwn8l", + "secret": { + "defaultMode": 420, + "secretName": "kube-dns-token-lwn8l" + } + } + ] + } + } + ] +} +`) + +func datasetV1Pod_expectedJsonBytes() ([]byte, error) { + return _datasetV1Pod_expectedJson, nil +} + +func datasetV1Pod_expectedJson() (*asset, error) { + bytes, err := datasetV1Pod_expectedJsonBytes() + if err != nil { + return nil, err + } + + info := bindataFileInfo{name: "dataset/v1/pod_expected.json", size: 0, mode: os.FileMode(0), modTime: time.Unix(0, 0)} + a := &asset{bytes: bytes, info: info} + return a, nil +} + // Asset loads and returns the asset for the given name. // It returns an error if the asset could not be found or // could not be loaded. @@ -828,21 +1658,24 @@ func AssetNames() []string { // _bindata is a table, holding each asset generator, mapped to its name. 
var _bindata = map[string]func() (*asset, error){ - "dataset/config.istio.io/v1alpha2/circonus.yaml": datasetConfigIstioIoV1alpha2CirconusYaml, - "dataset/config.istio.io/v1alpha2/circonus_expected.json": datasetConfigIstioIoV1alpha2Circonus_expectedJson, - "dataset/extensions/v1beta1/ingress_basic.yaml": datasetExtensionsV1beta1Ingress_basicYaml, - "dataset/extensions/v1beta1/ingress_basic_expected.json": datasetExtensionsV1beta1Ingress_basic_expectedJson, - "dataset/extensions/v1beta1/ingress_basic_meshconfig.yaml": datasetExtensionsV1beta1Ingress_basic_meshconfigYaml, - "dataset/extensions/v1beta1/ingress_merge_0.skip": datasetExtensionsV1beta1Ingress_merge_0Skip, - "dataset/extensions/v1beta1/ingress_merge_0.yaml": datasetExtensionsV1beta1Ingress_merge_0Yaml, - "dataset/extensions/v1beta1/ingress_merge_0_expected.json": datasetExtensionsV1beta1Ingress_merge_0_expectedJson, - "dataset/extensions/v1beta1/ingress_merge_0_meshconfig.yaml": datasetExtensionsV1beta1Ingress_merge_0_meshconfigYaml, - "dataset/extensions/v1beta1/ingress_merge_1.yaml": datasetExtensionsV1beta1Ingress_merge_1Yaml, - "dataset/extensions/v1beta1/ingress_merge_1_expected.json": datasetExtensionsV1beta1Ingress_merge_1_expectedJson, - "dataset/networking.istio.io/v1alpha3/destinationRule.yaml": datasetNetworkingIstioIoV1alpha3DestinationruleYaml, + "dataset/config.istio.io/v1alpha2/circonus.yaml": datasetConfigIstioIoV1alpha2CirconusYaml, + "dataset/config.istio.io/v1alpha2/circonus_expected.json": datasetConfigIstioIoV1alpha2Circonus_expectedJson, + "dataset/extensions/v1beta1/ingress_basic.yaml": datasetExtensionsV1beta1Ingress_basicYaml, + "dataset/extensions/v1beta1/ingress_basic_expected.json": datasetExtensionsV1beta1Ingress_basic_expectedJson, + "dataset/extensions/v1beta1/ingress_basic_meshconfig.yaml": datasetExtensionsV1beta1Ingress_basic_meshconfigYaml, + "dataset/extensions/v1beta1/ingress_merge_0.yaml": datasetExtensionsV1beta1Ingress_merge_0Yaml, + 
"dataset/extensions/v1beta1/ingress_merge_0_expected.json": datasetExtensionsV1beta1Ingress_merge_0_expectedJson, + "dataset/extensions/v1beta1/ingress_merge_0_meshconfig.yaml": datasetExtensionsV1beta1Ingress_merge_0_meshconfigYaml, + "dataset/extensions/v1beta1/ingress_merge_1.yaml": datasetExtensionsV1beta1Ingress_merge_1Yaml, + "dataset/extensions/v1beta1/ingress_merge_1_expected.json": datasetExtensionsV1beta1Ingress_merge_1_expectedJson, + "dataset/networking.istio.io/v1alpha3/destinationRule.yaml": datasetNetworkingIstioIoV1alpha3DestinationruleYaml, "dataset/networking.istio.io/v1alpha3/destinationRule_expected.json": datasetNetworkingIstioIoV1alpha3Destinationrule_expectedJson, - "dataset/networking.istio.io/v1alpha3/gateway.yaml": datasetNetworkingIstioIoV1alpha3GatewayYaml, - "dataset/networking.istio.io/v1alpha3/gateway_expected.json": datasetNetworkingIstioIoV1alpha3Gateway_expectedJson, + "dataset/networking.istio.io/v1alpha3/gateway.yaml": datasetNetworkingIstioIoV1alpha3GatewayYaml, + "dataset/networking.istio.io/v1alpha3/gateway_expected.json": datasetNetworkingIstioIoV1alpha3Gateway_expectedJson, + "dataset/v1/node.yaml.skip": datasetV1NodeYamlSkip, + "dataset/v1/node_expected.json": datasetV1Node_expectedJson, + "dataset/v1/pod.yaml.skip": datasetV1PodYamlSkip, + "dataset/v1/pod_expected.json": datasetV1Pod_expectedJson, } // AssetDir returns the file names below a certain @@ -884,36 +1717,40 @@ type bintree struct { Func func() (*asset, error) Children map[string]*bintree } - var _bintree = &bintree{nil, map[string]*bintree{ "dataset": &bintree{nil, map[string]*bintree{ "config.istio.io": &bintree{nil, map[string]*bintree{ "v1alpha2": &bintree{nil, map[string]*bintree{ - "circonus.yaml": &bintree{datasetConfigIstioIoV1alpha2CirconusYaml, map[string]*bintree{}}, + "circonus.yaml": &bintree{datasetConfigIstioIoV1alpha2CirconusYaml, map[string]*bintree{}}, "circonus_expected.json": &bintree{datasetConfigIstioIoV1alpha2Circonus_expectedJson, 
map[string]*bintree{}}, }}, }}, "extensions": &bintree{nil, map[string]*bintree{ "v1beta1": &bintree{nil, map[string]*bintree{ - "ingress_basic.yaml": &bintree{datasetExtensionsV1beta1Ingress_basicYaml, map[string]*bintree{}}, - "ingress_basic_expected.json": &bintree{datasetExtensionsV1beta1Ingress_basic_expectedJson, map[string]*bintree{}}, - "ingress_basic_meshconfig.yaml": &bintree{datasetExtensionsV1beta1Ingress_basic_meshconfigYaml, map[string]*bintree{}}, - "ingress_merge_0.skip": &bintree{datasetExtensionsV1beta1Ingress_merge_0Skip, map[string]*bintree{}}, - "ingress_merge_0.yaml": &bintree{datasetExtensionsV1beta1Ingress_merge_0Yaml, map[string]*bintree{}}, - "ingress_merge_0_expected.json": &bintree{datasetExtensionsV1beta1Ingress_merge_0_expectedJson, map[string]*bintree{}}, + "ingress_basic.yaml": &bintree{datasetExtensionsV1beta1Ingress_basicYaml, map[string]*bintree{}}, + "ingress_basic_expected.json": &bintree{datasetExtensionsV1beta1Ingress_basic_expectedJson, map[string]*bintree{}}, + "ingress_basic_meshconfig.yaml": &bintree{datasetExtensionsV1beta1Ingress_basic_meshconfigYaml, map[string]*bintree{}}, + "ingress_merge_0.yaml": &bintree{datasetExtensionsV1beta1Ingress_merge_0Yaml, map[string]*bintree{}}, + "ingress_merge_0_expected.json": &bintree{datasetExtensionsV1beta1Ingress_merge_0_expectedJson, map[string]*bintree{}}, "ingress_merge_0_meshconfig.yaml": &bintree{datasetExtensionsV1beta1Ingress_merge_0_meshconfigYaml, map[string]*bintree{}}, - "ingress_merge_1.yaml": &bintree{datasetExtensionsV1beta1Ingress_merge_1Yaml, map[string]*bintree{}}, - "ingress_merge_1_expected.json": &bintree{datasetExtensionsV1beta1Ingress_merge_1_expectedJson, map[string]*bintree{}}, + "ingress_merge_1.yaml": &bintree{datasetExtensionsV1beta1Ingress_merge_1Yaml, map[string]*bintree{}}, + "ingress_merge_1_expected.json": &bintree{datasetExtensionsV1beta1Ingress_merge_1_expectedJson, map[string]*bintree{}}, }}, }}, "networking.istio.io": &bintree{nil, 
map[string]*bintree{ "v1alpha3": &bintree{nil, map[string]*bintree{ - "destinationRule.yaml": &bintree{datasetNetworkingIstioIoV1alpha3DestinationruleYaml, map[string]*bintree{}}, + "destinationRule.yaml": &bintree{datasetNetworkingIstioIoV1alpha3DestinationruleYaml, map[string]*bintree{}}, "destinationRule_expected.json": &bintree{datasetNetworkingIstioIoV1alpha3Destinationrule_expectedJson, map[string]*bintree{}}, - "gateway.yaml": &bintree{datasetNetworkingIstioIoV1alpha3GatewayYaml, map[string]*bintree{}}, - "gateway_expected.json": &bintree{datasetNetworkingIstioIoV1alpha3Gateway_expectedJson, map[string]*bintree{}}, + "gateway.yaml": &bintree{datasetNetworkingIstioIoV1alpha3GatewayYaml, map[string]*bintree{}}, + "gateway_expected.json": &bintree{datasetNetworkingIstioIoV1alpha3Gateway_expectedJson, map[string]*bintree{}}, }}, }}, + "v1": &bintree{nil, map[string]*bintree{ + "node.yaml.skip": &bintree{datasetV1NodeYamlSkip, map[string]*bintree{}}, + "node_expected.json": &bintree{datasetV1Node_expectedJson, map[string]*bintree{}}, + "pod.yaml.skip": &bintree{datasetV1PodYamlSkip, map[string]*bintree{}}, + "pod_expected.json": &bintree{datasetV1Pod_expectedJson, map[string]*bintree{}}, + }}, }}, }} @@ -963,3 +1800,4 @@ func _filePath(dir, name string) string { cannonicalName := strings.Replace(name, "\\", "/", -1) return filepath.Join(append([]string{dir}, strings.Split(cannonicalName, "/")...)...) 
} + diff --git a/galley/pkg/testing/testdata/dataset/config.istio.io/v1alpha2/circonus_expected.json b/galley/pkg/testing/testdata/dataset/config.istio.io/v1alpha2/circonus_expected.json index 3680d4ce6930..e9cf1ce73303 100644 --- a/galley/pkg/testing/testdata/dataset/config.istio.io/v1alpha2/circonus_expected.json +++ b/galley/pkg/testing/testdata/dataset/config.istio.io/v1alpha2/circonus_expected.json @@ -1,28 +1,24 @@ { - "type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource": [ + "istio/config/v1alpha2/legacy/circonuses": [ { "Metadata": { - "name": "circonus/valid-circonus" + "name": "valid-circonus" }, - "Resource": { - "contents": { - "fields": { - "submission_interval": { - "Kind": { - "StringValue": "10s" - } - }, - "submission_url": { - "Kind": { - "StringValue": "https://trap.noit.circonus.net/module/httptrap/myuuid/mysecret" - } + "Body": { + "fields": { + "submission_interval": { + "Kind": { + "StringValue": "10s" + } + }, + "submission_url": { + "Kind": { + "StringValue": "https://trap.noit.circonus.net/module/httptrap/myuuid/mysecret" } } - }, - "kind": "circonus", - "name": "valid-circonus" + } }, - "TypeURL": "type.googleapis.com/istio.mcp.v1alpha1.extensions.LegacyMixerResource" + "TypeURL": "type.googleapis.com/google.protobuf.Struct" } ] } diff --git a/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_basic_expected.json b/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_basic_expected.json index ed2ba48dee03..33661f15ada4 100644 --- a/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_basic_expected.json +++ b/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_basic_expected.json @@ -1,11 +1,11 @@ { - "type.googleapis.com/istio.networking.v1alpha3.Gateway": [ + "istio/networking/v1alpha3/gateways": [ { "TypeURL": "type.googleapis.com/istio.networking.v1alpha3.Gateway", "Metadata": { "name": "istio-system/foo-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { 
"selector": { "istio": "ingress" }, diff --git a/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_0.skip b/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_0.skip deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_0_expected.json b/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_0_expected.json index 783831c12ba7..a02d994118ed 100644 --- a/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_0_expected.json +++ b/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_0_expected.json @@ -1,10 +1,10 @@ { - "type.googleapis.com/istio.networking.v1alpha3.Gateway": [ + "istio/networking/v1alpha3/gateways": [ { "Metadata": { "name": "istio-system/bar-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "selector": { "istio": "ingress" }, @@ -27,7 +27,7 @@ "Metadata": { "name": "istio-system/foo-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "selector": { "istio": "ingress" }, @@ -48,12 +48,12 @@ } ], - "type.googleapis.com/istio.networking.v1alpha3.VirtualService": [ + "istio/networking/v1alpha3/virtualservices": [ { "Metadata": { "name": "istio-system/foo-bar-com-bar-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "gateways": [ "istio-autogenerated-k8s-ingress" ], @@ -114,4 +114,4 @@ "TypeURL": "type.googleapis.com/istio.networking.v1alpha3.VirtualService" } ] -} \ No newline at end of file +} diff --git a/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_1_expected.json b/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_1_expected.json index 1cf8fbb18996..202368ccb0d5 100644 --- a/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_1_expected.json +++ b/galley/pkg/testing/testdata/dataset/extensions/v1beta1/ingress_merge_1_expected.json @@ -1,10 +1,10 @@ { - 
"type.googleapis.com/istio.networking.v1alpha3.Gateway": [ + "istio/networking/v1alpha3/gateways": [ { "Metadata": { "name": "istio-system/bar-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "selector": { "istio": "ingress" }, @@ -27,7 +27,7 @@ "Metadata": { "name": "istio-system/foo-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "selector": { "istio": "ingress" }, @@ -48,12 +48,12 @@ } ], - "type.googleapis.com/istio.networking.v1alpha3.VirtualService": [ + "istio/networking/v1alpha3/virtualservices": [ { "Metadata": { "name": "istio-system/foo-bar-com-bar-istio-autogenerated-k8s-ingress" }, - "Resource": { + "Body": { "gateways": [ "istio-autogenerated-k8s-ingress" ], @@ -114,4 +114,4 @@ "TypeURL": "type.googleapis.com/istio.networking.v1alpha3.VirtualService" } ] -} \ No newline at end of file +} diff --git a/galley/pkg/testing/testdata/dataset/networking.istio.io/v1alpha3/destinationRule_expected.json b/galley/pkg/testing/testdata/dataset/networking.istio.io/v1alpha3/destinationRule_expected.json index 419b80e9e67b..88e594910ac1 100644 --- a/galley/pkg/testing/testdata/dataset/networking.istio.io/v1alpha3/destinationRule_expected.json +++ b/galley/pkg/testing/testdata/dataset/networking.istio.io/v1alpha3/destinationRule_expected.json @@ -1,11 +1,11 @@ { - "type.googleapis.com/istio.networking.v1alpha3.DestinationRule": [ + "istio/networking/v1alpha3/destinationrules": [ { "TypeURL": "type.googleapis.com/istio.networking.v1alpha3.DestinationRule", "Metadata": { "name": "tcp-echo-destination" }, - "Resource": { + "Body": { "host": "tcp-echo", "subsets": [ { diff --git a/galley/pkg/testing/testdata/dataset/networking.istio.io/v1alpha3/gateway_expected.json b/galley/pkg/testing/testdata/dataset/networking.istio.io/v1alpha3/gateway_expected.json index 856ca85bd48d..7d69a7caa332 100644 --- a/galley/pkg/testing/testdata/dataset/networking.istio.io/v1alpha3/gateway_expected.json +++ 
b/galley/pkg/testing/testdata/dataset/networking.istio.io/v1alpha3/gateway_expected.json @@ -1,11 +1,11 @@ { - "type.googleapis.com/istio.networking.v1alpha3.Gateway": [ + "istio/networking/v1alpha3/gateways": [ { "TypeURL": "type.googleapis.com/istio.networking.v1alpha3.Gateway", "Metadata": { "name": "helloworld-gateway" }, - "Resource": { + "Body": { "selector": { "istio": "ingressgateway" }, diff --git a/galley/pkg/testing/testdata/dataset/v1/node.yaml.skip b/galley/pkg/testing/testdata/dataset/v1/node.yaml.skip new file mode 100644 index 000000000000..b5367a06cc68 --- /dev/null +++ b/galley/pkg/testing/testdata/dataset/v1/node.yaml.skip @@ -0,0 +1,191 @@ +apiVersion: v1 +kind: Node +metadata: + annotations: + container.googleapis.com/instance_id: "2787417306096525587" + node.alpha.kubernetes.io/ttl: "0" + volumes.kubernetes.io/controller-managed-attach-detach: "true" + creationTimestamp: 2018-10-05T19:40:48Z + labels: + beta.kubernetes.io/arch: amd64 + beta.kubernetes.io/fluentd-ds-ready: "true" + beta.kubernetes.io/instance-type: n1-standard-4 + beta.kubernetes.io/os: linux + cloud.google.com/gke-nodepool: default-pool + cloud.google.com/gke-os-distribution: cos + failure-domain.beta.kubernetes.io/region: us-central1 + failure-domain.beta.kubernetes.io/zone: us-central1-a + kubernetes.io/hostname: gke-istio-test-default-pool-866a0405-420r + name: gke-istio-test-default-pool-866a0405-420r + resourceVersion: "59148590" + selfLink: /api/v1/nodes/gke-istio-test-default-pool-866a0405-420r + uid: 8f63dfef-c8d6-11e8-8901-42010a800278 +spec: + externalID: "1929748586650271976" + podCIDR: 10.40.0.0/24 + providerID: gce://nathanmittler-istio-test/us-central1-a/gke-istio-test-default-pool-866a0405-420r +status: + addresses: + - address: 10.128.0.4 + type: InternalIP + - address: 35.238.214.129 + type: ExternalIP + - address: gke-istio-test-default-pool-866a0405-420r + type: Hostname + allocatable: + cpu: 3920m + ephemeral-storage: "47093746742" + hugepages-2Mi: "0" + 
memory: 12699980Ki + pods: "110" + capacity: + cpu: "4" + ephemeral-storage: 98868448Ki + hugepages-2Mi: "0" + memory: 15399244Ki + pods: "110" + conditions: + - lastHeartbeatTime: 2019-01-08T18:09:08Z + lastTransitionTime: 2018-12-03T17:00:58Z + message: node is functioning properly + reason: UnregisterNetDevice + status: "False" + type: FrequentUnregisterNetDevice + - lastHeartbeatTime: 2019-01-08T18:09:08Z + lastTransitionTime: 2018-12-03T16:55:56Z + message: kernel has no deadlock + reason: KernelHasNoDeadlock + status: "False" + type: KernelDeadlock + - lastHeartbeatTime: 2018-10-05T19:40:58Z + lastTransitionTime: 2018-10-05T19:40:58Z + message: RouteController created a route + reason: RouteCreated + status: "False" + type: NetworkUnavailable + - lastHeartbeatTime: 2019-01-08T18:09:39Z + lastTransitionTime: 2018-12-03T16:55:57Z + message: kubelet has sufficient disk space available + reason: KubeletHasSufficientDisk + status: "False" + type: OutOfDisk + - lastHeartbeatTime: 2019-01-08T18:09:39Z + lastTransitionTime: 2018-12-03T16:55:57Z + message: kubelet has sufficient memory available + reason: KubeletHasSufficientMemory + status: "False" + type: MemoryPressure + - lastHeartbeatTime: 2019-01-08T18:09:39Z + lastTransitionTime: 2018-12-03T16:55:57Z + message: kubelet has no disk pressure + reason: KubeletHasNoDiskPressure + status: "False" + type: DiskPressure + - lastHeartbeatTime: 2019-01-08T18:09:39Z + lastTransitionTime: 2018-10-05T19:40:48Z + message: kubelet has sufficient PID available + reason: KubeletHasSufficientPID + status: "False" + type: PIDPressure + - lastHeartbeatTime: 2019-01-08T18:09:39Z + lastTransitionTime: 2018-12-03T16:56:07Z + message: kubelet is posting ready status. 
AppArmor enabled + reason: KubeletReady + status: "True" + type: Ready + daemonEndpoints: + kubeletEndpoint: + Port: 10250 + images: + - names: + - gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:a33f69d0034fdce835a1eb7df8a051ea74323f3fc30d911bbd2e3f2aef09fc93 + - gcr.io/stackdriver-agents/stackdriver-logging-agent:0.3-1.5.34-1-k8s-1 + sizeBytes: 554981103 + - names: + - istio/examples-bookinfo-reviews-v3@sha256:8c0385f0ca799e655d8770b52cb4618ba54e8966a0734ab1aeb6e8b14e171a3b + - istio/examples-bookinfo-reviews-v3:1.9.0 + sizeBytes: 525074812 + - names: + - istio/examples-bookinfo-reviews-v2@sha256:d2483dcb235b27309680177726e4e86905d66e47facaf1d57ed590b2bf95c8ad + - istio/examples-bookinfo-reviews-v2:1.9.0 + sizeBytes: 525074812 + - names: + - istio/examples-bookinfo-reviews-v1@sha256:920d46b3c526376b28b90d0e895ca7682d36132e6338301fcbcd567ef81bde05 + - istio/examples-bookinfo-reviews-v1:1.9.0 + sizeBytes: 525074812 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:8cea2c055dd3d3ab78f99584256efcc1cff7d8ddbed11cded404e9d164235502 + sizeBytes: 448337138 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:9949bc22667ef88e54ae91700a64bf1459e8c14ed92b870b7ec2f630e14cf3c1 + sizeBytes: 446407220 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:fc1f957cfa26673768be8fa865066f730f22fde98a6e80654d00f755a643b507 + sizeBytes: 446407220 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:23a52850819d5196d66e8e20f4f63f314f779716f830e1d109ad0e24b1f0df43 + sizeBytes: 446407220 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:e338c2c5cbc379db24c5b2d67a4acc9cca9a069c2927217fca0ce7cbc582d312 + - gcr.io/nathanmittler-istio-test/proxyv2:latest + sizeBytes: 446398900 + - names: + - gcr.io/istio-release/proxyv2@sha256:dec972eab4f46c974feec1563ea484ad4995edf55ea91d42e148c5db04b3d4d2 + - gcr.io/istio-release/proxyv2:master-latest-daily + sizeBytes: 353271308 + - names: + - 
gcr.io/nathanmittler-istio-test/proxyv2@sha256:cb4a29362ff9014bf1d96e0ce2bb6337bf034908bb4a8d48af0628a4d8d64413 + sizeBytes: 344543156 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:3f4115cd8c26a17f6bf8ea49f1ff5b875382bda5a6d46281c70c886e802666b0 + sizeBytes: 344257154 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:cdd2f527b4bd392b533d2d0e62c257c19d5a35a6b5fc3512aa327c560866aec1 + sizeBytes: 344257154 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:6ec1dced4cee8569c77817927938fa4341f939e0dddab511bc3ee8724d652ae2 + sizeBytes: 344257154 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:9d502fd29961bc3464f7906ac0e86b07edf01cf4892352ef780e55b3525fb0b8 + sizeBytes: 344257154 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:4e75c42518bb46376cfe0b4fbaa3da1d8f1cea99f706736f1b0b04a3ac554db2 + sizeBytes: 344201616 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:58a7511f549448f6f86280559069bc57f5c754877ebec69da5bbc7ad55e42162 + sizeBytes: 344201616 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:7f60a750d15cda9918e9172e529270ce78c670751d4027f6adc6bdc84ec2d884 + sizeBytes: 344201436 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:6fc25c08212652c7539caaf0f6d913d929f84c54767f20066657ce0f4e6a51e0 + sizeBytes: 344193424 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:4e93825950c831ce6d2b65c9a80921c8860035e39a4b384d38d40f7d2cb2a4ee + sizeBytes: 344185232 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:842216399613774640a4605202d446cf61bd48ff20e12391a0239cbc6a8f2c77 + sizeBytes: 344185052 + - names: + - gcr.io/nathanmittler-istio-test/proxyv2@sha256:8ee2bb6fc5484373227b17e377fc226d8d19be11d38d6dbc304970bd46bc929b + sizeBytes: 344159662 + - names: + - gcr.io/nathanmittler-istio-test/pilot@sha256:111e432396950e01835c23471660968c37928c38bb08f2d53b820bbfe7cbd558 + sizeBytes: 307722351 + - names: + - 
gcr.io/nathanmittler-istio-test/pilot@sha256:2445d3c2839825be2decbafcd3f2668bdf148ba9acbbb855810006a58899f320 + sizeBytes: 307722351 + - names: + - gcr.io/nathanmittler-istio-test/pilot@sha256:b62e9f12609b89892bb38c858936f76d81aa3ccdc91a3961309f900c1c4f574b + sizeBytes: 307722351 + nodeInfo: + architecture: amd64 + bootID: 8f772c7c-09eb-41eb-8bb5-76ef214eaaa1 + containerRuntimeVersion: docker://17.3.2 + kernelVersion: 4.14.65+ + kubeProxyVersion: v1.11.3-gke.18 + kubeletVersion: v1.11.3-gke.18 + machineID: f325f89cd295bdcda652fd40f0049e32 + operatingSystem: linux + osImage: Container-Optimized OS from Google + systemUUID: F325F89C-D295-BDCD-A652-FD40F0049E32 diff --git a/galley/pkg/testing/testdata/dataset/v1/node_expected.json b/galley/pkg/testing/testdata/dataset/v1/node_expected.json new file mode 100644 index 000000000000..ef121c2d5e97 --- /dev/null +++ b/galley/pkg/testing/testdata/dataset/v1/node_expected.json @@ -0,0 +1,31 @@ +{ + "k8s/core/v1/nodes": [ + { + "TypeURL": "type.googleapis.com/k8s.io.api.core.v1.NodeSpec", + "Metadata": { + "name": "gke-istio-test-default-pool-866a0405-420r", + "annotations": { + "container.googleapis.com/instance_id": "2787417306096525587", + "node.alpha.kubernetes.io/ttl": "0", + "volumes.kubernetes.io/controller-managed-attach-detach": "true" + }, + "labels": { + "beta.kubernetes.io/arch": "amd64", + "beta.kubernetes.io/fluentd-ds-ready": "true", + "beta.kubernetes.io/instance-type": "n1-standard-4", + "beta.kubernetes.io/os": "linux", + "cloud.google.com/gke-nodepool": "default-pool", + "cloud.google.com/gke-os-distribution": "cos", + "failure-domain.beta.kubernetes.io/region": "us-central1", + "failure-domain.beta.kubernetes.io/zone": "us-central1-a", + "kubernetes.io/hostname": "gke-istio-test-default-pool-866a0405-420r" + } + }, + "Body": { + "externalID": "1929748586650271976", + "podCIDR": "10.40.0.0/24", + "providerID": "gce://nathanmittler-istio-test/us-central1-a/gke-istio-test-default-pool-866a0405-420r" + } + } + 
] +} diff --git a/galley/pkg/testing/testdata/dataset/v1/pod.yaml.skip b/galley/pkg/testing/testdata/dataset/v1/pod.yaml.skip new file mode 100644 index 000000000000..f3138251daa6 --- /dev/null +++ b/galley/pkg/testing/testdata/dataset/v1/pod.yaml.skip @@ -0,0 +1,275 @@ +apiVersion: v1 +kind: Pod +metadata: + annotations: + scheduler.alpha.kubernetes.io/critical-pod: "" + seccomp.security.alpha.kubernetes.io/pod: docker/default + creationTimestamp: "2018-12-03T16:59:57Z" + generateName: kube-dns-548976df6c- + labels: + k8s-app: kube-dns + pod-template-hash: "1045328927" + name: kube-dns-548976df6c-d9kkv + ownerReferences: + - apiVersion: apps/v1 + blockOwnerDeletion: true + controller: true + kind: ReplicaSet + name: kube-dns-548976df6c + uid: b589a851-f71b-11e8-af4f-42010a800072 + resourceVersion: "50572715" + selfLink: /api/v1/namespaces/kube-system/pods/kube-dns-548976df6c-d9kkv + uid: dd4bbbd4-f71c-11e8-af4f-42010a800072 +spec: + containers: + - args: + - --domain=cluster.local. + - --dns-port=10053 + - --config-dir=/kube-dns-config + - --v=2 + env: + - name: PROMETHEUS_PORT + value: "10055" + image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 + imagePullPolicy: IfNotPresent + livenessProbe: + failureThreshold: 5 + handler: + httpGet: + path: /healthcheck/kubedns + port: + intVal: 10054 + scheme: HTTP + initialDelaySeconds: 60 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + name: kubedns + ports: + - containerPort: 10053 + name: dns-local + protocol: UDP + - containerPort: 10053 + name: dns-tcp-local + protocol: TCP + - containerPort: 10055 + name: metrics + protocol: TCP + readinessProbe: + failureThreshold: 3 + handler: + httpGet: + path: /readiness + port: + intVal: 8081 + scheme: HTTP + initialDelaySeconds: 3 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /kube-dns-config + name: kube-dns-config + - mountPath: 
/var/run/secrets/kubernetes.io/serviceaccount + name: kube-dns-token-lwn8l + readOnly: true + - args: + - -v=2 + - -logtostderr + - -configDir=/etc/k8s/dns/dnsmasq-nanny + - -restartDnsmasq=true + - -- + - -k + - --cache-size=1000 + - --no-negcache + - --log-facility=- + - --server=/cluster.local/127.0.0.1#10053 + - --server=/in-addr.arpa/127.0.0.1#10053 + - --server=/ip6.arpa/127.0.0.1#10053 + image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 + imagePullPolicy: IfNotPresent + livenessProbe: + failureThreshold: 5 + handler: + httpGet: + path: /healthcheck/dnsmasq + port: + intVal: 10054 + scheme: HTTP + initialDelaySeconds: 60 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + name: dnsmasq + ports: + - containerPort: 53 + name: dns + protocol: UDP + - containerPort: 53 + name: dns-tcp + protocol: TCP + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /etc/k8s/dns/dnsmasq-nanny + name: kube-dns-config + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-dns-token-lwn8l + readOnly: true + - args: + - --v=2 + - --logtostderr + - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV + - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV + image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 + imagePullPolicy: IfNotPresent + livenessProbe: + failureThreshold: 5 + handler: + httpGet: + path: /metrics + port: + intVal: 10054 + scheme: HTTP + initialDelaySeconds: 60 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + name: sidecar + ports: + - containerPort: 10054 + name: metrics + protocol: TCP + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-dns-token-lwn8l + readOnly: true + - command: + - /monitor + - --component=kubedns + - --target-port=10054 + - 
--stackdriver-prefix=container.googleapis.com/internal/addons + - --api-override=https://monitoring.googleapis.com/ + - --whitelisted-metrics=probe_kubedns_latency_ms,probe_kubedns_errors,dnsmasq_misses,dnsmasq_hits + - --pod-id=$(POD_NAME) + - --namespace-id=$(POD_NAMESPACE) + - --v=2 + env: + - name: POD_NAME + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: metadata.namespace + image: gcr.io/google-containers/prometheus-to-sd:v0.2.3 + imagePullPolicy: IfNotPresent + name: prometheus-to-sd + resources: {} + terminationMessagePath: /dev/termination-log + terminationMessagePolicy: File + volumeMounts: + - mountPath: /var/run/secrets/kubernetes.io/serviceaccount + name: kube-dns-token-lwn8l + readOnly: true + dnsPolicy: Default + nodeName: gke-istio-test-default-pool-866a0405-ftch + priority: 2000000000 + priorityClassName: system-cluster-critical + restartPolicy: Always + schedulerName: default-scheduler + securityContext: {} + serviceAccount: kube-dns + serviceAccountName: kube-dns + terminationGracePeriodSeconds: "30" + tolerations: + - key: CriticalAddonsOnly + operator: Exists + - effect: NoExecute + key: node.kubernetes.io/not-ready + operator: Exists + tolerationSeconds: "300" + - effect: NoExecute + key: node.kubernetes.io/unreachable + operator: Exists + tolerationSeconds: "300" + volumes: + - name: kube-dns-config + volumeSource: + configMap: + defaultMode: 420 + localObjectReference: + name: kube-dns + optional: true + - name: kube-dns-token-lwn8l + volumeSource: + secret: + defaultMode: 420 + secretName: kube-dns-token-lwn8l +status: + conditions: + - lastProbeTime: null + lastTransitionTime: "2018-12-03T17:00:00Z" + status: "True" + type: Initialized + - lastProbeTime: null + lastTransitionTime: "2018-12-03T17:00:20Z" + status: "True" + type: Ready + - lastProbeTime: null + lastTransitionTime: null + status: "True" + type: ContainersReady + - lastProbeTime: 
null + lastTransitionTime: "2018-12-03T16:59:57Z" + status: "True" + type: PodScheduled + containerStatuses: + - containerID: docker://676f6c98bfa136315c4ccf0fe40e7a56cbf9ac85128e94310eae82f191246b3e + image: k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13 + imageID: docker-pullable://k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64@sha256:45df3e8e0c551bd0c79cdba48ae6677f817971dcbd1eeed7fd1f9a35118410e4 + lastState: {} + name: dnsmasq + ready: true + state: + running: + startedAt: "2018-12-03T17:00:14Z" + - containerID: docker://93fd0664e150982dad0481c5260183308a7035a2f938ec50509d586ed586a107 + image: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13 + imageID: docker-pullable://k8s.gcr.io/k8s-dns-kube-dns-amd64@sha256:618a82fa66cf0c75e4753369a6999032372be7308866fc9afb381789b1e5ad52 + lastState: {} + name: kubedns + ready: true + state: + running: + startedAt: "2018-12-03T17:00:10Z" + - containerID: docker://e823b79a0a48af75f2eebb1c89ba4c31e8c1ee67ee0d917ac7b4891b67d2cd0f + image: gcr.io/google-containers/prometheus-to-sd:v0.2.3 + imageID: docker-pullable://gcr.io/google-containers/prometheus-to-sd@sha256:be220ec4a66275442f11d420033c106bb3502a3217a99c806eef3cf9858788a2 + lastState: {} + name: prometheus-to-sd + ready: true + state: + running: + startedAt: "2018-12-03T17:00:18Z" + - containerID: docker://74223c401a8dac04b8bd29cdfedcb216881791b4e84bb80a15714991dd18735e + image: k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13 + imageID: docker-pullable://k8s.gcr.io/k8s-dns-sidecar-amd64@sha256:cedc8fe2098dffc26d17f64061296b7aa54258a31513b6c52df271a98bb522b3 + lastState: {} + name: sidecar + ready: true + state: + running: + startedAt: "2018-12-03T17:00:16Z" + hostIP: 10.128.0.5 + phase: Running + podIP: 10.40.1.4 + qosClass: Burstable + startTime: "2018-12-03T17:00:00Z" diff --git a/galley/pkg/testing/testdata/dataset/v1/pod_expected.json b/galley/pkg/testing/testdata/dataset/v1/pod_expected.json new file mode 100644 index 000000000000..5e5f3822ca99 --- /dev/null +++ 
b/galley/pkg/testing/testdata/dataset/v1/pod_expected.json @@ -0,0 +1,282 @@ +{ + "k8s/core/v1/pods": [ + { + "TypeURL": "type.googleapis.com/k8s.io.api.core.v1.PodSpec", + "Metadata": { + "name": "kube-dns-548976df6c-d9kkv", + "annotations": { + "scheduler.alpha.kubernetes.io/critical-pod": "", + "seccomp.security.alpha.kubernetes.io/pod": "docker/default" + }, + "labels": { + "k8s-app": "kube-dns", + "pod-template-hash": "1045328927" + } + }, + "Body": { + "containers": [ + { + "args": [ + "--domain=cluster.local.", + "--dns-port=10053", + "--config-dir=/kube-dns-config", + "--v=2" + ], + "env": [ + { + "name": "PROMETHEUS_PORT", + "value": "10055" + } + ], + "image": "k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.13", + "imagePullPolicy": "IfNotPresent", + "livenessProbe": { + "failureThreshold": 5, + "httpGet": { + "path": "/healthcheck/kubedns", + "port": 10054, + "scheme": "HTTP" + }, + "initialDelaySeconds": 60, + "periodSeconds": 10, + "successThreshold": 1, + "timeoutSeconds": 5 + }, + "name": "kubedns", + "ports": [ + { + "containerPort": 10053, + "name": "dns-local", + "protocol": "UDP" + }, + { + "containerPort": 10053, + "name": "dns-tcp-local", + "protocol": "TCP" + }, + { + "containerPort": 10055, + "name": "metrics", + "protocol": "TCP" + } + ], + "readinessProbe": { + "failureThreshold": 3, + "httpGet": { + "path": "/readiness", + "port": 8081, + "scheme": "HTTP" + }, + "initialDelaySeconds": 3, + "periodSeconds": 10, + "successThreshold": 1, + "timeoutSeconds": 5 + }, + "resources": {}, + "terminationMessagePath": "/dev/termination-log", + "terminationMessagePolicy": "File", + "volumeMounts": [ + { + "mountPath": "/kube-dns-config", + "name": "kube-dns-config" + }, + { + "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", + "name": "kube-dns-token-lwn8l", + "readOnly": true + } + ] + }, + { + "args": [ + "-v=2", + "-logtostderr", + "-configDir=/etc/k8s/dns/dnsmasq-nanny", + "-restartDnsmasq=true", + "--", + "-k", + "--cache-size=1000", + 
"--no-negcache", + "--log-facility=-", + "--server=/cluster.local/127.0.0.1#10053", + "--server=/in-addr.arpa/127.0.0.1#10053", + "--server=/ip6.arpa/127.0.0.1#10053" + ], + "image": "k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.13", + "imagePullPolicy": "IfNotPresent", + "livenessProbe": { + "failureThreshold": 5, + "httpGet": { + "path": "/healthcheck/dnsmasq", + "port": 10054, + "scheme": "HTTP" + }, + "initialDelaySeconds": 60, + "periodSeconds": 10, + "successThreshold": 1, + "timeoutSeconds": 5 + }, + "name": "dnsmasq", + "ports": [ + { + "containerPort": 53, + "name": "dns", + "protocol": "UDP" + }, + { + "containerPort": 53, + "name": "dns-tcp", + "protocol": "TCP" + } + ], + "resources": {}, + "terminationMessagePath": "/dev/termination-log", + "terminationMessagePolicy": "File", + "volumeMounts": [ + { + "mountPath": "/etc/k8s/dns/dnsmasq-nanny", + "name": "kube-dns-config" + }, + { + "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", + "name": "kube-dns-token-lwn8l", + "readOnly": true + } + ] + }, + { + "args": [ + "--v=2", + "--logtostderr", + "--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV", + "--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV" + ], + "image": "k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.13", + "imagePullPolicy": "IfNotPresent", + "livenessProbe": { + "failureThreshold": 5, + "httpGet": { + "path": "/metrics", + "port": 10054, + "scheme": "HTTP" + }, + "initialDelaySeconds": 60, + "periodSeconds": 10, + "successThreshold": 1, + "timeoutSeconds": 5 + }, + "name": "sidecar", + "ports": [ + { + "containerPort": 10054, + "name": "metrics", + "protocol": "TCP" + } + ], + "resources": {}, + "terminationMessagePath": "/dev/termination-log", + "terminationMessagePolicy": "File", + "volumeMounts": [ + { + "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", + "name": "kube-dns-token-lwn8l", + "readOnly": true + } + ] + }, + { + "command": [ + "/monitor", + "--component=kubedns", + 
"--target-port=10054", + "--stackdriver-prefix=container.googleapis.com/internal/addons", + "--api-override=https://monitoring.googleapis.com/", + "--whitelisted-metrics=probe_kubedns_latency_ms,probe_kubedns_errors,dnsmasq_misses,dnsmasq_hits", + "--pod-id=$(POD_NAME)", + "--namespace-id=$(POD_NAMESPACE)", + "--v=2" + ], + "env": [ + { + "name": "POD_NAME", + "valueFrom": { + "fieldRef": { + "apiVersion": "v1", + "fieldPath": "metadata.name" + } + } + }, + { + "name": "POD_NAMESPACE", + "valueFrom": { + "fieldRef": { + "apiVersion": "v1", + "fieldPath": "metadata.namespace" + } + } + } + ], + "image": "gcr.io/google-containers/prometheus-to-sd:v0.2.3", + "imagePullPolicy": "IfNotPresent", + "name": "prometheus-to-sd", + "resources": {}, + "terminationMessagePath": "/dev/termination-log", + "terminationMessagePolicy": "File", + "volumeMounts": [ + { + "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount", + "name": "kube-dns-token-lwn8l", + "readOnly": true + } + ] + } + ], + "dnsPolicy": "Default", + "nodeName": "gke-istio-test-default-pool-866a0405-ftch", + "priority": 2000000000, + "priorityClassName": "system-cluster-critical", + "restartPolicy": "Always", + "schedulerName": "default-scheduler", + "securityContext": {}, + "serviceAccount": "kube-dns", + "serviceAccountName": "kube-dns", + "terminationGracePeriodSeconds": 30, + "tolerations": [ + { + "key": "CriticalAddonsOnly", + "operator": "Exists" + }, + { + "effect": "NoExecute", + "key": "node.kubernetes.io/not-ready", + "operator": "Exists", + "tolerationSeconds": 300 + }, + { + "effect": "NoExecute", + "key": "node.kubernetes.io/unreachable", + "operator": "Exists", + "tolerationSeconds": 300 + } + ], + "volumes": [ + { + "configMap": { + "defaultMode": 420, + "name": "kube-dns", + "optional": true + }, + "name": "kube-dns-config" + }, + { + "name": "kube-dns-token-lwn8l", + "secret": { + "defaultMode": 420, + "secretName": "kube-dns-token-lwn8l" + } + } + ] + } + } + ] +} diff --git 
a/galley/tools/gen-meta/main.go b/galley/tools/gen-meta/main.go index 7eb3d115bbfe..2a56ed214bfe 100644 --- a/galley/tools/gen-meta/main.go +++ b/galley/tools/gen-meta/main.go @@ -38,9 +38,9 @@ var knownProtoTypes = map[string]struct{}{ // metadata is a combination of read and derived metadata. type metadata struct { - Resources []*entry `json:"resources"` - ProtoGoPackages []string `json:"-"` - ProtoDefs []protoDef `json:"-"` + Resources []*entry `json:"resources"` + ProtoGoPackages []string `json:"-"` + CollectionDefs []collectionDef `json:"-"` } // entry in a metadata file @@ -54,12 +54,15 @@ type entry struct { Proto string `json:"proto"` Converter string `json:"converter"` ProtoGoPackage string `json:"protoPackage"` + Collection string `json:"collection"` } -// proto related metadata -type protoDef struct { +// collection related metadata +type collectionDef struct { FullName string `json:"-"` MessageName string `json:"-"` + Collection string `json:"-"` + Kind string `json:"-"` } func main() { @@ -161,28 +164,36 @@ func readMetadata(path string) (*metadata, error) { // Calculate the proto types that needs to be handled. // First, single instance the proto definitions. 
- protoDefs := make(map[string]protoDef) + collectionDefs := make(map[string]collectionDef) for _, e := range m.Resources { if _, found := knownProtoTypes[e.Proto]; e.Proto == "" || found { continue } parts := strings.Split(e.Proto, ".") msgName := parts[len(parts)-1] - defn := protoDef{MessageName: msgName, FullName: e.Proto} + defn := collectionDef{ + MessageName: msgName, + FullName: e.Proto, + Collection: e.Collection, + Kind: strings.Title(e.Kind), + } - if prevDefn, ok := protoDefs[e.Proto]; ok && defn != prevDefn { + if prevDefn, ok := collectionDefs[e.Collection]; ok && defn != prevDefn { return nil, fmt.Errorf("proto definitions do not match: %+v != %+v", defn, prevDefn) } - protoDefs[e.Proto] = defn + collectionDefs[e.Collection] = defn } - for _, v := range protoDefs { - m.ProtoDefs = append(m.ProtoDefs, v) + for _, v := range collectionDefs { + m.CollectionDefs = append(m.CollectionDefs, v) } - // Then, stable sort based on message name. - sort.Slice(m.ProtoDefs, func(i, j int) bool { - return strings.Compare(m.ProtoDefs[i].MessageName, m.ProtoDefs[j].MessageName) < 0 + // Then, stable sort based on collection name. + sort.Slice(m.CollectionDefs, func(i, j int) bool { + return strings.Compare(m.CollectionDefs[i].Collection, m.CollectionDefs[j].Collection) < 0 + }) + sort.Slice(m.Resources, func(i, j int) bool { + return strings.Compare(m.Resources[i].Collection, m.Resources[j].Collection) < 0 }) return &m, nil @@ -199,8 +210,8 @@ package metadata import ( // Pull in all the known proto types to ensure we get their types registered. 
-{{range .ProtoGoPackages}} - // Register protos in {{.}}"" +{{range .ProtoGoPackages}} + // Register protos in "{{.}}" _ "{{.}}" {{end}} @@ -211,15 +222,18 @@ import ( var Types *resource.Schema var ( - {{range .ProtoDefs}} - // {{.MessageName}} metadata - {{.MessageName}} resource.Info + {{range .CollectionDefs}} + // {{.Kind}} metadata + {{.Kind}} resource.Info {{end}} ) func init() { b := resource.NewSchemaBuilder() -{{range .ProtoDefs}} {{.MessageName}} = b.Register("type.googleapis.com/{{.FullName}}") + +{{range .CollectionDefs}} {{.Kind}} = b.Register( + "{{.Collection}}", + "type.googleapis.com/{{.FullName}}") {{end}} Types = b.Build() } @@ -234,25 +248,25 @@ const kubeTemplate = ` package kube import ( - "istio.io/istio/galley/pkg/kube" - "istio.io/istio/galley/pkg/kube/converter" + "istio.io/istio/galley/pkg/source/kube/dynamic/converter" + "istio.io/istio/galley/pkg/source/kube/schema" "istio.io/istio/galley/pkg/metadata" ) // Types in the schema. -var Types *kube.Schema +var Types *schema.Instance func init() { - b := kube.NewSchemaBuilder() + b := schema.NewBuilder() {{range .Resources}} - b.Add(kube.ResourceSpec{ + b.Add(schema.ResourceSpec{ Kind: "{{.Kind}}", ListKind: "{{.ListKind}}", Singular: "{{.Singular}}", Plural: "{{.Plural}}", Version: "{{.Version}}", Group: "{{.Group}}", - Target: metadata.Types.Get("type.googleapis.com/{{.Proto}}"), + Target: metadata.Types.Get("{{.Collection}}"), Converter: converter.Get("{{ if .Converter }}{{.Converter}}{{ else }}identity{{end}}"), }) {{end}} diff --git a/galley/tools/gen-meta/metadata.yaml b/galley/tools/gen-meta/metadata.yaml index 021a8b2e5faf..9ece5256be13 100644 --- a/galley/tools/gen-meta/metadata.yaml +++ b/galley/tools/gen-meta/metadata.yaml @@ -10,6 +10,7 @@ resources: proto: "k8s.io.api.extensions.v1beta1.IngressSpec" protoPackage: "k8s.io/api/extensions/v1beta1" converter: "kube-ingress-resource" + collection: "k8s/extensions/v1beta1/ingresses" - kind: "Service" singular: "service" @@ -18,6 
+19,23 @@ resources: proto: "k8s.io.api.core.v1.ServiceSpec" protoPackage: "k8s.io/api/core/v1" converter: "kube-service-resource" + collection: "k8s/core/v1/services" + + - kind: "Node" + singular: "node" + plural: "nodes" + version: "v1" + proto: "k8s.io.api.core.v1.NodeSpec" + protoPackage: "k8s.io/api/core/v1" + collection: "k8s/core/v1/nodes" + + - kind: "Pod" + singular: "pod" + plural: "pods" + version: "v1" + proto: "k8s.io.api.core.v1.PodSpec" + protoPackage: "k8s.io/api/core/v1" + collection: "k8s/core/v1/pods" - kind: "VirtualService" singular: "virtualservice" @@ -25,6 +43,7 @@ resources: group: "networking.istio.io" version: "v1alpha3" proto: "istio.networking.v1alpha3.VirtualService" + collection: "istio/networking/v1alpha3/virtualservices" - kind: "Gateway" singular: "gateway" @@ -32,6 +51,7 @@ resources: group: "networking.istio.io" version: "v1alpha3" proto: "istio.networking.v1alpha3.Gateway" + collection: "istio/networking/v1alpha3/gateways" - kind: "ServiceEntry" singular: "serviceentry" @@ -39,6 +59,7 @@ resources: group: "networking.istio.io" version: "v1alpha3" proto: "istio.networking.v1alpha3.ServiceEntry" + collection: "istio/networking/v1alpha3/serviceentries" - kind: "DestinationRule" singular: "destinationrule" @@ -46,6 +67,7 @@ resources: group: "networking.istio.io" version: "v1alpha3" proto: "istio.networking.v1alpha3.DestinationRule" + collection: "istio/networking/v1alpha3/destinationrules" - kind: "EnvoyFilter" singular: "envoyfilter" @@ -53,6 +75,15 @@ resources: group: "networking.istio.io" version: "v1alpha3" proto: "istio.networking.v1alpha3.EnvoyFilter" + collection: "istio/networking/v1alpha3/envoyfilters" + + - kind: "Sidecar" + singular: "sidecar" + plural: "sidecars" + group: "networking.istio.io" + version: "v1alpha3" + proto: "istio.networking.v1alpha3.Sidecar" + collection: "istio/networking/v1alpha3/sidecars" - kind: "HTTPAPISpec" singular: "httpapispec" @@ -60,6 +91,7 @@ resources: group: "config.istio.io" version: 
"v1alpha2" proto: "istio.mixer.v1.config.client.HTTPAPISpec" + collection: "istio/config/v1alpha2/httpapispecs" - kind: "HTTPAPISpecBinding" singular: "httpapispecbinding" @@ -67,6 +99,7 @@ resources: group: "config.istio.io" version: "v1alpha2" proto: "istio.mixer.v1.config.client.HTTPAPISpecBinding" + collection: "istio/config/v1alpha2/httpapispecbindings" - kind: "QuotaSpec" singular: "quotaspec" @@ -74,6 +107,7 @@ resources: group: "config.istio.io" version: "v1alpha2" proto: "istio.mixer.v1.config.client.QuotaSpec" + collection: "istio/mixer/v1/config/client/quotaspecs" - kind: "QuotaSpecBinding" singular: "quotaspecbinding" @@ -81,6 +115,8 @@ resources: group: "config.istio.io" version: "v1alpha2" proto: "istio.mixer.v1.config.client.QuotaSpecBinding" + collection: "istio/mixer/v1/config/client/quotaspecbindings" + - kind: "Policy" singular: "policy" @@ -89,6 +125,7 @@ resources: version: "v1alpha1" proto: "istio.authentication.v1alpha1.Policy" converter: "auth-policy-resource" + collection: "istio/authentication/v1alpha1/policies" - kind: "MeshPolicy" clusterScoped: true @@ -98,6 +135,7 @@ resources: version: "v1alpha1" proto: "istio.authentication.v1alpha1.Policy" converter: "auth-policy-resource" + collection: "istio/authentication/v1alpha1/meshpolicies" - kind: "ServiceRole" singular: "servicerole" @@ -105,6 +143,7 @@ resources: group: "rbac.istio.io" version: "v1alpha1" proto: "istio.rbac.v1alpha1.ServiceRole" + collection: "istio/rbac/v1alpha1/serviceroles" - kind: "ServiceRoleBinding" clusterScoped: false @@ -113,6 +152,7 @@ resources: group: "rbac.istio.io" version: "v1alpha1" proto: "istio.rbac.v1alpha1.ServiceRoleBinding" + collection: "istio/rbac/v1alpha1/servicerolebindings" - kind: "RbacConfig" listKind: "RbacConfigList" @@ -121,6 +161,7 @@ resources: group: "rbac.istio.io" version: "v1alpha1" proto: "istio.rbac.v1alpha1.RbacConfig" + collection: "istio/rbac/v1alpha1/rbacconfigs" - kind: "ClusterRbacConfig" clusterScoped: true @@ -130,6 +171,7 @@ 
resources: group: "rbac.istio.io" version: "v1alpha1" proto: "istio.rbac.v1alpha1.RbacConfig" + collection: "istio/rbac/v1alpha1/clusterrbacconfigs" # Types from Mixer @@ -139,6 +181,7 @@ resources: group: "config.istio.io" version: "v1alpha2" proto: "istio.policy.v1beta1.Rule" + collection: "istio/policy/v1beta1/rules" - kind: "attributemanifest" singular: "attributemanifest" @@ -146,6 +189,7 @@ resources: group: "config.istio.io" version: "v1alpha2" proto: "istio.policy.v1beta1.AttributeManifest" + collection: "istio/policy/v1beta1/attributemanifests" - kind: "instance" singular: "instance" @@ -153,6 +197,7 @@ resources: group: "config.istio.io" version: "v1alpha2" proto: "istio.policy.v1beta1.Instance" + collection: "istio/policy/v1beta1/instances" - kind: "handler" singular: "handler" @@ -160,24 +205,25 @@ resources: group: "config.istio.io" version: "v1alpha2" proto: "istio.policy.v1beta1.Handler" + collection: "istio/policy/v1beta1/handlers" - kind: "template" singular: "template" plural: "templates" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/templates" - kind: "adapter" singular: "adapter" plural: "adapters" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/adapters" ## Mixer Adapter Types @@ -186,249 +232,286 @@ resources: plural: "bypasses" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: 
"legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/bypasses" - kind: "circonus" singular: "circonus" plural: "circonuses" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/circonuses" - kind: "denier" singular: "denier" plural: "deniers" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/deniers" + - kind: "fluentd" singular: "fluentd" plural: "fluentds" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/fluentds" - kind: "kubernetesenv" singular: "kubernetesenv" plural: "kubernetesenvs" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/kubernetesenvs" - kind: "listchecker" singular: 
"listchecker" plural: "listcheckers" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/listcheckers" - kind: "memquota" singular: "memquota" plural: "memquotas" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/memquotas" - kind: "noop" singular: "noop" plural: "noops" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/noops" - kind: "opa" singular: "opa" plural: "opas" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/opas" - kind: "prometheus" singular: "prometheus" plural: "prometheuses" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: 
"github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/prometheuses" - kind: "redisquota" singular: "redisquota" plural: "redisquotas" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/redisquotas" - kind: "servicecontrol" singular: "servicecontrol" plural: "servicecontrols" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/servicecontrols" - kind: "signalfx" group: "config.istio.io" plural: "signalfxs" singular: "signalfx" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/signalfxs" - kind: "solarwinds" singular: "solarwinds" plural: "solarwindses" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/solarwindses" - kind: "stackdriver" singular: "stackdriver" plural: "stackdrivers" group: "config.istio.io" version: "v1alpha2" - proto: 
"istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/stackdrivers" - kind: "statsd" singular: "statsd" plural: "statsds" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/statsds" - kind: "stdio" singular: "stdio" plural: "stdios" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/stdios" - kind: "apikey" singular: "apikey" plural: "apikeys" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/apikeys" - kind: "authorization" singular: "authorization" plural: "authorizations" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: 
"istio/config/v1alpha2/legacy/authorizations" - kind: "checknothing" singular: "checknothing" plural: "checknothings" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/checknothings" - kind: "kubernetes" singular: "kubernetes" plural: "kuberneteses" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/kuberneteses" - kind: "listentry" singular: "listentry" plural: "listentries" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/listentries" - kind: "logentry" singular: "logentry" plural: "logentries" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/logentries" - kind: "metric" singular: "metric" plural: "metrics" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - 
protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/metrics" - kind: "quota" singular: "quota" plural: "quotas" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/quotas" - kind: "reportnothing" singular: "reportnothing" plural: "reportnothings" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/reportnothings" - kind: "servicecontrolreport" singular: "servicecontrolreport" plural: "servicecontrolreports" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/servicecontrolreports" - kind: "tracespan" singular: "tracespan" plural: "tracespans" group: "config.istio.io" version: "v1alpha2" - proto: "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - converter: "legacy-mixer-resource" - protoPackage: "istio.io/istio/galley/pkg/kube/converter/legacy" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/tracespans" + + - kind: 
"rbac" + singular: "rbac" + plural: "rbacs" + group: "config.istio.io" + version: "v1alpha2" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/rbacs" + + - kind: "cloudwatch" + singular: "cloudwatch" + plural: "cloudwatches" + group: "config.istio.io" + version: "v1alpha2" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/cloudwatches" + + - kind: "dogstatsd" + singular: "dogstatsd" + plural: "dogstatsds" + group: "config.istio.io" + version: "v1alpha2" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/dogstatsds" + + - kind: "edge" + singular: "edge" + plural: "edges" + group: "config.istio.io" + version: "v1alpha2" + proto: "type.googleapis.com/google.protobuf.Struct" + protoPackage: "github.com/gogo/protobuf/types" + collection: "istio/config/v1alpha2/legacy/edges" diff --git a/galley/tools/mcpc/main.go b/galley/tools/mcpc/main.go index 03f2622caafd..781957c3f29c 100644 --- a/galley/tools/mcpc/main.go +++ b/galley/tools/mcpc/main.go @@ -26,15 +26,16 @@ import ( mcp "istio.io/api/mcp/v1alpha1" "istio.io/istio/pkg/mcp/client" + "istio.io/istio/pkg/mcp/sink" // Import the resource package to pull in all proto types. - _ "istio.io/istio/galley/pkg/kube/converter/legacy" _ "istio.io/istio/galley/pkg/metadata" + "istio.io/istio/pkg/mcp/testing/monitoring" ) var ( serverAddr = flag.String("server", "127.0.0.1:9901", "The server address") - types = flag.String("types", "", "The fully qualified type URLs of resources to deploy") + collection = flag.String("collection", "", "The collection of resources to deploy") id = flag.String("id", "", "The node id for the client") ) @@ -42,11 +43,11 @@ type updater struct { } // Update interface method implementation. 
-func (u *updater) Apply(ch *client.Change) error { - fmt.Printf("Incoming change: %v\n", ch.TypeURL) +func (u *updater) Apply(ch *sink.Change) error { + fmt.Printf("Incoming change: %v\n", ch.Collection) for i, o := range ch.Objects { - fmt.Printf("%s[%d]\n", ch.TypeURL, i) + fmt.Printf("%s[%d]\n", ch.Collection, i) b, err := json.MarshalIndent(o, " ", " ") if err != nil { @@ -63,7 +64,7 @@ func (u *updater) Apply(ch *client.Change) error { func main() { flag.Parse() - typeNames := strings.Split(*types, ",") + collections := strings.Split(*collection, ",") u := &updater{} @@ -75,6 +76,12 @@ func main() { cl := mcp.NewAggregatedMeshConfigServiceClient(conn) - c := client.New(cl, typeNames, u, *id, map[string]string{}, client.NewStatsContext("mcpc")) + options := &sink.Options{ + CollectionOptions: sink.CollectionOptionsFromSlice(collections), + Updater: u, + ID: *id, + Reporter: monitoring.NewInMemoryStatsContext(), + } + c := client.New(cl, options) c.Run(context.Background()) } diff --git a/install/gcp/deployment_manager/istio-cluster.jinja b/install/gcp/deployment_manager/istio-cluster.jinja index 78f60cf8db11..8ae1e4062c68 100644 --- a/install/gcp/deployment_manager/istio-cluster.jinja +++ b/install/gcp/deployment_manager/istio-cluster.jinja @@ -16,6 +16,7 @@ resources: oauthScopes: - https://www.googleapis.com/auth/logging.write - https://www.googleapis.com/auth/monitoring + - https://www.googleapis.com/auth/devstorage.read_only - type: runtimeconfig.v1beta1.config name: {{ CLUSTER_NAME }}-config diff --git a/install/kubernetes/helm/istio-init/README.md b/install/kubernetes/helm/istio-init/README.md index a3a8269a6e06..46d32a07cd90 100644 --- a/install/kubernetes/helm/istio-init/README.md +++ b/install/kubernetes/helm/istio-init/README.md @@ -33,9 +33,11 @@ The chart deploys pods that consume minimal resources. 1. 
Install the Istio initializer chart: ``` - $ helm install install/kubernetes/helm/istio-init --name istio-init + $ helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system ``` + > Although you can install the `istio-init` chart into any namespace, it is recommended to install `istio-init` in the same namespace (`istio-system`) as the other Istio charts. + ## Configuration The Helm chart ships with reasonable defaults. There may be circumstances in which defaults require overrides. diff --git a/install/kubernetes/helm/istio-init/files/crd-10.yaml b/install/kubernetes/helm/istio-init/files/crd-10.yaml index f728bd8ece7a..0ea8f228589e 100644 --- a/install/kubernetes/helm/istio-init/files/crd-10.yaml +++ b/install/kubernetes/helm/istio-init/files/crd-10.yaml @@ -87,6 +87,27 @@ spec: --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition +metadata: + name: sidecars.networking.istio.io + labels: + app: istio-pilot + chart: istio + heritage: Tiller + release: istio +spec: + group: networking.istio.io + names: + kind: Sidecar + plural: sidecars + singular: sidecar + categories: + - istio-io + - networking-istio-io + scope: Namespaced + version: v1alpha3 +--- +apiVersion: apiextensions.k8s.io/v1beta1 +kind: CustomResourceDefinition metadata: name: envoyfilters.networking.istio.io labels: diff --git a/install/kubernetes/helm/istio-init/templates/job-crd-10.yaml b/install/kubernetes/helm/istio-init/templates/job-crd-10.yaml index bd1ff1271124..dd1fe0cf37b8 100644 --- a/install/kubernetes/helm/istio-init/templates/job-crd-10.yaml +++ b/install/kubernetes/helm/istio-init/templates/job-crd-10.yaml @@ -1,6 +1,7 @@ apiVersion: batch/v1 kind: Job metadata: + namespace: {{ .Release.Namespace }} name: istio-init-crd-10 spec: template: diff --git a/install/kubernetes/helm/istio-init/templates/job-crd-11.yaml b/install/kubernetes/helm/istio-init/templates/job-crd-11.yaml index 9332138da76a..1040deb0465a 100644 ---
a/install/kubernetes/helm/istio-init/templates/job-crd-11.yaml +++ b/install/kubernetes/helm/istio-init/templates/job-crd-11.yaml @@ -1,6 +1,7 @@ apiVersion: batch/v1 kind: Job metadata: + namespace: {{ .Release.Namespace }} name: istio-init-crd-11 spec: template: diff --git a/install/kubernetes/helm/istio-init/templates/job-crd-certmanager-10.yaml b/install/kubernetes/helm/istio-init/templates/job-crd-certmanager-10.yaml index d435d8965651..803ef1f10609 100644 --- a/install/kubernetes/helm/istio-init/templates/job-crd-certmanager-10.yaml +++ b/install/kubernetes/helm/istio-init/templates/job-crd-certmanager-10.yaml @@ -1,6 +1,7 @@ apiVersion: batch/v1 kind: Job metadata: + namespace: {{ .Release.Namespace }} name: istio-init-crd-certmanager-10 spec: template: diff --git a/install/kubernetes/helm/istio-remote/templates/sidecar-injector-configmap.yaml b/install/kubernetes/helm/istio-remote/templates/sidecar-injector-configmap.yaml index 6d81c7b02a03..569a9456a803 100644 --- a/install/kubernetes/helm/istio-remote/templates/sidecar-injector-configmap.yaml +++ b/install/kubernetes/helm/istio-remote/templates/sidecar-injector-configmap.yaml @@ -84,10 +84,8 @@ data: args: - proxy - sidecar -{{- if .Values.global.proxy.proxyDomain }} - --domain - - {{ .Values.global.proxy.proxyDomain }} -{{- end }} + - $(POD_NAMESPACE).svc.{{ .Values.global.proxy.clusterDomain }} - --configPath - {{ "[[ .ProxyConfig.ConfigPath ]]" }} - --binaryPath diff --git a/install/kubernetes/helm/istio-remote/values.yaml b/install/kubernetes/helm/istio-remote/values.yaml index 0cfc0e7f9f50..276ed55f5e6b 100644 --- a/install/kubernetes/helm/istio-remote/values.yaml +++ b/install/kubernetes/helm/istio-remote/values.yaml @@ -51,11 +51,8 @@ global: proxy: image: proxyv2 - # DNS domain suffix for pilot proxy agent. Default value is "${POD_NAMESPACE}.svc.cluster.local". - proxyDomain: "" - - # DNS domain suffix for pilot proxy discovery. Default value is "cluster.local". 
- discoveryDomain: "" + # specify cluster domain. Default value is "cluster.local". + clusterDomain: "cluster.local" # Resources for the sidecar. resources: @@ -240,13 +237,6 @@ global: # for more detail. priorityClassName: "" - # Include the crd definition when generating the template. - # For 'helm template' and helm install > 2.10 it should be true. - # For helm < 2.9, crds must be installed ahead of time with - # 'kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml - # and this options must be set off. - crds: true - # Use the Mesh Control Protocol (MCP) for configuring Mixer and # Pilot. Requires galley (`--set galley.enabled=true`). useMCP: true diff --git a/install/kubernetes/helm/istio/README.md b/install/kubernetes/helm/istio/README.md index 059d289972c5..ffb881b9637c 100644 --- a/install/kubernetes/helm/istio/README.md +++ b/install/kubernetes/helm/istio/README.md @@ -30,6 +30,7 @@ To enable or disable each component, change the corresponding `enabled` flag. - Kubernetes 1.9 or newer cluster with RBAC (Role-Based Access Control) enabled is required - Helm 2.7.2 or newer or alternately the ability to modify RBAC rules is also required - If you want to enable automatic sidecar injection, Kubernetes 1.9+ with `admissionregistration` API is required, and `kube-apiserver` process must have the `admission-control` flag set with the `MutatingAdmissionWebhook` and `ValidatingAdmissionWebhook` admission controllers added and listed in the correct order. +- The `istio-init` chart must be run to completion prior to installing the `istio` chart. ## Resources Required @@ -53,17 +54,6 @@ The chart deploys pods that consume minimum resources as specified in the resour $ kubectl create ns $NAMESPACE ``` -1.
If using a Helm version prior to 2.10.0, install Istio’s [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) via `kubectl apply`, and wait a few seconds for the CRDs to be committed in the kube-apiserver: - ``` - $ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml - ``` - > If you are enabling `certmanager`, you also need to install its CRDs and wait a few seconds for the CRDs to be committed in the kube-apiserver: - ``` - $ kubectl apply -f install/kubernetes/helm/istio/charts/certmanager/templates/crds.yaml - ``` - - > Helm version 2.10.0 supports a way to register CRDs via an internal feature called `crd-install`. This feature does not exist in prior versions of Helm. - 1. If you are enabling `kiali`, you need to create the secret that contains the username and passphrase for `kiali` dashboard: ``` $ echo -n 'admin' | base64 diff --git a/install/kubernetes/helm/istio/templates/configmap.yaml b/install/kubernetes/helm/istio/templates/configmap.yaml index 27b8b70948f3..e8cb76f894b3 100644 --- a/install/kubernetes/helm/istio/templates/configmap.yaml +++ b/install/kubernetes/helm/istio/templates/configmap.yaml @@ -35,11 +35,11 @@ data: # Deprecated: mixer is using EDS {{- if or .Values.mixer.policy.enabled .Values.mixer.telemetry.enabled }} {{- if .Values.global.controlPlaneSecurityEnabled }} - mixerCheckServer: istio-policy.{{ .Release.Namespace }}.svc.cluster.local:15004 - mixerReportServer: istio-telemetry.{{ .Release.Namespace }}.svc.cluster.local:15004 + mixerCheckServer: istio-policy.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}:15004 + mixerReportServer: istio-telemetry.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}:15004 {{- else }} - mixerCheckServer: istio-policy.{{ .Release.Namespace }}.svc.cluster.local:9091 - mixerReportServer: istio-telemetry.{{ .Release.Namespace }}.svc.cluster.local:9091 + 
mixerCheckServer: istio-policy.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}:9091 + mixerReportServer: istio-telemetry.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}:9091 {{- end }} # policyCheckFailOpen allows traffic in cases when the mixer policy service cannot be reached. @@ -77,7 +77,14 @@ data: # Refer to https://github.com/spiffe/spiffe/blob/master/standards/SPIFFE-ID.md#21-trust-domain trustDomain: {{ .Values.global.trustDomain }} - # + # Set the default behavior of the sidecar for handling outbound traffic from the application: + # REGISTRY_ONLY - restrict outbound traffic to services defined in the service registry as well + # as those defined through ServiceEntries + # ALLOW_ANY - outbound traffic to unknown destinations will be allowed, in case there are no + # services or ServiceEntries for the destination port + outboundTrafficPolicy: + mode: {{ .Values.global.outboundTrafficPolicy.mode }} + defaultConfig: # # TCP connection timeout between Envoy & the application, and between Envoys. 
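For reference, the ConfigMap template above renders a mesh config fragment along these lines. This is a sketch only, assuming illustrative default values: release namespace `istio-system`, `clusterDomain` left at `cluster.local`, `controlPlaneSecurityEnabled: false` (so the `:9091` branch is taken), and `outboundTrafficPolicy.mode` set to `ALLOW_ANY`:

```yaml
# Hypothetical rendered fragment of the istio ConfigMap (mesh config),
# assuming namespace istio-system, clusterDomain cluster.local,
# control-plane security disabled, and mode ALLOW_ANY.
mixerCheckServer: istio-policy.istio-system.svc.cluster.local:9091
mixerReportServer: istio-telemetry.istio-system.svc.cluster.local:9091
outboundTrafficPolicy:
  mode: ALLOW_ANY
```

With `REGISTRY_ONLY` instead, sidecars would reject outbound traffic to destinations not present in the service registry or declared via a ServiceEntry.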
diff --git a/install/kubernetes/helm/istio/templates/sidecar-injector-configmap.yaml b/install/kubernetes/helm/istio/templates/sidecar-injector-configmap.yaml index 16ea53a7f7bf..64274758296a 100644 --- a/install/kubernetes/helm/istio/templates/sidecar-injector-configmap.yaml +++ b/install/kubernetes/helm/istio/templates/sidecar-injector-configmap.yaml @@ -88,10 +88,8 @@ data: args: - proxy - sidecar -{{- if .Values.global.proxy.proxyDomain }} - --domain - - {{ .Values.global.proxy.proxyDomain }} -{{- end }} + - $(POD_NAMESPACE).svc.{{ .Values.global.proxy.clusterDomain }} - --configPath - {{ "[[ .ProxyConfig.ConfigPath ]]" }} - --binaryPath diff --git a/install/kubernetes/helm/istio/values-istio-example-sds-vault.yaml b/install/kubernetes/helm/istio/values-istio-example-sds-vault.yaml new file mode 100644 index 000000000000..6e88206646f0 --- /dev/null +++ b/install/kubernetes/helm/istio/values-istio-example-sds-vault.yaml @@ -0,0 +1,31 @@ +global: + controlPlaneSecurityEnabled: false + + mtls: + # Default setting for service-to-service mtls. Can be set explicitly using + # destination rules or service annotations. + enabled: true + + # Default is 10 seconds + refreshInterval: 1s + + sds: + enabled: true + udsPath: "unix:/var/run/sds/uds_path" + useNormalJwt: true + +nodeagent: + enabled: true + image: node-agent-k8s + env: + # https://35.233.249.249:8200 is the IP address and the port number + # of a testing Vault server. + CA_ADDR: "https://35.233.249.249:8200" + CA_PROVIDER: "VaultCA" + # https://35.233.249.249:8200 is the IP address and the port number + # of a testing Vault server.
+ VAULT_ADDR: "https://35.233.249.249:8200" + VAULT_AUTH_PATH: "auth/kubernetes/login" + VAULT_ROLE: "istio-cert" + VAULT_SIGN_CSR_PATH: "istio_ca/sign/istio-pki-role" + VAULT_TLS_ROOT_CERT: '-----BEGIN CERTIFICATE-----\nMIIC3jCCAcagAwIBAgIRAIcSFH1jneS0XPz5r2QDbigwDQYJKoZIhvcNAQELBQAw\nEDEOMAwGA1UEChMFVmF1bHQwIBcNMTgxMjI2MDkwMDU3WhgPMjExODEyMDIwOTAw\nNTdaMBAxDjAMBgNVBAoTBVZhdWx0MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB\nCgKCAQEA2q5lfJCLAOTEjX3xV8qMLEX8zUQpd0AjD6zzOMzx51GVM7Plf7CJmaDq\nyloRz3zcrTEltHUrln5fvouvp4TetOlqEU979vvccnFLgXrSpn+Zt/EyjE0rUYY3\n5e2qxy9bP2E7zJSKONIT6zRDd2zUQGH3zUem1ZG0GFY1ZL5qFSOIy+PvuQ4u8HCa\n1CcnHmI613fVDbFbaxuF2G2MIwCZ/Fg6KBd9kgU7uCOvkbR4AtRe0ntwweIjOIas\nFiohPQzVY4obrYZiTV43HT4lGti7ySn2c96UnRSnmHLWyBb7cafd4WZN/t+OmYSd\nooxCVQ2Zqub6NlZ5OySYOz/0BJq6DQIDAQABozEwLzAOBgNVHQ8BAf8EBAMCBaAw\nDAYDVR0TAQH/BAIwADAPBgNVHREECDAGhwQj6fn5MA0GCSqGSIb3DQEBCwUAA4IB\nAQBORvUcW0wgg/Wo1aKFaZQuPPFVLjOZat0QpCJYNDhsSIO4Y0JS+Y1cEIkvXB3S\nQ3D7IfNP0gh1fhtP/d45LQSPqpyJF5vKWAvwa/LSPKpw2+Zys4oDahcH+SEKiQco\nIhkkHNEgC4LEKEaGvY4A8Cw7uWWquUJB16AapSSnkeD2vTcxErfCO59yR7yEWDa6\n8j6QNzmGNj2YXtT86+Mmedhfh65Rrh94mhAPQHBAdCNGCUwZ6zHPQ6Z1rj+x3Wm9\ngqpveVq2olloNbnLNmM3V6F9mqSZACgADmRqf42bixeHczkTfRDKThJcpY5U44vy\nw4Nm32yDWhD6AC68rDkXX68m\n-----END CERTIFICATE-----' \ No newline at end of file diff --git a/install/kubernetes/helm/istio/values-istio-gateways.yaml b/install/kubernetes/helm/istio/values-istio-gateways.yaml index a1d9458136b2..4e6237fe776e 100644 --- a/install/kubernetes/helm/istio/values-istio-gateways.yaml +++ b/install/kubernetes/helm/istio/values-istio-gateways.yaml @@ -1,12 +1,5 @@ # Common settings. global: - # Include the crd definition when generating the template. - # For 'helm template' and helm install > 2.10 it should be true. - # For helm < 2.9, crds must be installed ahead of time with - # 'kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml - # and this options must be set off. 
-  crds: false
-
   # Omit the istio-sidecar-injector configmap when generate a
   # standalone gateway. Gateways may be created in namespaces other
   # than `istio-system` and we don't want to re-create the injector
diff --git a/install/kubernetes/helm/istio/values-istio-googleca.yaml b/install/kubernetes/helm/istio/values-istio-googleca.yaml
new file mode 100644
index 000000000000..e0c633ea1ddc
--- /dev/null
+++ b/install/kubernetes/helm/istio/values-istio-googleca.yaml
@@ -0,0 +1,22 @@
+global:
+  controlPlaneSecurityEnabled: false
+
+  mtls:
+    # Default setting for service-to-service mtls. Can be set explicitly using
+    # destination rules or service annotations.
+    enabled: true
+
+  sds:
+    enabled: true
+    udsPath: "unix:/var/run/sds/uds_path"
+    useTrustworthyJwt: true
+
+  trustDomain: ""
+
+nodeagent:
+  enabled: true
+  image: node-agent-k8s
+  env:
+    CA_PROVIDER: "GoogleCA"
+    CA_ADDR: "istioca.googleapis.com:443"
+    Plugins: "GoogleTokenExchange"
diff --git a/install/kubernetes/helm/istio/values-istio-minimal.yaml b/install/kubernetes/helm/istio/values-istio-minimal.yaml
index adea612a30f5..19623e10eb07 100644
--- a/install/kubernetes/helm/istio/values-istio-minimal.yaml
+++ b/install/kubernetes/helm/istio/values-istio-minimal.yaml
@@ -29,12 +29,6 @@ prometheus:
 # Common settings.
 global:
-  # Include the crd definition when generating the template.
-  # For 'helm template' and helm install > 2.10 it should be true.
-  # For helm < 2.9, crds must be installed ahead of time with
-  # 'kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
-  # and this options must be set off.
-  crds: false
 
   proxy:
     # Sets the destination Statsd in envoy (the value of the "--statsdUdpAddress" proxy argument
diff --git a/install/kubernetes/helm/istio/values.yaml b/install/kubernetes/helm/istio/values.yaml
index e93c9ebfedec..91890f1dc28a 100644
--- a/install/kubernetes/helm/istio/values.yaml
+++ b/install/kubernetes/helm/istio/values.yaml
@@ -127,11 +127,8 @@ global:
   proxy:
     image: proxyv2
 
-    # DNS domain suffix for pilot proxy agent. Default value is "${POD_NAMESPACE}.svc.cluster.local".
-    proxyDomain: ""
-
-    # DNS domain suffix for pilot proxy discovery. Default value is "cluster.local".
-    discoveryDomain: ""
+    # cluster domain. Default value is "cluster.local".
+    clusterDomain: "cluster.local"
 
     # Resources for the sidecar.
     resources:
@@ -280,7 +277,7 @@ global:
   # If not set, controller watches all namespaces
   oneNamespace: false
 
-  # Default node selector to be applied to all deployments so that all pos can be
+  # Default node selector to be applied to all deployments so that all pods can be
   # constrained to run a particular nodes. Each component can overwrite these default
   # values by adding its node selector block in the relevant section below and setting
   # the desired values.
@@ -339,21 +336,26 @@ global:
   # for more detail.
   priorityClassName: ""
 
-  # Include the crd definition when generating the template.
-  # For 'helm template' and helm install > 2.10 it should be true.
-  # For helm < 2.9, crds must be installed ahead of time with
-  # 'kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
-  # and this options must be set off.
-  crds: true
-
   # Use the Mesh Control Protocol (MCP) for configuring Mixer and
   # Pilot. Requires galley (`--set galley.enabled=true`).
   useMCP: true
 
   # The trust domain corresponds to the trust root of a system
   # Refer to https://github.com/spiffe/spiffe/blob/master/standards/SPIFFE-ID.md#21-trust-domain
+  # Indicate the domain used in SPIFFE identity URL
+  # The default depends on the environment.
+  #   kubernetes: cluster.local
+  #   else: default dns domain
   trustDomain: ""
 
+  # Set the default behavior of the sidecar for handling outbound traffic from the application:
+  # REGISTRY_ONLY - restrict outbound traffic to services defined in the service registry as well
+  #   as those defined through ServiceEntries
+  # ALLOW_ANY - outbound traffic to unknown destinations will be allowed, in case there are no
+  #   services or ServiceEntries for the destination port
+  outboundTrafficPolicy:
+    mode: REGISTRY_ONLY
+
   sds:
     # SDS enabled. IF set to true, mTLS certificates for the sidecars will be
     # distributed through the SecretDiscoveryService instead of using K8S secrets to mount the certificates.
diff --git a/install/kubernetes/helm/subcharts/galley/templates/clusterrole.yaml b/install/kubernetes/helm/subcharts/galley/templates/clusterrole.yaml
index 8acc513280b8..2b93543a5cd3 100644
--- a/install/kubernetes/helm/subcharts/galley/templates/clusterrole.yaml
+++ b/install/kubernetes/helm/subcharts/galley/templates/clusterrole.yaml
@@ -27,6 +27,15 @@ rules:
   resources: ["deployments"]
   resourceNames: ["istio-galley"]
   verbs: ["get"]
+- apiGroups: ["*"]
+  resources: ["pods"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: ["*"]
+  resources: ["nodes"]
+  verbs: ["get", "list", "watch"]
+- apiGroups: ["*"]
+  resources: ["services"]
+  verbs: ["get", "list", "watch"]
 - apiGroups: ["*"]
   resources: ["endpoints"]
   verbs: ["get", "list", "watch"]
diff --git a/install/kubernetes/helm/subcharts/gateways/templates/deployment.yaml b/install/kubernetes/helm/subcharts/gateways/templates/deployment.yaml
index 1582d9abd13c..4f961fccb0dd 100644
--- a/install/kubernetes/helm/subcharts/gateways/templates/deployment.yaml
+++ b/install/kubernetes/helm/subcharts/gateways/templates/deployment.yaml
@@ -95,10 +95,8 @@ spec:
           args:
           - proxy
           - router
-{{- if $.Values.global.proxy.proxyDomain }}
           - --domain
-          - {{ $.Values.global.proxy.proxyDomain }}
-{{- end }}
+          - $(POD_NAMESPACE).svc.{{ $.Values.global.proxy.clusterDomain }}
           - --log_output_level
           - 'info'
           - --drainDuration
diff --git a/install/kubernetes/helm/subcharts/gateways/templates/preconfigured.yaml b/install/kubernetes/helm/subcharts/gateways/templates/preconfigured.yaml
index 0eb1eb6c00ff..bc32212d2fe5 100644
--- a/install/kubernetes/helm/subcharts/gateways/templates/preconfigured.yaml
+++ b/install/kubernetes/helm/subcharts/gateways/templates/preconfigured.yaml
@@ -165,7 +165,7 @@ spec:
       filterType: NETWORK
       filterConfig:
         cluster_pattern: "\\.global$"
-        cluster_replacement: ".svc.cluster.local"
+        cluster_replacement: ".svc.{{ .Values.global.proxy.clusterDomain }}"
 ---
 ## To ensure all traffic to *.global is using mTLS
 apiVersion: networking.istio.io/v1alpha3
diff --git a/install/kubernetes/helm/subcharts/grafana/templates/gateway.yaml b/install/kubernetes/helm/subcharts/grafana/templates/gateway.yaml
index 0f8e7576dd89..717476979b57 100644
--- a/install/kubernetes/helm/subcharts/grafana/templates/gateway.yaml
+++ b/install/kubernetes/helm/subcharts/grafana/templates/gateway.yaml
@@ -35,7 +35,7 @@ metadata:
     heritage: {{ .Release.Service }}
     release: {{ .Release.Name }}
 spec:
-  host: grafana.{{ .Release.Namespace }}.svc.cluster.local
+  host: grafana.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   trafficPolicy:
     tls:
       mode: DISABLE
@@ -60,7 +60,7 @@ spec:
   - port: 15031
     route:
     - destination:
-        host: grafana.{{ .Release.Namespace }}.svc.cluster.local
+        host: grafana.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
        port:
          number: {{ .Values.service.externalPort }}
 {{- end }}
diff --git a/install/kubernetes/helm/subcharts/ingress/templates/deployment.yaml b/install/kubernetes/helm/subcharts/ingress/templates/deployment.yaml
index 8fdc9b85250f..7a03bff0a1cb 100644
--- a/install/kubernetes/helm/subcharts/ingress/templates/deployment.yaml
+++ b/install/kubernetes/helm/subcharts/ingress/templates/deployment.yaml
@@ -37,10 +37,8 @@ spec:
         args:
         - proxy
        - ingress
-{{- if $.Values.global.proxy.proxyDomain }}
        - --domain
-        - {{ $.Values.global.proxy.proxyDomain }}
-{{- end }}
+        - $(POD_NAMESPACE).svc.{{ .Values.global.proxy.clusterDomain }}
        - --log_output_level
        - 'info'
        - --drainDuration
diff --git a/install/kubernetes/helm/subcharts/kiali/templates/gateway.yaml b/install/kubernetes/helm/subcharts/kiali/templates/gateway.yaml
index 94fceddd04c6..5a193a0a67aa 100644
--- a/install/kubernetes/helm/subcharts/kiali/templates/gateway.yaml
+++ b/install/kubernetes/helm/subcharts/kiali/templates/gateway.yaml
@@ -35,7 +35,7 @@ metadata:
     heritage: {{ .Release.Service }}
     release: {{ .Release.Name }}
 spec:
-  host: kiali.{{ .Release.Namespace }}.svc.cluster.local
+  host: kiali.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   trafficPolicy:
     tls:
       mode: DISABLE
@@ -60,7 +60,7 @@ spec:
   - port: 15029
     route:
     - destination:
-        host: kiali.{{ .Release.Namespace }}.svc.cluster.local
+        host: kiali.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
        port:
          number: 20001
 {{- end }}
diff --git a/install/kubernetes/helm/subcharts/mixer/templates/config.yaml b/install/kubernetes/helm/subcharts/mixer/templates/config.yaml
index d4fdba880876..c0d9fb933f5f 100644
--- a/install/kubernetes/helm/subcharts/mixer/templates/config.yaml
+++ b/install/kubernetes/helm/subcharts/mixer/templates/config.yaml
@@ -980,7 +980,7 @@ metadata:
     heritage: {{ .Release.Service }}
     release: {{ .Release.Name }}
 spec:
-  host: istio-policy.{{ .Release.Namespace }}.svc.cluster.local
+  host: istio-policy.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   trafficPolicy:
 {{- if .Values.global.controlPlaneSecurityEnabled }}
     portLevelSettings:
@@ -1007,7 +1007,7 @@ metadata:
     heritage: {{ .Release.Service }}
     release: {{ .Release.Name }}
 spec:
-  host: istio-telemetry.{{ .Release.Namespace }}.svc.cluster.local
+  host: istio-telemetry.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   trafficPolicy:
 {{- if .Values.global.controlPlaneSecurityEnabled }}
     portLevelSettings:
diff --git a/install/kubernetes/helm/subcharts/mixer/templates/deployment.yaml b/install/kubernetes/helm/subcharts/mixer/templates/deployment.yaml
index 2311496685b0..36f283ef71fe 100644
--- a/install/kubernetes/helm/subcharts/mixer/templates/deployment.yaml
+++ b/install/kubernetes/helm/subcharts/mixer/templates/deployment.yaml
@@ -90,6 +90,8 @@ name: http-envoy-prom
           args:
           - proxy
+          - --domain
+          - $(POD_NAMESPACE).svc.{{ $.Values.global.proxy.clusterDomain }}
           - --serviceCluster
           - istio-policy
           - --templateFile
@@ -226,10 +228,8 @@ name: http-envoy-prom
          args:
          - proxy
-{{- if $.Values.global.proxy.proxyDomain }}
          - --domain
-          - {{ $.Values.global.proxy.proxyDomain }}
-{{- end }}
+          - $(POD_NAMESPACE).svc.{{ .Values.global.proxy.clusterDomain }}
          - --serviceCluster
          - istio-telemetry
          - --templateFile
diff --git a/install/kubernetes/helm/subcharts/mixer/templates/service.yaml b/install/kubernetes/helm/subcharts/mixer/templates/service.yaml
index 6ac6f0af70ba..06a29a198520 100644
--- a/install/kubernetes/helm/subcharts/mixer/templates/service.yaml
+++ b/install/kubernetes/helm/subcharts/mixer/templates/service.yaml
@@ -23,6 +23,9 @@ spec:
 {{- if eq $key "telemetry" }}
   - name: prometheus
     port: 42422
+{{- if $spec.sessionAffinityEnabled }}
+  sessionAffinity: ClientIP
+{{- end }}
 {{- end }}
   selector:
     istio: mixer
diff --git a/install/kubernetes/helm/subcharts/mixer/values.yaml b/install/kubernetes/helm/subcharts/mixer/values.yaml
index 63e37ed34120..db70915b87d5 100644
--- a/install/kubernetes/helm/subcharts/mixer/values.yaml
+++ b/install/kubernetes/helm/subcharts/mixer/values.yaml
@@ -29,6 +29,7 @@ telemetry:
   podDisruptionBudget: {}
   # minAvailable: 1
   # maxUnavailable: 1
+  sessionAffinityEnabled: false
   podAnnotations: {}
   nodeSelector: {}
diff --git a/install/kubernetes/helm/subcharts/pilot/templates/deployment.yaml b/install/kubernetes/helm/subcharts/pilot/templates/deployment.yaml
index 28ce335a9356..98241a96a1d2 100644
--- a/install/kubernetes/helm/subcharts/pilot/templates/deployment.yaml
+++ b/install/kubernetes/helm/subcharts/pilot/templates/deployment.yaml
@@ -45,10 +45,8 @@ spec:
          args:
          - "discovery"
          - --monitoringAddr=:{{ .Values.global.monitoringPort }}
-{{- if $.Values.global.proxy.discoveryDomain }}
          - --domain
-          - {{ $.Values.global.proxy.discoveryDomain }}
-{{- end }}
+          - {{ .Values.global.proxy.clusterDomain }}
 {{- if .Values.global.oneNamespace }}
          - "-a"
          - {{ .Release.Namespace }}
@@ -68,6 +66,9 @@ spec:
 {{- else }}
          - --mcpServerAddrs=mcp://istio-galley.{{ $.Release.Namespace }}.svc:9901
 {{- end }}
+{{- end }}
+{{- if .Values.global.trustDomain }}
+          - --trust-domain={{ .Values.global.trustDomain }}
 {{- end }}
          ports:
          - containerPort: 8080
@@ -130,10 +131,8 @@ spec:
          - containerPort: 15011
          args:
          - proxy
-{{- if $.Values.global.proxy.proxyDomain }}
          - --domain
-          - {{ $.Values.global.proxy.proxyDomain }}
-{{- end }}
+          - $(POD_NAMESPACE).svc.{{ .Values.global.proxy.clusterDomain }}
          - --serviceCluster
          - istio-pilot
          - --templateFile
diff --git a/install/kubernetes/helm/subcharts/pilot/templates/meshexpansion.yaml b/install/kubernetes/helm/subcharts/pilot/templates/meshexpansion.yaml
index ab4a4a73b701..4f3d595706f1 100644
--- a/install/kubernetes/helm/subcharts/pilot/templates/meshexpansion.yaml
+++ b/install/kubernetes/helm/subcharts/pilot/templates/meshexpansion.yaml
@@ -12,7 +12,7 @@ metadata:
     release: {{ .Release.Name }}
 spec:
   hosts:
-  - istio-pilot.{{ $.Release.Namespace }}.svc.cluster.local
+  - istio-pilot.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   gateways:
   - meshexpansion-ilb-gateway
   tcp:
@@ -20,21 +20,21 @@ spec:
     - port: 15011
     route:
     - destination:
-        host: istio-pilot.{{ $.Release.Namespace }}.svc.cluster.local
+        host: istio-pilot.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
        port:
          number: 15011
   - match:
     - port: 15010
     route:
     - destination:
-        host: istio-pilot.{{ $.Release.Namespace }}.svc.cluster.local
+        host: istio-pilot.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
        port:
          number: 15010
   - match:
     - port: 5353
     route:
     - destination:
-        host: kube-dns.kube-system.svc.cluster.local
+        host: kube-dns.kube-system.svc.{{ .Values.global.proxy.clusterDomain }}
        port:
          number: 53
 ---
@@ -52,7 +52,7 @@ metadata:
     release: {{ .Release.Name }}
 spec:
   hosts:
-  - istio-pilot.{{ $.Release.Namespace }}.svc.cluster.local
+  - istio-pilot.{{ $.Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   gateways:
   - meshexpansion-gateway
   tcp:
@@ -60,7 +60,7 @@ spec:
     - port: 15011
     route:
     - destination:
-        host: istio-pilot.{{ $.Release.Namespace }}.svc.cluster.local
+        host: istio-pilot.{{ $.Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
        port:
          number: 15011
 ---
@@ -78,7 +78,7 @@ metadata:
     heritage: {{ .Release.Service }}
     release: {{ .Release.Name }}
 spec:
-  host: istio-pilot.{{ .Release.Namespace }}.svc.cluster.local
+  host: istio-pilot.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   trafficPolicy:
     portLevelSettings:
     - port:
diff --git a/install/kubernetes/helm/subcharts/prometheus/templates/configmap.yaml b/install/kubernetes/helm/subcharts/prometheus/templates/configmap.yaml
index 1d84e5ee6e61..a0bc1e797cd1 100644
--- a/install/kubernetes/helm/subcharts/prometheus/templates/configmap.yaml
+++ b/install/kubernetes/helm/subcharts/prometheus/templates/configmap.yaml
@@ -233,9 +233,10 @@ data:
         - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
           action: keep
           regex: true
-        - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status]
-          action: drop
-          regex: (.+)
+        # Keep target if there's no sidecar or if prometheus.io/scheme is explicitly set to "http"
+        - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_prometheus_io_scheme]
+          action: keep
+          regex: ((;.*)|(.*;http))
        - source_labels: [__meta_kubernetes_pod_annotation_istio_mtls]
          action: drop
          regex: (true)
@@ -275,6 +276,9 @@ data:
        - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls]
          action: keep
          regex: (([^;]+);([^;]*))|(([^;]*);(true))
+        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
+          action: drop
+          regex: (http)
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
diff --git a/install/kubernetes/helm/subcharts/prometheus/templates/gateway.yaml b/install/kubernetes/helm/subcharts/prometheus/templates/gateway.yaml
index 21390af9f078..5f92943b47ae 100644
--- a/install/kubernetes/helm/subcharts/prometheus/templates/gateway.yaml
+++ b/install/kubernetes/helm/subcharts/prometheus/templates/gateway.yaml
@@ -35,7 +35,7 @@ metadata:
     heritage: {{ .Release.Service }}
     release: {{ .Release.Name }}
 spec:
-  host: prometheus.{{ .Release.Namespace }}.svc.cluster.local
+  host: prometheus.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   trafficPolicy:
     tls:
       mode: DISABLE
@@ -60,7 +60,7 @@ spec:
   - port: 15030
     route:
     - destination:
-        host: prometheus.{{ .Release.Namespace }}.svc.cluster.local
+        host: prometheus.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
        port:
          number: 9090
 {{- end }}
diff --git a/install/kubernetes/helm/subcharts/security/templates/enable-mesh-mtls.yaml b/install/kubernetes/helm/subcharts/security/templates/enable-mesh-mtls.yaml
index 352a4943ffbb..d6ab1adb107b 100644
--- a/install/kubernetes/helm/subcharts/security/templates/enable-mesh-mtls.yaml
+++ b/install/kubernetes/helm/subcharts/security/templates/enable-mesh-mtls.yaml
@@ -46,7 +46,7 @@ metadata:
     heritage: {{ .Release.Service }}
     release: {{ .Release.Name }}
 spec:
-  host: "kubernetes.default.svc.cluster.local"
+  host: "kubernetes.default.svc.{{ .Values.global.proxy.clusterDomain }}"
   trafficPolicy:
     tls:
       mode: DISABLE
diff --git a/install/kubernetes/helm/subcharts/security/templates/meshexpansion.yaml b/install/kubernetes/helm/subcharts/security/templates/meshexpansion.yaml
index 691e240f3447..581ce964a7d0 100644
--- a/install/kubernetes/helm/subcharts/security/templates/meshexpansion.yaml
+++ b/install/kubernetes/helm/subcharts/security/templates/meshexpansion.yaml
@@ -13,7 +13,7 @@ metadata:
     istio: citadel
 spec:
   hosts:
-  - istio-citadel.{{ $.Release.Namespace }}.svc.cluster.local
+  - istio-citadel.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   gateways:
   - meshexpansion-ilb-gateway
   tcp:
@@ -21,7 +21,7 @@ spec:
   - port: 8060
     route:
     - destination:
-        host: istio-citadel.{{ $.Release.Namespace }}.svc.cluster.local
+        host: istio-citadel.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
        port:
          number: 8060
 ---
@@ -40,7 +40,7 @@ metadata:
     istio: citadel
 spec:
   hosts:
-  - istio-citadel.{{ $.Release.Namespace }}.svc.cluster.local
+  - istio-citadel.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   gateways:
   - meshexpansion-gateway
   tcp:
@@ -48,7 +48,7 @@ spec:
   - port: 8060
     route:
     - destination:
-        host: istio-citadel.{{ $.Release.Namespace }}.svc.cluster.local
+        host: istio-citadel.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
        port:
          number: 8060
 ---
diff --git a/install/kubernetes/helm/subcharts/tracing/templates/deployment-jaeger.yaml b/install/kubernetes/helm/subcharts/tracing/templates/deployment-jaeger.yaml
index 7aa4373690f5..5506f9e50a41 100644
--- a/install/kubernetes/helm/subcharts/tracing/templates/deployment-jaeger.yaml
+++ b/install/kubernetes/helm/subcharts/tracing/templates/deployment-jaeger.yaml
@@ -56,10 +56,8 @@ spec:
            value: "9411"
          - name: MEMORY_MAX_TRACES
            value: "{{ .Values.jaeger.memory.max_traces }}"
-{{- if .Values.jaeger.contextPath }}
          - name: QUERY_BASE_PATH
-            value: {{ .Values.jaeger.contextPath }}
-{{- end }}
+            value: {{ if .Values.contextPath }} {{ .Values.contextPath }} {{ else }} /{{ .Values.provider }} {{ end }}
         livenessProbe:
           httpGet:
             path: /
diff --git a/install/kubernetes/helm/subcharts/tracing/templates/gateway.yaml b/install/kubernetes/helm/subcharts/tracing/templates/gateway.yaml
index 45f830481671..1aafcaf9b9ad 100644
--- a/install/kubernetes/helm/subcharts/tracing/templates/gateway.yaml
+++ b/install/kubernetes/helm/subcharts/tracing/templates/gateway.yaml
@@ -25,7 +25,7 @@ metadata:
   name: tracing
   namespace: {{ .Release.Namespace }}
 spec:
-  host: tracing.{{ .Release.Namespace }}.svc.cluster.local
+  host: tracing.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
   trafficPolicy:
     tls:
       mode: DISABLE
@@ -45,7 +45,7 @@ spec:
   - port: 15032
     route:
     - destination:
-        host: tracing.{{ .Release.Namespace }}.svc.cluster.local
+        host: tracing.{{ .Release.Namespace }}.svc.{{ .Values.global.proxy.clusterDomain }}
        port:
          number: 80
 ---
diff --git a/install/kubernetes/helm/subcharts/tracing/templates/ingress-jaeger.yaml b/install/kubernetes/helm/subcharts/tracing/templates/ingress-jaeger.yaml
deleted file mode 100644
index fbd08a7f35fd..000000000000
--- a/install/kubernetes/helm/subcharts/tracing/templates/ingress-jaeger.yaml
+++ /dev/null
@@ -1,41 +0,0 @@
-{{ if (.Values.jaeger.ingress.enabled) and eq .Values.provider "jaeger" }}
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: jaeger-query
-  namespace: {{ .Release.Namespace }}
-  labels:
-    app: jaeger
-    chart: {{ template "tracing.chart" . }}
-    heritage: {{ .Release.Service }}
-    release: {{ .Release.Name }}
-  annotations:
-    {{- range $key, $value := .Values.jaeger.ingress.annotations }}
-      {{ $key }}: {{ $value | quote }}
-    {{- end }}
-spec:
-  rules:
-{{- if .Values.ingress.hosts }}
-  {{- range $host := .Values.jaeger.ingress.hosts }}
-    - host: {{ $host }}
-      http:
-        paths:
-          - path: {{ if $.Values.jaeger.contextPath }} {{ $.Values.jaeger.contextPath }} {{ else }} / {{ end }}
-            backend:
-              serviceName: jaeger-query
-              servicePort: 16686
-
-  {{- end -}}
-{{- else }}
-  - http:
-      paths:
-        - path: {{ if .Values.jaeger.contextPath }} {{ .Values.jaeger.contextPath }} {{ else }} / {{ end }}
-          backend:
-            serviceName: jaeger-query
-            servicePort: 16686
-{{- end }}
-  {{- if .Values.jaeger.ingress.tls }}
-  tls:
-{{ toYaml .Values.jaeger.ingress.tls | indent 4 }}
-  {{- end -}}
-{{- end -}}
diff --git a/install/kubernetes/helm/subcharts/tracing/templates/ingress.yaml b/install/kubernetes/helm/subcharts/tracing/templates/ingress.yaml
index ecd722f95da3..575dd37860f5 100644
--- a/install/kubernetes/helm/subcharts/tracing/templates/ingress.yaml
+++ b/install/kubernetes/helm/subcharts/tracing/templates/ingress.yaml
@@ -1,6 +1,4 @@
 {{- if .Values.ingress.enabled -}}
-{{- $serviceName := "zipkin" -}}
-{{- $servicePort := .Values.service.externalPort -}}
 apiVersion: extensions/v1beta1
 kind: Ingress
 metadata:
@@ -17,16 +15,26 @@ metadata:
   {{- end }}
 spec:
   rules:
+{{- if .Values.ingress.hosts }}
   {{- range $host := .Values.ingress.hosts }}
     - host: {{ $host }}
       http:
         paths:
-          - path: /
+          - path: {{ if .Values.contextPath }} {{ .Values.contextPath }} {{ else }} /{{ .Values.provider }} {{ end }}
            backend:
-              serviceName: {{ $serviceName }}
-              servicePort: {{ $servicePort }}
+              serviceName: tracing
+              servicePort: 80
+
   {{- end -}}
-  {{- if .Values.ingress.tls }}
+{{- else }}
+  - http:
+      paths:
+        - path: {{ if .Values.contextPath }} {{ .Values.contextPath }} {{ else }} /{{ .Values.provider }} {{ end }}
+          backend:
+            serviceName: tracing
+            servicePort: 80
+{{- end }}
+  {{- if .Values.ingress.tls }}
   tls:
 {{ toYaml .Values.ingress.tls | indent 4 }}
   {{- end -}}
diff --git a/install/kubernetes/helm/subcharts/tracing/values.yaml b/install/kubernetes/helm/subcharts/tracing/values.yaml
index cd408687ea6d..b2f4f71b7b2b 100644
--- a/install/kubernetes/helm/subcharts/tracing/values.yaml
+++ b/install/kubernetes/helm/subcharts/tracing/values.yaml
@@ -11,20 +11,6 @@ jaeger:
   tag: 1.8
   memory:
     max_traces: 50000
-  contextPath: /jaeger
-  ingress:
-    enabled: false
-    # Used to create an Ingress record.
-    hosts:
-      - jaeger.local
-    annotations:
-      # kubernetes.io/ingress.class: nginx
-      # kubernetes.io/tls-acme: "true"
-    tls:
-      # Secrets must be manually created in the namespace.
-      # - secretName: jaeger-tls
-      #   hosts:
-      #     - jaeger.local
 zipkin:
   hub: docker.io/openzipkin
@@ -57,7 +43,7 @@ ingress:
   enabled: false
   # Used to create an Ingress record.
   hosts:
-    - tracing.local
+    # - tracing.local
   annotations:
     # kubernetes.io/ingress.class: nginx
     # kubernetes.io/tls-acme: "true"
diff --git a/istioctl/pkg/writer/envoy/clusters/clusters.go b/istioctl/pkg/writer/envoy/clusters/clusters.go
index 439dbf6c72f3..f5ddc12cf5db 100644
--- a/istioctl/pkg/writer/envoy/clusters/clusters.go
+++ b/istioctl/pkg/writer/envoy/clusters/clusters.go
@@ -80,17 +80,17 @@ func (e *EndpointFilter) Verify(host *adminapi.HostStatus, cluster string) bool
     if e.Address == "" && e.Port == 0 && e.Cluster == "" && e.Status == "" {
         return true
     }
-    if e.Address != "" && strings.ToLower(retrieveEndpointAddress(host)) != strings.ToLower(e.Address) {
+    if e.Address != "" && !strings.EqualFold(retrieveEndpointAddress(host), e.Address) {
         return false
     }
     if e.Port != 0 && retrieveEndpointPort(host) != e.Port {
         return false
     }
-    if e.Cluster != "" && strings.ToLower(cluster) != strings.ToLower(e.Cluster) {
+    if e.Cluster != "" && !strings.EqualFold(cluster, e.Cluster) {
         return false
     }
     status := retrieveEndpointStatus(host)
-    if e.Status != "" && strings.ToLower(core.HealthStatus_name[int32(status)]) != strings.ToLower(e.Status) {
+    if e.Status != "" && !strings.EqualFold(core.HealthStatus_name[int32(status)], e.Status) {
         return false
     }
     return true
diff --git a/istioctl/pkg/writer/envoy/configdump/listener.go b/istioctl/pkg/writer/envoy/configdump/listener.go
index e6c532505820..eb442a41e939 100644
--- a/istioctl/pkg/writer/envoy/configdump/listener.go
+++ b/istioctl/pkg/writer/envoy/configdump/listener.go
@@ -45,13 +45,13 @@ func (l *ListenerFilter) Verify(listener *xdsapi.Listener) bool {
     if l.Address == "" && l.Port == 0 && l.Type == "" {
         return true
     }
-    if l.Address != "" && strings.ToLower(retrieveListenerAddress(listener)) != strings.ToLower(l.Address) {
+    if l.Address != "" && !strings.EqualFold(retrieveListenerAddress(listener), l.Address) {
         return false
     }
     if l.Port != 0 && retrieveListenerPort(listener) != l.Port {
         return false
     }
-    if l.Type != "" && strings.ToLower(retrieveListenerType(listener)) != strings.ToLower(l.Type) {
+    if l.Type != "" && !strings.EqualFold(retrieveListenerType(listener), l.Type) {
         return false
     }
     return true
diff --git a/mixer/adapter/cloudwatch/client.go b/mixer/adapter/cloudwatch/client.go
index ecaf1fc6443b..a13a3f6e191f 100644
--- a/mixer/adapter/cloudwatch/client.go
+++ b/mixer/adapter/cloudwatch/client.go
@@ -17,6 +17,7 @@ package cloudwatch
 import (
     "github.com/aws/aws-sdk-go/aws/session"
     "github.com/aws/aws-sdk-go/service/cloudwatch"
+    "github.com/aws/aws-sdk-go/service/cloudwatchlogs"
 )
 // newCloudWatchClient creates a cloudwatch client
@@ -25,3 +26,10 @@ func newCloudWatchClient() *cloudwatch.CloudWatch {
         SharedConfigState: session.SharedConfigEnable,
     })))
 }
+
+// newCloudWatchLogsClient creates a cloudwatchlogs client
+func newCloudWatchLogsClient() *cloudwatchlogs.CloudWatchLogs {
+    return cloudwatchlogs.New(session.Must(session.NewSessionWithOptions(session.Options{
+        SharedConfigState: session.SharedConfigEnable,
+    })))
+}
diff --git a/mixer/adapter/cloudwatch/cloudwatch.go b/mixer/adapter/cloudwatch/cloudwatch.go
index 025500285e86..015230041a48 100644
--- a/mixer/adapter/cloudwatch/cloudwatch.go
+++ b/mixer/adapter/cloudwatch/cloudwatch.go
@@ -20,12 +20,16 @@ package cloudwatch
 import (
     "context"
     "fmt"
+    "html/template"
+    "strings"
 
     "github.com/aws/aws-sdk-go/service/cloudwatch/cloudwatchiface"
+    "github.com/aws/aws-sdk-go/service/cloudwatchlogs/cloudwatchlogsiface"
 
     istio_policy_v1beta1 "istio.io/api/policy/v1beta1"
     "istio.io/istio/mixer/adapter/cloudwatch/config"
     "istio.io/istio/mixer/pkg/adapter"
+    "istio.io/istio/mixer/template/logentry"
     "istio.io/istio/mixer/template/metric"
 )
@@ -50,34 +54,55 @@ var supportedDurationUnits = map[config.Params_MetricDatum_Unit]bool{
 type (
     builder struct {
-        adpCfg      *config.Params
-        metricTypes map[string]*metric.Type
+        adpCfg        *config.Params
+        metricTypes   map[string]*metric.Type
+        logEntryTypes map[string]*logentry.Type
     }
 
     // handler holds data for the cloudwatch adapter handler
     handler struct {
-        metricTypes map[string]*metric.Type
-        env         adapter.Env
-        cfg         *config.Params
-        cloudwatch  cloudwatchiface.CloudWatchAPI
+        metricTypes       map[string]*metric.Type
+        logEntryTypes     map[string]*logentry.Type
+        logEntryTemplates map[string]*template.Template
+        env               adapter.Env
+        cfg               *config.Params
+        cloudwatch        cloudwatchiface.CloudWatchAPI
+        cloudwatchlogs    cloudwatchlogsiface.CloudWatchLogsAPI
     }
 )
 
 // ensure types implement the requisite interfaces
-var _ metric.HandlerBuilder = &builder{}
-var _ metric.Handler = &handler{}
+var (
+    _ metric.HandlerBuilder   = &builder{}
+    _ metric.Handler          = &handler{}
+    _ logentry.HandlerBuilder = &builder{}
+    _ logentry.Handler        = &handler{}
+)
 
 ///////////////// Configuration-time Methods ///////////////
 
-// newHandler initializes a cloudwatch handler
-func newHandler(metricTypes map[string]*metric.Type, env adapter.Env, cfg *config.Params, cloudwatch cloudwatchiface.CloudWatchAPI) adapter.Handler {
-    return &handler{metricTypes: metricTypes, env: env, cfg: cfg, cloudwatch: cloudwatch}
+// newHandler initializes both cloudwatch and cloudwatchlogs handler
+func newHandler(metricTypes map[string]*metric.Type, logEntryTypes map[string]*logentry.Type, logEntryTemplates map[string]*template.Template,
+    env adapter.Env, cfg *config.Params, cloudwatch cloudwatchiface.CloudWatchAPI, cloudwatchlogs cloudwatchlogsiface.CloudWatchLogsAPI) adapter.Handler {
+    return &handler{metricTypes: metricTypes, logEntryTypes: logEntryTypes, logEntryTemplates: logEntryTemplates, env: env, cfg: cfg,
+        cloudwatch: cloudwatch, cloudwatchlogs: cloudwatchlogs}
 }
 
 // adapter.HandlerBuilder#Build
 func (b *builder) Build(ctx context.Context, env adapter.Env) (adapter.Handler, error) {
     cloudwatch := newCloudWatchClient()
-    return newHandler(b.metricTypes, env, b.adpCfg, cloudwatch), nil
+    cloudwatchlogs := newCloudWatchLogsClient()
+    templates := make(map[string]*template.Template)
+    for name, l := range b.adpCfg.GetLogs() {
+        if strings.TrimSpace(l.PayloadTemplate) == "" {
+            l.PayloadTemplate = defaultTemplate
+        }
+        tmpl, err := template.New(name).Parse(l.PayloadTemplate)
+        if err == nil {
+            templates[name] = tmpl
+        }
+    }
+    return newHandler(b.metricTypes, b.logEntryTypes, templates, env, b.adpCfg, cloudwatch, cloudwatchlogs), nil
 }
 
 // adapter.HandlerBuilder#SetAdapterConfig
@@ -119,6 +144,25 @@ func (b *builder) Validate() (ce *adapter.ConfigErrors) {
             ce = ce.Append("dimensions", fmt.Errorf("metrics can only contain %v dimensions", dimensionLimit))
         }
     }
+
+    // LogGroupName should not be empty
+    if len(b.adpCfg.GetLogGroupName()) == 0 {
+        ce = ce.Append("log_group_name", fmt.Errorf("log_group_name should not be empty"))
+    }
+    // LogStreamName should not be empty
+    if len(b.adpCfg.GetLogStreamName()) == 0 {
+        ce = ce.Append("log_stream_name", fmt.Errorf("log_stream_name should not be empty"))
+    }
+    // Logs info should not be nil
+    if b.adpCfg.GetLogs() == nil {
+        ce = ce.Append("logs", fmt.Errorf("logs info should not be nil"))
+    }
+    // variables in the attributes should not be empty
+    for _, v := range b.logEntryTypes {
+        if len(v.Variables) == 0 {
+            ce = ce.Append("instancevariables", fmt.Errorf("instance variables should not be empty"))
+        }
+    }
     return ce
 }
 
@@ -127,6 +171,11 @@ func (b *builder) SetMetricTypes(types map[string]*metric.Type) {
     b.metricTypes = types
 }
 
+// logentry.HandlerBuilder#SetLogEntryTypes
+func (b *builder) SetLogEntryTypes(types map[string]*logentry.Type) {
+    b.logEntryTypes = types
+}
+
 // HandleMetric sends metrics to cloudwatch
 func (h *handler) HandleMetric(ctx context.Context, insts []*metric.Instance) error {
     metricData := h.generateMetricData(insts)
@@ -134,6 +183,13 @@ func (h *handler) HandleMetric(ctx context.Context, insts []*metric.Instance) er
     return err
 }
 
+// HandleLogEntry sends logentries to cloudwatchlogs
+func (h *handler) HandleLogEntry(ctx context.Context, insts []*logentry.Instance) error {
+    logentryData := h.generateLogEntryData(insts)
+    _, err := h.sendLogEntriesToCloudWatch(logentryData)
+    return err
+}
+
 // Close implements client closing functionality if necessary
 func (h *handler) Close() error {
     return nil
@@ -143,9 +199,10 @@ func (h *handler) Close() error {
 func GetInfo() adapter.Info {
     return adapter.Info{
         Name:        "cloudwatch",
-        Description: "Sends metrics to cloudwatch",
+        Description: "Sends metrics to cloudwatch and logs to cloudwatchlogs",
         SupportedTemplates: []string{
             metric.TemplateName,
+            logentry.TemplateName,
         },
         NewBuilder: func() adapter.HandlerBuilder { return &builder{} },
         DefaultConfig: &config.Params{},
diff --git a/mixer/adapter/cloudwatch/cloudwatch_test.go b/mixer/adapter/cloudwatch/cloudwatch_test.go
index 9741bf139811..002d07413d9a 100644
--- a/mixer/adapter/cloudwatch/cloudwatch_test.go
+++ b/mixer/adapter/cloudwatch/cloudwatch_test.go
@@ -15,42 +15,151 @@ package cloudwatch
 import (
+    "context"
     "strings"
     "testing"
 
     istio_policy_v1beta1 "istio.io/api/policy/v1beta1"
     "istio.io/istio/mixer/adapter/cloudwatch/config"
+    "istio.io/istio/mixer/pkg/adapter/test"
+    "istio.io/istio/mixer/template/logentry"
     "istio.io/istio/mixer/template/metric"
 )
 
+func TestBasic(t *testing.T) {
+    info := GetInfo()
+
+    if !containsTemplate(info.SupportedTemplates, logentry.TemplateName, metric.TemplateName) {
+        t.Error("Didn't find all expected supported templates")
+    }
+
+    cfg := info.DefaultConfig
+    b := info.NewBuilder()
+
+    params := cfg.(*config.Params)
+    params.Namespace = "default"
+    params.LogGroupName = "group"
+    params.LogStreamName = "stream"
+    params.Logs = make(map[string]*config.Params_LogInfo)
+    params.Logs["empty"] = &config.Params_LogInfo{PayloadTemplate: " "}
+    params.Logs["other"] = &config.Params_LogInfo{PayloadTemplate: `{{or (.source_ip) "-"}}`}
+
+    b.SetAdapterConfig(cfg)
+
+    if err := b.Validate(); err != nil {
+        t.Errorf("Got error %v, expecting success", err)
+    }
+
+    handler, err := b.Build(context.Background(), test.NewEnv(t))
+    if err != nil {
+        t.Errorf("Got error %v, expecting success", err)
+    }
+
+    if err = handler.Close(); err != nil {
+        t.Errorf("Got error %v, expecting success", err)
+    }
+}
+
+func containsTemplate(s []string, template ...string) bool {
+    found := 0
+    for _, a := range s {
+        for _, t := range template {
+            if t == a {
+                found++
+            }
+        }
+    }
+    return found == len(template)
+}
+
 func TestValidate(t *testing.T) {
     b := &builder{}
     cases := []struct {
         cfg            *config.Params
         metricTypes    map[string]*metric.Type
+        logentryTypes  map[string]*logentry.Type
         expectedErrors string
     }{
         // config missing namespace
         {
-            &config.Params{},
+            &config.Params{
+                LogGroupName:  "logGroupName",
+                LogStreamName: "logStreamName",
+            },
             map[string]*metric.Type{
                 "metric": {
                     Value: istio_policy_v1beta1.STRING,
                 },
             },
+            map[string]*logentry.Type{
+                "logentry": {
+                    Variables: map[string]istio_policy_v1beta1.ValueType{
+                        "sourceUser": istio_policy_v1beta1.STRING,
+                    },
+                },
+            },
             "namespace",
         },
+        // config missing logGroupName
+        {
+            &config.Params{
+                Namespace:     "namespace",
+                LogStreamName: "logStreamName",
+            },
+            map[string]*metric.Type{
+                "metric": {
+                    Value: istio_policy_v1beta1.STRING,
+                },
+            },
+            map[string]*logentry.Type{
+                "logentry": {
+                    Variables: map[string]istio_policy_v1beta1.ValueType{
+                        "sourceUser": istio_policy_v1beta1.STRING,
+                    },
+                },
+            },
+            "log_group_name",
+        },
+        // config missing logStreamName
+        {
+            &config.Params{
+                Namespace:    "namespace",
+                LogGroupName: "logGroupName",
+            },
+            map[string]*metric.Type{
+                "metric": {
+                    Value: istio_policy_v1beta1.STRING,
+                },
+            },
+            map[string]*logentry.Type{
+                "logentry": {
+                    Variables: map[string]istio_policy_v1beta1.ValueType{
+                        "sourceUser": istio_policy_v1beta1.STRING,
+                    },
+                },
+            },
+            "log_stream_name",
+        },
         // length of instance and handler metrics does not match
         {
             &config.Params{
-                Namespace: "namespace",
+                Namespace:     "namespace",
+                LogGroupName:  "logGroupName",
+                LogStreamName: "logStreamName",
             },
             map[string]*metric.Type{
                 "metric": {
                     Value: istio_policy_v1beta1.STRING,
                 },
             },
+            map[string]*logentry.Type{
+                "logentry": {
+                    Variables: map[string]istio_policy_v1beta1.ValueType{
+                        "sourceUser": istio_policy_v1beta1.STRING,
+                    },
+                },
+            },
             "metricInfo",
         },
         // instance and handler metrics do not match
@@ -60,12 +169,21 @@ func TestValidate(t *testing.T) {
                 MetricInfo: map[string]*config.Params_MetricDatum{
                     "metric": {},
                 },
+                LogGroupName:  "logGroupName",
+                LogStreamName: "logStreamName",
             },
             map[string]*metric.Type{
                 "newmetric": {
                     Value: istio_policy_v1beta1.STRING,
                 },
             },
+            map[string]*logentry.Type{
+                "logentry": {
+                    Variables: map[string]istio_policy_v1beta1.ValueType{
+                        "sourceUser": istio_policy_v1beta1.STRING,
+                    },
+                },
+            },
             "metricInfo",
         },
         // validate duration metric has a duration unit
@@ -77,12 +195,21 @@ func TestValidate(t *testing.T) {
                     Unit: config.Count,
                 },
             },
+                LogGroupName:  "logGroupName",
+                LogStreamName: "logStreamName",
             },
             map[string]*metric.Type{
                 "duration": {
                     Value: istio_policy_v1beta1.DURATION,
                 },
             },
+            map[string]*logentry.Type{
+                "logentry": {
+                    Variables:
map[string]istio_policy_v1beta1.ValueType{ + "sourceUser": istio_policy_v1beta1.STRING, + }, + }, + }, "duration", }, // validate that value can be handled by the cloudwatch handler @@ -94,12 +221,21 @@ func TestValidate(t *testing.T) { Unit: config.Count, }, }, + LogGroupName: "logGroupName", + LogStreamName: "logStreamName", }, map[string]*metric.Type{ "dns": { Value: istio_policy_v1beta1.DNS_NAME, }, }, + map[string]*logentry.Type{ + "logentry": { + Variables: map[string]istio_policy_v1beta1.ValueType{ + "sourceUser": istio_policy_v1beta1.STRING, + }, + }, + }, "value type", }, // validate dimension cloudwatch_limits @@ -111,6 +247,8 @@ func TestValidate(t *testing.T) { Unit: config.Count, }, }, + LogGroupName: "logGroupName", + LogStreamName: "logStreamName", }, map[string]*metric.Type{ "dns": { @@ -130,12 +268,36 @@ func TestValidate(t *testing.T) { }, }, }, + map[string]*logentry.Type{ + "logentry": { + Variables: map[string]istio_policy_v1beta1.ValueType{ + "sourceUser": istio_policy_v1beta1.STRING, + }, + }, + }, "dimensions", }, + // config missing variables + { + &config.Params{ + LogGroupName: "logGroupName", + LogStreamName: "logStreamName", + }, + map[string]*metric.Type{ + "metric": { + Value: istio_policy_v1beta1.STRING, + }, + }, + map[string]*logentry.Type{ + "logentry": {}, + }, + "namespace", + }, } for _, c := range cases { b.SetMetricTypes(c.metricTypes) + b.SetLogEntryTypes(c.logentryTypes) b.SetAdapterConfig(c.cfg) errs := b.Validate() diff --git a/mixer/adapter/cloudwatch/config/adapter.cloudwatch.config.pb.html b/mixer/adapter/cloudwatch/config/adapter.cloudwatch.config.pb.html index 75a5d84be3a4..deee948ffa9b 100644 --- a/mixer/adapter/cloudwatch/config/adapter.cloudwatch.config.pb.html +++ b/mixer/adapter/cloudwatch/config/adapter.cloudwatch.config.pb.html @@ -4,17 +4,23 @@ location: https://istio.io/docs/reference/config/policy-and-telemetry/adapters/cloudwatch.html layout: protoc-gen-docs generator: protoc-gen-docs +supported_templates: 
logentry supported_templates: metric aliases: - /docs/reference/config/adapters/cloudwatch.html -number_of_entries: 3 +number_of_entries: 4 ---

The CloudWatch adapter enables Istio to deliver metrics to -Amazon CloudWatch.

+Amazon CloudWatch. +Amazon CloudWatch and logs to +Amazon CloudWatchLogs.

-

To push metrics to CloudWatch using this adapter you must provide AWS credentials the AWS SDK. +

To push metrics and logs to CloudWatch using this adapter you must provide AWS credentials to the AWS SDK. (see AWS docs).

+

To activate the CloudWatch adapter, operators need to provide configuration for the +cloudwatch adapter.

+

The handler configuration must contain the same metrics as the instance configuration. The metrics specified in both instance and handler configurations will be sent to CloudWatch.

@@ -47,6 +53,53 @@

Params

A map of Istio metric name to CloudWatch metric info.

+ + + +logGroupName +string + +

The name of the log group in cloudwatchlogs.

+ + + + +logStreamName +string + +

The name of the log stream in cloudwatchlogs.

+ + + + +logs +map<string, Params.LogInfo> + +

A map of Istio logentry name to CloudWatch logentry info.

+ + + + + + +

Params.LogInfo

+
+ + + + + + + + + + + + + diff --git a/mixer/adapter/cloudwatch/config/cloudwatch.yaml b/mixer/adapter/cloudwatch/config/cloudwatch.yaml index 6d0c8dfbe4a5..5ecac692c332 100644 --- a/mixer/adapter/cloudwatch/config/cloudwatch.yaml +++ b/mixer/adapter/cloudwatch/config/cloudwatch.yaml @@ -10,5 +10,5 @@ spec: session_based: true templates: - metric - config: CqMjCixtaXhlci9hZGFwdGVyL2Nsb3Vkd2F0Y2gvY29uZmlnL2NvbmZpZy5wcm90bxIZYWRhcHRlci5jbG91ZHdhdGNoLmNvbmZpZyKGBgoGUGFyYW1zEhwKCW5hbWVzcGFjZRgBIAEoCVIJbmFtZXNwYWNlElIKC21ldHJpY19pbmZvGAIgAygLMjEuYWRhcHRlci5jbG91ZHdhdGNoLmNvbmZpZy5QYXJhbXMuTWV0cmljSW5mb0VudHJ5UgptZXRyaWNJbmZvGmwKD01ldHJpY0luZm9FbnRyeRIQCgNrZXkYASABKAlSA2tleRJDCgV2YWx1ZRgCIAEoCzItLmFkYXB0ZXIuY2xvdWR3YXRjaC5jb25maWcuUGFyYW1zLk1ldHJpY0RhdHVtUgV2YWx1ZToCOAEamwQKC01ldHJpY0RhdHVtEkYKBHVuaXQYAyABKA4yMi5hZGFwdGVyLmNsb3Vkd2F0Y2guY29uZmlnLlBhcmFtcy5NZXRyaWNEYXR1bS5Vbml0UgR1bml0IsMDCgRVbml0EggKBE5vbmUQABILCgdTZWNvbmRzEAESEAoMTWljcm9zZWNvbmRzEAISEAoMTWlsbGlzZWNvbmRzEAMSCQoFQ291bnQQBBIJCgVCeXRlcxAFEg0KCUtpbG9ieXRlcxAGEg0KCU1lZ2FieXRlcxAHEg0KCUdpZ2FieXRlcxAIEg0KCVRlcmFieXRlcxAJEggKBEJpdHMQChIMCghLaWxvYml0cxALEgwKCE1lZ2FiaXRzEAwSDAoIR2lnYWJpdHMQDRIMCghUZXJhYml0cxAOEgsKB1BlcmNlbnQQDxIQCgxCeXRlc19TZWNvbmQQEBIUChBLaWxvYnl0ZXNfU2Vjb25kEBESFAoQTWVnYWJ5dGVzX1NlY29uZBASEhQKEEdpZ2FieXRlc19TZWNvbmQQExIUChBUZXJhYnl0ZXNfU2Vjb25kEBQSDwoLQml0c19TZWNvbmQQFRITCg9LaWxvYml0c19TZWNvbmQQFhITCg9NZWdhYml0c19TZWNvbmQQFxITCg9HaWdhYml0c19TZWNvbmQQGBITCg9UZXJhYml0c19TZWNvbmQQGRIQCgxDb3VudF9TZWNvbmQQGkIIWgZjb25maWdKvBwKBhIEDgBQAQq/BAoBDBIDDgASMrQEIENvcHlyaWdodCAyMDE4IElzdGlvIEF1dGhvcnMKCiBMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxpY2Vuc2UiKTsKIHlvdSBtYXkgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4KIFlvdSBtYXkgb2J0YWluIGEgY29weSBvZiB0aGUgTGljZW5zZSBhdAoKICAgICBodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKCiBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiBkaXN0cmlidXRlZCB1bmRlciB0a
GUgTGljZW5zZSBpcyBkaXN0cmlidXRlZCBvbiBhbiAiQVMgSVMiIEJBU0lTLAogV0lUSE9VVCBXQVJSQU5USUVTIE9SIENPTkRJVElPTlMgT0YgQU5ZIFR5cGUsIGVpdGhlciBleHByZXNzIG9yIGltcGxpZWQuCiBTZWUgdGhlIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kCiBsaW1pdGF0aW9ucyB1bmRlciB0aGUgTGljZW5zZS4KCoUHCgECEgMhCCEa9AQgVGhlIENsb3VkV2F0Y2ggYWRhcHRlciBlbmFibGVzIElzdGlvIHRvIGRlbGl2ZXIgbWV0cmljcyB0bwogW0FtYXpvbiBDbG91ZFdhdGNoXShodHRwczovL2F3cy5hbWF6b24uY29tL2Nsb3Vkd2F0Y2gvKS4KCiBUbyBwdXNoIG1ldHJpY3MgdG8gQ2xvdWRXYXRjaCB1c2luZyB0aGlzIGFkYXB0ZXIgeW91IG11c3QgcHJvdmlkZSBBV1MgY3JlZGVudGlhbHMgdGhlIEFXUyBTREsuCiAoc2VlIFtBV1MgZG9jc10oaHR0cHM6Ly9kb2NzLmF3cy5hbWF6b24uY29tL3Nkay1mb3ItamF2YS92MS9kZXZlbG9wZXItZ3VpZGUvc2V0dXAtY3JlZGVudGlhbHMuaHRtbCkpLgoKIFRoZSBoYW5kbGVyIGNvbmZpZ3VyYXRpb24gbXVzdCBjb250YWluIHRoZSBzYW1lIG1ldHJpY3MgYXMgdGhlIGluc3RhbmNlIGNvbmZpZ3VyYXRpb24uCiBUaGUgbWV0cmljcyBzcGVjaWZpZWQgaW4gYm90aCBpbnN0YW5jZSBhbmQgaGFuZGxlciBjb25maWd1cmF0aW9ucyB3aWxsIGJlIHNlbnQgdG8gQ2xvdWRXYXRjaC4KCiBUaGlzIGFkYXB0ZXIgc3VwcG9ydHMgdGhlIFttZXRyaWMgdGVtcGxhdGVdKGh0dHBzOi8vaXN0aW8uaW8vZG9jcy9yZWZlcmVuY2UvY29uZmlnL3BvbGljeS1hbmQtdGVsZW1ldHJ5L3RlbXBsYXRlcy9tZXRyaWMvKS4KMoMCICR0aXRsZTogQ2xvdWRXYXRjaAogJGRlc2NyaXB0aW9uOiBBZGFwdGVyIGZvciBjbG91ZHdhdGNoIG1ldHJpY3MuCiAkbG9jYXRpb246IGh0dHBzOi8vaXN0aW8uaW8vZG9jcy9yZWZlcmVuY2UvY29uZmlnL3BvbGljeS1hbmQtdGVsZW1ldHJ5L2FkYXB0ZXJzL2Nsb3Vkd2F0Y2guaHRtbAogJHN1cHBvcnRlZF90ZW1wbGF0ZXM6IG1ldHJpYwogJGFsaWFzZXM6CiAkICAtIC9kb2NzL3JlZmVyZW5jZS9jb25maWcvYWRhcHRlcnMvY2xvdWR3YXRjaC5odG1sCgoICgEIEgMjABsKCwoECOcHABIDIwAbCgwKBQjnBwACEgMjBxEKDQoGCOcHAAIAEgMjBxEKDgoHCOcHAAIAARIDIwcRCgwKBQjnBwAHEgMjEhoKOQoCBAASBCYAUAEaLSBDb25maWd1cmF0aW9uIGZvciB0aGUgYGNsb3Vkd2F0Y2hgIGFkYXB0ZXIuCgoKCgMEAAESAyYIDgorCgQEAAIAEgMoBBkaHiBDbG91ZFdhdGNoIG1ldHJpYyBuYW1lc3BhY2UuCgoNCgUEAAIABBIEKAQmEAoMCgUEAAIABRIDKAQKCgwKBQQAAgABEgMoCxQKDAoFBAACAAMSAygXGApECgQEAAIBEgMrBC0aNyBBIG1hcCBvZiBJc3RpbyBtZXRyaWMgbmFtZSB0byBDbG91ZFdhdGNoIG1ldHJpYyBpbmZvLgoKDQoFBAACAQQSBCsEKBkKDAoFBAACAQYSAysEHAoMCgUEAAIBARIDKx0oCgwKBQQAA
gEDEgMrKywKKQoEBAADARIELgRPBRobIENsb3VkV2F0Y2ggbWV0cmljIGZvcm1hdC4KCgwKBQQAAwEBEgMuDBcKDgoGBAADAQQAEgQvBksHCg4KBwQAAwEEAAESAy8LDwoPCggEAAMBBAACABIDMAgRChAKCQQAAwEEAAIAARIDMAgMChAKCQQAAwEEAAIAAhIDMA8QCg8KCAQAAwEEAAIBEgMxCBQKEAoJBAADAQQAAgEBEgMxCA8KEAoJBAADAQQAAgECEgMxEhMKDwoIBAADAQQAAgISAzIIGQoQCgkEAAMBBAACAgESAzIIFAoQCgkEAAMBBAACAgISAzIXGAoPCggEAAMBBAACAxIDMwgZChAKCQQAAwEEAAIDARIDMwgUChAKCQQAAwEEAAIDAhIDMxcYCg8KCAQAAwEEAAIEEgM0CBIKEAoJBAADAQQAAgQBEgM0CA0KEAoJBAADAQQAAgQCEgM0EBEKDwoIBAADAQQAAgUSAzUIEgoQCgkEAAMBBAACBQESAzUIDQoQCgkEAAMBBAACBQISAzUQEQoPCggEAAMBBAACBhIDNggWChAKCQQAAwEEAAIGARIDNggRChAKCQQAAwEEAAIGAhIDNhQVCg8KCAQAAwEEAAIHEgM3CBYKEAoJBAADAQQAAgcBEgM3CBEKEAoJBAADAQQAAgcCEgM3FBUKDwoIBAADAQQAAggSAzgIFgoQCgkEAAMBBAACCAESAzgIEQoQCgkEAAMBBAACCAISAzgUFQoPCggEAAMBBAACCRIDOQgWChAKCQQAAwEEAAIJARIDOQgRChAKCQQAAwEEAAIJAhIDORQVCg8KCAQAAwEEAAIKEgM6CBIKEAoJBAADAQQAAgoBEgM6CAwKEAoJBAADAQQAAgoCEgM6DxEKDwoIBAADAQQAAgsSAzsIFgoQCgkEAAMBBAACCwESAzsIEAoQCgkEAAMBBAACCwISAzsTFQoPCggEAAMBBAACDBIDPAgWChAKCQQAAwEEAAIMARIDPAgQChAKCQQAAwEEAAIMAhIDPBMVCg8KCAQAAwEEAAINEgM9CBYKEAoJBAADAQQAAg0BEgM9CBAKEAoJBAADAQQAAg0CEgM9ExUKDwoIBAADAQQAAg4SAz4IFgoQCgkEAAMBBAACDgESAz4IEAoQCgkEAAMBBAACDgISAz4TFQoPCggEAAMBBAACDxIDPwgVChAKCQQAAwEEAAIPARIDPwgPChAKCQQAAwEEAAIPAhIDPxIUCg8KCAQAAwEEAAIQEgNACBoKEAoJBAADAQQAAhABEgNACBQKEAoJBAADAQQAAhACEgNAFxkKDwoIBAADAQQAAhESA0EIHgoQCgkEAAMBBAACEQESA0EIGAoQCgkEAAMBBAACEQISA0EbHQoPCggEAAMBBAACEhIDQggeChAKCQQAAwEEAAISARIDQggYChAKCQQAAwEEAAISAhIDQhsdCg8KCAQAAwEEAAITEgNDCB4KEAoJBAADAQQAAhMBEgNDCBgKEAoJBAADAQQAAhMCEgNDGx0KDwoIBAADAQQAAhQSA0QIHgoQCgkEAAMBBAACFAESA0QIGAoQCgkEAAMBBAACFAISA0QbHQoPCggEAAMBBAACFRIDRQgZChAKCQQAAwEEAAIVARIDRQgTChAKCQQAAwEEAAIVAhIDRRYYCg8KCAQAAwEEAAIWEgNGCB0KEAoJBAADAQQAAhYBEgNGCBcKEAoJBAADAQQAAhYCEgNGGhwKDwoIBAADAQQAAhcSA0cIHQoQCgkEAAMBBAACFwESA0cIFwoQCgkEAAMBBAACFwISA0caHAoPCggEAAMBBAACGBIDSAgdChAKCQQAAwEEAAIYARIDSAgXChAKCQQAAwEEAAIYAhIDSBocCg8KCAQAAwEEAAIZEgNJCB0KEAoJBAADAQQAAhkBEgNJCBcKEAoJBAADAQQAAhkCEgNJGhwKDwoIBAADAQQAAhoSA0oIGgoQCgkEAAMBB
AACGgESA0oIFAoQCgkEAAMBBAACGgISA0oXGQq4AQoGBAADAQIAEgNOBhQaqAEgVGhlIHVuaXQgb2YgdGhlIG1ldHJpYy4gTXVzdCBiZSB2YWxpZCBjbG91ZHdhdGNoIHVuaXQgdmFsdWUuCiBbQ2xvdWRXYXRjaCBkb2NzXShodHRwczovL2RvY3MuYXdzLmFtYXpvbi5jb20vQW1hem9uQ2xvdWRXYXRjaC9sYXRlc3QvQVBJUmVmZXJlbmNlL0FQSV9NZXRyaWNEYXR1bS5odG1sKQoKDwoHBAADAQIABBIETgZLBwoOCgcEAAMBAgAGEgNOBgoKDgoHBAADAQIAARIDTgsPCg4KBwQAAwECAAMSA04SE2IGcHJvdG8z + config: CvYtCixtaXhlci9hZGFwdGVyL2Nsb3Vkd2F0Y2gvY29uZmlnL2NvbmZpZy5wcm90bxIZYWRhcHRlci5jbG91ZHdhdGNoLmNvbmZpZyKvCAoGUGFyYW1zEhwKCW5hbWVzcGFjZRgBIAEoCVIJbmFtZXNwYWNlElIKC21ldHJpY19pbmZvGAIgAygLMjEuYWRhcHRlci5jbG91ZHdhdGNoLmNvbmZpZy5QYXJhbXMuTWV0cmljSW5mb0VudHJ5UgptZXRyaWNJbmZvEiQKDmxvZ19ncm91cF9uYW1lGAQgASgJUgxsb2dHcm91cE5hbWUSJgoPbG9nX3N0cmVhbV9uYW1lGAUgASgJUg1sb2dTdHJlYW1OYW1lEj8KBGxvZ3MYBiADKAsyKy5hZGFwdGVyLmNsb3Vkd2F0Y2guY29uZmlnLlBhcmFtcy5Mb2dzRW50cnlSBGxvZ3MabAoPTWV0cmljSW5mb0VudHJ5EhAKA2tleRgBIAEoCVIDa2V5EkMKBXZhbHVlGAIgASgLMi0uYWRhcHRlci5jbG91ZHdhdGNoLmNvbmZpZy5QYXJhbXMuTWV0cmljRGF0dW1SBXZhbHVlOgI4ARqbBAoLTWV0cmljRGF0dW0SRgoEdW5pdBgDIAEoDjIyLmFkYXB0ZXIuY2xvdWR3YXRjaC5jb25maWcuUGFyYW1zLk1ldHJpY0RhdHVtLlVuaXRSBHVuaXQiwwMKBFVuaXQSCAoETm9uZRAAEgsKB1NlY29uZHMQARIQCgxNaWNyb3NlY29uZHMQAhIQCgxNaWxsaXNlY29uZHMQAxIJCgVDb3VudBAEEgkKBUJ5dGVzEAUSDQoJS2lsb2J5dGVzEAYSDQoJTWVnYWJ5dGVzEAcSDQoJR2lnYWJ5dGVzEAgSDQoJVGVyYWJ5dGVzEAkSCAoEQml0cxAKEgwKCEtpbG9iaXRzEAsSDAoITWVnYWJpdHMQDBIMCghHaWdhYml0cxANEgwKCFRlcmFiaXRzEA4SCwoHUGVyY2VudBAPEhAKDEJ5dGVzX1NlY29uZBAQEhQKEEtpbG9ieXRlc19TZWNvbmQQERIUChBNZWdhYnl0ZXNfU2Vjb25kEBISFAoQR2lnYWJ5dGVzX1NlY29uZBATEhQKEFRlcmFieXRlc19TZWNvbmQQFBIPCgtCaXRzX1NlY29uZBAVEhMKD0tpbG9iaXRzX1NlY29uZBAWEhMKD01lZ2FiaXRzX1NlY29uZBAXEhMKD0dpZ2FiaXRzX1NlY29uZBAYEhMKD1RlcmFiaXRzX1NlY29uZBAZEhAKDENvdW50X1NlY29uZBAaGmIKCUxvZ3NFbnRyeRIQCgNrZXkYASABKAlSA2tleRI/CgV2YWx1ZRgCIAEoCzIpLmFkYXB0ZXIuY2xvdWR3YXRjaC5jb25maWcuUGFyYW1zLkxvZ0luZm9SBXZhbHVlOgI4ARo0CgdMb2dJbmZvEikKEHBheWxvYWRfdGVtcGxhdGUYASABKAlSD3BheWxvYWRUZW1wbGF0ZUIIWgZjb25maWdK5iQKBhIEDgBlAQq/BAoBDBIDDgASMrQEIENvcHlyaWdodCAyMDE4IElzdG
lvIEF1dGhvcnMKCiBMaWNlbnNlZCB1bmRlciB0aGUgQXBhY2hlIExpY2Vuc2UsIFZlcnNpb24gMi4wICh0aGUgIkxpY2Vuc2UiKTsKIHlvdSBtYXkgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4KIFlvdSBtYXkgb2J0YWluIGEgY29weSBvZiB0aGUgTGljZW5zZSBhdAoKICAgICBodHRwOi8vd3d3LmFwYWNoZS5vcmcvbGljZW5zZXMvTElDRU5TRS0yLjAKCiBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiBkaXN0cmlidXRlZCB1bmRlciB0aGUgTGljZW5zZSBpcyBkaXN0cmlidXRlZCBvbiBhbiAiQVMgSVMiIEJBU0lTLAogV0lUSE9VVCBXQVJSQU5USUVTIE9SIENPTkRJVElPTlMgT0YgQU5ZIFR5cGUsIGVpdGhlciBleHByZXNzIG9yIGltcGxpZWQuCiBTZWUgdGhlIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kCiBsaW1pdGF0aW9ucyB1bmRlciB0aGUgTGljZW5zZS4KCpMKCgECEgMnCCEa4gcgVGhlIENsb3VkV2F0Y2ggYWRhcHRlciBlbmFibGVzIElzdGlvIHRvIGRlbGl2ZXIgbWV0cmljcyB0bwogW0FtYXpvbiBDbG91ZFdhdGNoXShodHRwczovL2F3cy5hbWF6b24uY29tL2Nsb3Vkd2F0Y2gvKS4KIFtBbWF6b24gQ2xvdWRXYXRjaF0oaHR0cHM6Ly9hd3MuYW1hem9uLmNvbS9jbG91ZHdhdGNoLykgYW5kIGxvZ3MgdG8KIFtBbWF6b24gQ2xvdWRXYXRjaExvZ3NdKGh0dHBzOi8vZG9jcy5hd3MuYW1hem9uLmNvbS9BbWF6b25DbG91ZFdhdGNoL2xhdGVzdC9sb2dzL1doYXRJc0Nsb3VkV2F0Y2hMb2dzLmh0bWwvKS4KCiBUbyBwdXNoIG1ldHJpY3MgYW5kIGxvZ3MgdG8gQ2xvdWRXYXRjaCB1c2luZyB0aGlzIGFkYXB0ZXIgeW91IG11c3QgcHJvdmlkZSBBV1MgY3JlZGVudGlhbHMgdG8gdGhlIEFXUyBTREsuCiAoc2VlIFtBV1MgZG9jc10oaHR0cHM6Ly9kb2NzLmF3cy5hbWF6b24uY29tL3Nkay1mb3ItamF2YS92MS9kZXZlbG9wZXItZ3VpZGUvc2V0dXAtY3JlZGVudGlhbHMuaHRtbCkpLgoKIFRvIGFjdGl2YXRlIHRoZSBDbG91ZFdhdGNoIGFkYXB0ZXIsIG9wZXJhdG9ycyBuZWVkIHRvIHByb3ZpZGUgY29uZmlndXJhdGlvbiBmb3IgdGhlCiBbY2xvdWR3YXRjaCBhZGFwdGVyXShodHRwczovL2lzdGlvLmlvL2RvY3MvcmVmZXJlbmNlL2NvbmZpZy9hZGFwdGVycy9jbG91ZHdhdGNoLmh0bWwpLgoKIFRoZSBoYW5kbGVyIGNvbmZpZ3VyYXRpb24gbXVzdCBjb250YWluIHRoZSBzYW1lIG1ldHJpY3MgYXMgdGhlIGluc3RhbmNlIGNvbmZpZ3VyYXRpb24uCiBUaGUgbWV0cmljcyBzcGVjaWZpZWQgaW4gYm90aCBpbnN0YW5jZSBhbmQgaGFuZGxlciBjb25maWd1cmF0aW9ucyB3aWxsIGJlIHNlbnQgdG8gQ2xvdWRXYXRjaC4KCiBUaGlzIGFkYXB0ZXIgc3VwcG9ydHMgdGhlIFttZXRyaWMgdGVtcGxhdGVdKGh0dHBzOi8vaXN0aW8uaW8vZG9jcy9yZWZlcm
VuY2UvY29uZmlnL3BvbGljeS1hbmQtdGVsZW1ldHJ5L3RlbXBsYXRlcy9tZXRyaWMvKS4KMqMCICR0aXRsZTogQ2xvdWRXYXRjaAogJGRlc2NyaXB0aW9uOiBBZGFwdGVyIGZvciBjbG91ZHdhdGNoIG1ldHJpY3MuCiAkbG9jYXRpb246IGh0dHBzOi8vaXN0aW8uaW8vZG9jcy9yZWZlcmVuY2UvY29uZmlnL3BvbGljeS1hbmQtdGVsZW1ldHJ5L2FkYXB0ZXJzL2Nsb3Vkd2F0Y2guaHRtbAogJHN1cHBvcnRlZF90ZW1wbGF0ZXM6IGxvZ2VudHJ5CiAkc3VwcG9ydGVkX3RlbXBsYXRlczogbWV0cmljCiAkYWxpYXNlczoKICQgIC0gL2RvY3MvcmVmZXJlbmNlL2NvbmZpZy9hZGFwdGVycy9jbG91ZHdhdGNoLmh0bWwKCggKAQgSAykAGwoLCgQI5wcAEgMpABsKDAoFCOcHAAISAykHEQoNCgYI5wcAAgASAykHEQoOCgcI5wcAAgABEgMpBxEKDAoFCOcHAAcSAykSGgo5CgIEABIELABlARotIENvbmZpZ3VyYXRpb24gZm9yIHRoZSBgY2xvdWR3YXRjaGAgYWRhcHRlci4KCgoKAwQAARIDLAgOCisKBAQAAgASAy4EGRoeIENsb3VkV2F0Y2ggbWV0cmljIG5hbWVzcGFjZS4KCg0KBQQAAgAEEgQuBCwQCgwKBQQAAgAFEgMuBAoKDAoFBAACAAESAy4LFAoMCgUEAAIAAxIDLhcYCkQKBAQAAgESAzEELRo3IEEgbWFwIG9mIElzdGlvIG1ldHJpYyBuYW1lIHRvIENsb3VkV2F0Y2ggbWV0cmljIGluZm8uCgoNCgUEAAIBBBIEMQQuGQoMCgUEAAIBBhIDMQQcCgwKBQQAAgEBEgMxHSgKDAoFBAACAQMSAzErLAopCgQEAAMBEgQ0BFUFGhsgQ2xvdWRXYXRjaCBtZXRyaWMgZm9ybWF0LgoKDAoFBAADAQESAzQMFwoOCgYEAAMBBAASBDUGUQcKDgoHBAADAQQAARIDNQsPCg8KCAQAAwEEAAIAEgM2CBEKEAoJBAADAQQAAgABEgM2CAwKEAoJBAADAQQAAgACEgM2DxAKDwoIBAADAQQAAgESAzcIFAoQCgkEAAMBBAACAQESAzcIDwoQCgkEAAMBBAACAQISAzcSEwoPCggEAAMBBAACAhIDOAgZChAKCQQAAwEEAAICARIDOAgUChAKCQQAAwEEAAICAhIDOBcYCg8KCAQAAwEEAAIDEgM5CBkKEAoJBAADAQQAAgMBEgM5CBQKEAoJBAADAQQAAgMCEgM5FxgKDwoIBAADAQQAAgQSAzoIEgoQCgkEAAMBBAACBAESAzoIDQoQCgkEAAMBBAACBAISAzoQEQoPCggEAAMBBAACBRIDOwgSChAKCQQAAwEEAAIFARIDOwgNChAKCQQAAwEEAAIFAhIDOxARCg8KCAQAAwEEAAIGEgM8CBYKEAoJBAADAQQAAgYBEgM8CBEKEAoJBAADAQQAAgYCEgM8FBUKDwoIBAADAQQAAgcSAz0IFgoQCgkEAAMBBAACBwESAz0IEQoQCgkEAAMBBAACBwISAz0UFQoPCggEAAMBBAACCBIDPggWChAKCQQAAwEEAAIIARIDPggRChAKCQQAAwEEAAIIAhIDPhQVCg8KCAQAAwEEAAIJEgM/CBYKEAoJBAADAQQAAgkBEgM/CBEKEAoJBAADAQQAAgkCEgM/FBUKDwoIBAADAQQAAgoSA0AIEgoQCgkEAAMBBAACCgESA0AIDAoQCgkEAAMBBAACCgISA0APEQoPCggEAAMBBAACCxIDQQgWChAKCQQAAwEEAAILARIDQQgQChAKCQQAAwEEAAILAhIDQRMVCg8KCAQAAwEEAAIMEgNCCBYKEAoJBAADAQQAAgwBEgNCCBAKEAoJBAADAQQAAg
wCEgNCExUKDwoIBAADAQQAAg0SA0MIFgoQCgkEAAMBBAACDQESA0MIEAoQCgkEAAMBBAACDQISA0MTFQoPCggEAAMBBAACDhIDRAgWChAKCQQAAwEEAAIOARIDRAgQChAKCQQAAwEEAAIOAhIDRBMVCg8KCAQAAwEEAAIPEgNFCBUKEAoJBAADAQQAAg8BEgNFCA8KEAoJBAADAQQAAg8CEgNFEhQKDwoIBAADAQQAAhASA0YIGgoQCgkEAAMBBAACEAESA0YIFAoQCgkEAAMBBAACEAISA0YXGQoPCggEAAMBBAACERIDRwgeChAKCQQAAwEEAAIRARIDRwgYChAKCQQAAwEEAAIRAhIDRxsdCg8KCAQAAwEEAAISEgNICB4KEAoJBAADAQQAAhIBEgNICBgKEAoJBAADAQQAAhICEgNIGx0KDwoIBAADAQQAAhMSA0kIHgoQCgkEAAMBBAACEwESA0kIGAoQCgkEAAMBBAACEwISA0kbHQoPCggEAAMBBAACFBIDSggeChAKCQQAAwEEAAIUARIDSggYChAKCQQAAwEEAAIUAhIDShsdCg8KCAQAAwEEAAIVEgNLCBkKEAoJBAADAQQAAhUBEgNLCBMKEAoJBAADAQQAAhUCEgNLFhgKDwoIBAADAQQAAhYSA0wIHQoQCgkEAAMBBAACFgESA0wIFwoQCgkEAAMBBAACFgISA0waHAoPCggEAAMBBAACFxIDTQgdChAKCQQAAwEEAAIXARIDTQgXChAKCQQAAwEEAAIXAhIDTRocCg8KCAQAAwEEAAIYEgNOCB0KEAoJBAADAQQAAhgBEgNOCBcKEAoJBAADAQQAAhgCEgNOGhwKDwoIBAADAQQAAhkSA08IHQoQCgkEAAMBBAACGQESA08IFwoQCgkEAAMBBAACGQISA08aHAoPCggEAAMBBAACGhIDUAgaChAKCQQAAwEEAAIaARIDUAgUChAKCQQAAwEEAAIaAhIDUBcZCrgBCgYEAAMBAgASA1QGFBqoASBUaGUgdW5pdCBvZiB0aGUgbWV0cmljLiBNdXN0IGJlIHZhbGlkIGNsb3Vkd2F0Y2ggdW5pdCB2YWx1ZS4KIFtDbG91ZFdhdGNoIGRvY3NdKGh0dHBzOi8vZG9jcy5hd3MuYW1hem9uLmNvbS9BbWF6b25DbG91ZFdhdGNoL2xhdGVzdC9BUElSZWZlcmVuY2UvQVBJX01ldHJpY0RhdHVtLmh0bWwpCgoPCgcEAAMBAgAEEgRUBlEHCg4KBwQAAwECAAYSA1QGCgoOCgcEAAMBAgABEgNUCw8KDgoHBAADAQIAAxIDVBITCjsKBAQAAgISA1gEHhouIFRoZSBuYW1lIG9mIHRoZSBsb2cgZ3JvdXAgaW4gY2xvdWR3YXRjaGxvZ3MuCgoNCgUEAAICBBIEWARVBQoMCgUEAAICBRIDWAQKCgwKBQQAAgIBEgNYCxkKDAoFBAACAgMSA1gcHQo8CgQEAAIDEgNbBB8aLyBUaGUgbmFtZSBvZiB0aGUgbG9nIHN0cmVhbSBpbiBjbG91ZHdhdGNobG9ncy4KCg0KBQQAAgMEEgRbBFgeCgwKBQQAAgMFEgNbBAoKDAoFBAACAwESA1sLGgoMCgUEAAIDAxIDWx0eCkgKBAQAAgQSA14EIho7IEEgbWFwIG9mIElzdGlvIGxvZ2VudHJ5IG5hbWUgdG8gQ2xvdWRXYXRjaCBsb2dlbnRyeSBpbmZvLgoKDQoFBAACBAQSBF4EWx8KDAoFBAACBAYSA14EGAoMCgUEAAIEARIDXhkdCgwKBQQAAgQDEgNeICEKDAoEBAADAxIEYAVkBQoMCgUEAAMDARIDYA0UCswBCgYEAAMDAgASA2MGIhq8ASBBIGdvbGFuZyB0ZXh0L3RlbXBsYXRlIHRlbXBsYXRlIHRoYXQgd2lsbCBiZSBleGVjdXRlZCB0byBjb25zdHJ1Y3QgdGhlIHBheWxvYWQgZm9yIH
RoaXMgbG9nIGVudHJ5LgogSXQgd2lsbCBiZSBnaXZlbiB0aGUgZnVsbCBzZXQgb2YgdmFyaWFibGVzIGZvciB0aGUgbG9nIHRvIHVzZSB0byBjb25zdHJ1Y3QgaXRzIHJlc3VsdC4KCg8KBwQAAwMCAAQSBGMGYBUKDgoHBAADAwIABRIDYwYMCg4KBwQAAwMCAAESA2MNHQoOCgcEAAMDAgADEgNjICFiBnByb3RvMw== --- diff --git a/mixer/adapter/cloudwatch/config/config.pb.go b/mixer/adapter/cloudwatch/config/config.pb.go index 2384b51b33c1..71eab713a8cc 100644 --- a/mixer/adapter/cloudwatch/config/config.pb.go +++ b/mixer/adapter/cloudwatch/config/config.pb.go @@ -6,10 +6,15 @@ The CloudWatch adapter enables Istio to deliver metrics to [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/). + [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) and logs to + [Amazon CloudWatchLogs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html/). - To push metrics to CloudWatch using this adapter you must provide AWS credentials the AWS SDK. + To push metrics and logs to CloudWatch using this adapter you must provide AWS credentials to the AWS SDK. (see [AWS docs](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html)). + To activate the CloudWatch adapter, operators need to provide configuration for the + [cloudwatch adapter](https://istio.io/docs/reference/config/adapters/cloudwatch.html). + The handler configuration must contain the same metrics as the instance configuration. The metrics specified in both instance and handler configurations will be sent to CloudWatch. @@ -147,6 +152,12 @@ type Params struct { Namespace string `protobuf:"bytes,1,opt,name=namespace,proto3" json:"namespace,omitempty"` // A map of Istio metric name to CloudWatch metric info. MetricInfo map[string]*Params_MetricDatum `protobuf:"bytes,2,rep,name=metric_info,json=metricInfo" json:"metric_info,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value"` + // The name of the log group in cloudwatchlogs. 
+ LogGroupName string `protobuf:"bytes,4,opt,name=log_group_name,json=logGroupName,proto3" json:"log_group_name,omitempty"` + // The name of the log stream in cloudwatchlogs. + LogStreamName string `protobuf:"bytes,5,opt,name=log_stream_name,json=logStreamName,proto3" json:"log_stream_name,omitempty"` + // A map of Istio logentry name to CloudWatch logentry info. + Logs map[string]*Params_LogInfo `protobuf:"bytes,6,rep,name=logs" json:"logs,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value"` } func (m *Params) Reset() { *m = Params{} } @@ -167,6 +178,27 @@ func (m *Params) GetMetricInfo() map[string]*Params_MetricDatum { return nil } +func (m *Params) GetLogGroupName() string { + if m != nil { + return m.LogGroupName + } + return "" +} + +func (m *Params) GetLogStreamName() string { + if m != nil { + return m.LogStreamName + } + return "" +} + +func (m *Params) GetLogs() map[string]*Params_LogInfo { + if m != nil { + return m.Logs + } + return nil +} + // CloudWatch metric format. type Params_MetricDatum struct { // The unit of the metric. Must be valid cloudwatch unit value. @@ -185,9 +217,27 @@ func (m *Params_MetricDatum) GetUnit() Params_MetricDatum_Unit { return None } +type Params_LogInfo struct { + // A golang text/template template that will be executed to construct the payload for this log entry. + // It will be given the full set of variables for the log to use to construct its result. 
+ PayloadTemplate string `protobuf:"bytes,1,opt,name=payload_template,json=payloadTemplate,proto3" json:"payload_template,omitempty"` +} + +func (m *Params_LogInfo) Reset() { *m = Params_LogInfo{} } +func (*Params_LogInfo) ProtoMessage() {} +func (*Params_LogInfo) Descriptor() ([]byte, []int) { return fileDescriptorConfig, []int{0, 3} } + +func (m *Params_LogInfo) GetPayloadTemplate() string { + if m != nil { + return m.PayloadTemplate + } + return "" +} + func init() { proto.RegisterType((*Params)(nil), "adapter.cloudwatch.config.Params") proto.RegisterType((*Params_MetricDatum)(nil), "adapter.cloudwatch.config.Params.MetricDatum") + proto.RegisterType((*Params_LogInfo)(nil), "adapter.cloudwatch.config.Params.LogInfo") proto.RegisterEnum("adapter.cloudwatch.config.Params_MetricDatum_Unit", Params_MetricDatum_Unit_name, Params_MetricDatum_Unit_value) } func (x Params_MetricDatum_Unit) String() string { @@ -227,6 +277,20 @@ func (this *Params) Equal(that interface{}) bool { return false } } + if this.LogGroupName != that1.LogGroupName { + return false + } + if this.LogStreamName != that1.LogStreamName { + return false + } + if len(this.Logs) != len(that1.Logs) { + return false + } + for i := range this.Logs { + if !this.Logs[i].Equal(that1.Logs[i]) { + return false + } + } return true } func (this *Params_MetricDatum) Equal(that interface{}) bool { @@ -253,11 +317,35 @@ func (this *Params_MetricDatum) Equal(that interface{}) bool { } return true } +func (this *Params_LogInfo) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*Params_LogInfo) + if !ok { + that2, ok := that.(Params_LogInfo) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.PayloadTemplate != that1.PayloadTemplate { + return false + } + return true +} func (this *Params) GoString() string { if this == nil { return "nil" } - s := make([]string, 0, 6) 
+ s := make([]string, 0, 9) s = append(s, "&config.Params{") s = append(s, "Namespace: "+fmt.Sprintf("%#v", this.Namespace)+",\n") keysForMetricInfo := make([]string, 0, len(this.MetricInfo)) @@ -273,6 +361,21 @@ func (this *Params) GoString() string { if this.MetricInfo != nil { s = append(s, "MetricInfo: "+mapStringForMetricInfo+",\n") } + s = append(s, "LogGroupName: "+fmt.Sprintf("%#v", this.LogGroupName)+",\n") + s = append(s, "LogStreamName: "+fmt.Sprintf("%#v", this.LogStreamName)+",\n") + keysForLogs := make([]string, 0, len(this.Logs)) + for k, _ := range this.Logs { + keysForLogs = append(keysForLogs, k) + } + sortkeys.Strings(keysForLogs) + mapStringForLogs := "map[string]*Params_LogInfo{" + for _, k := range keysForLogs { + mapStringForLogs += fmt.Sprintf("%#v: %#v,", k, this.Logs[k]) + } + mapStringForLogs += "}" + if this.Logs != nil { + s = append(s, "Logs: "+mapStringForLogs+",\n") + } s = append(s, "}") return strings.Join(s, "") } @@ -286,6 +389,16 @@ func (this *Params_MetricDatum) GoString() string { s = append(s, "}") return strings.Join(s, "") } +func (this *Params_LogInfo) GoString() string { + if this == nil { + return "nil" + } + s := make([]string, 0, 5) + s = append(s, "&config.Params_LogInfo{") + s = append(s, "PayloadTemplate: "+fmt.Sprintf("%#v", this.PayloadTemplate)+",\n") + s = append(s, "}") + return strings.Join(s, "") +} func valueToGoStringConfig(v interface{}, typ string) string { rv := reflect.ValueOf(v) if rv.IsNil() { @@ -343,6 +456,46 @@ func (m *Params) MarshalTo(dAtA []byte) (int, error) { } } } + if len(m.LogGroupName) > 0 { + dAtA[i] = 0x22 + i++ + i = encodeVarintConfig(dAtA, i, uint64(len(m.LogGroupName))) + i += copy(dAtA[i:], m.LogGroupName) + } + if len(m.LogStreamName) > 0 { + dAtA[i] = 0x2a + i++ + i = encodeVarintConfig(dAtA, i, uint64(len(m.LogStreamName))) + i += copy(dAtA[i:], m.LogStreamName) + } + if len(m.Logs) > 0 { + for k, _ := range m.Logs { + dAtA[i] = 0x32 + i++ + v := m.Logs[k] + msgSize := 0 + if v 
!= nil { + msgSize = v.Size() + msgSize += 1 + sovConfig(uint64(msgSize)) + } + mapSize := 1 + len(k) + sovConfig(uint64(len(k))) + msgSize + i = encodeVarintConfig(dAtA, i, uint64(mapSize)) + dAtA[i] = 0xa + i++ + i = encodeVarintConfig(dAtA, i, uint64(len(k))) + i += copy(dAtA[i:], k) + if v != nil { + dAtA[i] = 0x12 + i++ + i = encodeVarintConfig(dAtA, i, uint64(v.Size())) + n2, err := v.MarshalTo(dAtA[i:]) + if err != nil { + return 0, err + } + i += n2 + } + } + } return i, nil } @@ -369,6 +522,30 @@ func (m *Params_MetricDatum) MarshalTo(dAtA []byte) (int, error) { return i, nil } +func (m *Params_LogInfo) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalTo(dAtA) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *Params_LogInfo) MarshalTo(dAtA []byte) (int, error) { + var i int + _ = i + var l int + _ = l + if len(m.PayloadTemplate) > 0 { + dAtA[i] = 0xa + i++ + i = encodeVarintConfig(dAtA, i, uint64(len(m.PayloadTemplate))) + i += copy(dAtA[i:], m.PayloadTemplate) + } + return i, nil +} + func encodeVarintConfig(dAtA []byte, offset int, v uint64) int { for v >= 1<<7 { dAtA[offset] = uint8(v&0x7f | 0x80) @@ -398,6 +575,27 @@ func (m *Params) Size() (n int) { n += mapEntrySize + 1 + sovConfig(uint64(mapEntrySize)) } } + l = len(m.LogGroupName) + if l > 0 { + n += 1 + l + sovConfig(uint64(l)) + } + l = len(m.LogStreamName) + if l > 0 { + n += 1 + l + sovConfig(uint64(l)) + } + if len(m.Logs) > 0 { + for k, v := range m.Logs { + _ = k + _ = v + l = 0 + if v != nil { + l = v.Size() + l += 1 + sovConfig(uint64(l)) + } + mapEntrySize := 1 + len(k) + sovConfig(uint64(len(k))) + l + n += mapEntrySize + 1 + sovConfig(uint64(mapEntrySize)) + } + } return n } @@ -410,6 +608,16 @@ func (m *Params_MetricDatum) Size() (n int) { return n } +func (m *Params_LogInfo) Size() (n int) { + var l int + _ = l + l = len(m.PayloadTemplate) + if l > 0 { + n += 1 + l + sovConfig(uint64(l)) + } + return n 
+} + func sovConfig(x uint64) (n int) { for { n++ @@ -437,9 +645,22 @@ func (this *Params) String() string { mapStringForMetricInfo += fmt.Sprintf("%v: %v,", k, this.MetricInfo[k]) } mapStringForMetricInfo += "}" + keysForLogs := make([]string, 0, len(this.Logs)) + for k, _ := range this.Logs { + keysForLogs = append(keysForLogs, k) + } + sortkeys.Strings(keysForLogs) + mapStringForLogs := "map[string]*Params_LogInfo{" + for _, k := range keysForLogs { + mapStringForLogs += fmt.Sprintf("%v: %v,", k, this.Logs[k]) + } + mapStringForLogs += "}" s := strings.Join([]string{`&Params{`, `Namespace:` + fmt.Sprintf("%v", this.Namespace) + `,`, `MetricInfo:` + mapStringForMetricInfo + `,`, + `LogGroupName:` + fmt.Sprintf("%v", this.LogGroupName) + `,`, + `LogStreamName:` + fmt.Sprintf("%v", this.LogStreamName) + `,`, + `Logs:` + mapStringForLogs + `,`, `}`, }, "") return s @@ -454,6 +675,16 @@ func (this *Params_MetricDatum) String() string { }, "") return s } +func (this *Params_LogInfo) String() string { + if this == nil { + return "nil" + } + s := strings.Join([]string{`&Params_LogInfo{`, + `PayloadTemplate:` + fmt.Sprintf("%v", this.PayloadTemplate) + `,`, + `}`, + }, "") + return s +} func valueToStringConfig(v interface{}) string { rv := reflect.ValueOf(v) if rv.IsNil() { @@ -643,6 +874,187 @@ func (m *Params) Unmarshal(dAtA []byte) error { } m.MetricInfo[mapkey] = mapvalue iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field LogGroupName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowConfig + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthConfig + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.LogGroupName 
= string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field LogStreamName", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowConfig + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthConfig + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.LogStreamName = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Logs", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowConfig + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthConfig + } + postIndex := iNdEx + msglen + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Logs == nil { + m.Logs = make(map[string]*Params_LogInfo) + } + var mapkey string + var mapvalue *Params_LogInfo + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowConfig + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowConfig + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := 
int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthConfig + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var mapmsglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowConfig + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + mapmsglen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if mapmsglen < 0 { + return ErrInvalidLengthConfig + } + postmsgIndex := iNdEx + mapmsglen + if mapmsglen < 0 { + return ErrInvalidLengthConfig + } + if postmsgIndex > l { + return io.ErrUnexpectedEOF + } + mapvalue = &Params_LogInfo{} + if err := mapvalue.Unmarshal(dAtA[iNdEx:postmsgIndex]); err != nil { + return err + } + iNdEx = postmsgIndex + } else { + iNdEx = entryPreIndex + skippy, err := skipConfig(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthConfig + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Logs[mapkey] = mapvalue + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipConfig(dAtA[iNdEx:]) @@ -733,6 +1145,85 @@ func (m *Params_MetricDatum) Unmarshal(dAtA []byte) error { } return nil } +func (m *Params_LogInfo) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowConfig + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: LogInfo: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: 
LogInfo: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field PayloadTemplate", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowConfig + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthConfig + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.PayloadTemplate = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipConfig(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthConfig + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} func skipConfig(dAtA []byte) (n int, err error) { l := len(dAtA) iNdEx := 0 @@ -841,37 +1332,44 @@ var ( func init() { proto.RegisterFile("mixer/adapter/cloudwatch/config/config.proto", fileDescriptorConfig) } var fileDescriptorConfig = []byte{ - // 502 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x93, 0x3d, 0x6f, 0x13, 0x31, - 0x18, 0xc7, 0xe3, 0xbc, 0x35, 0x79, 0x2e, 0x6d, 0x8c, 0x5b, 0x20, 0x8d, 0x90, 0x15, 0x75, 0xca, - 0x00, 0x17, 0x11, 0x18, 0x10, 0x63, 0xca, 0x8b, 0x10, 0x0a, 0xaa, 0x02, 0x2c, 0x2c, 0x95, 0x7b, - 0x71, 0x8a, 0xc5, 0x9d, 0x1d, 0xdd, 0x39, 0x40, 0x36, 0x26, 0x66, 0x76, 0xbe, 0x00, 0xdf, 0x83, - 0x85, 0xb1, 0x23, 0x23, 0x39, 0x16, 0xc6, 0x7e, 0x04, 0xf4, 0xf8, 0xe2, 0x14, 0x21, 0x21, 0xd1, - 0x29, 0xfe, 0xfd, 0x6c, 0xff, 0xfd, 0x7f, 0xa2, 0x04, 0x6e, 0x26, 0xea, 0xbd, 0x4c, 0x07, 0x62, - 0x2a, 0xe6, 0x56, 0xa6, 0x83, 0x28, 0x36, 0x8b, 0xe9, 0x3b, 0x61, 0xa3, 
0xd7, 0x83, 0xc8, 0xe8, - 0x99, 0x3a, 0x5d, 0x7f, 0x84, 0xf3, 0xd4, 0x58, 0xc3, 0xf6, 0xd7, 0xe7, 0xc2, 0x8b, 0x73, 0x61, - 0x71, 0xe0, 0xe0, 0x63, 0x1d, 0xea, 0x47, 0x22, 0x15, 0x49, 0xc6, 0x6e, 0x40, 0x53, 0x8b, 0x44, - 0x66, 0x73, 0x11, 0xc9, 0x0e, 0xe9, 0x91, 0x7e, 0x73, 0x72, 0x21, 0xd8, 0x04, 0x82, 0x44, 0xda, - 0x54, 0x45, 0xc7, 0x4a, 0xcf, 0x4c, 0xa7, 0xdc, 0xab, 0xf4, 0x83, 0xe1, 0xed, 0xf0, 0x9f, 0xc9, - 0x61, 0x91, 0x1a, 0x8e, 0xdd, 0xa5, 0x27, 0x7a, 0x66, 0x1e, 0x6a, 0x9b, 0x2e, 0x27, 0x90, 0x6c, - 0x44, 0x37, 0x86, 0xf6, 0x5f, 0xdb, 0x8c, 0x42, 0xe5, 0x8d, 0x5c, 0xae, 0x9f, 0xc7, 0x25, 0x3b, - 0x84, 0xda, 0x5b, 0x11, 0x2f, 0x64, 0xa7, 0xdc, 0x23, 0xfd, 0x60, 0x78, 0xeb, 0x7f, 0x9f, 0x7c, - 0x20, 0xec, 0x22, 0x99, 0x14, 0x77, 0xef, 0x97, 0xef, 0x91, 0xee, 0xe7, 0x2a, 0x04, 0x7f, 0x6c, - 0xb1, 0x47, 0x50, 0x5d, 0x68, 0x65, 0x3b, 0x95, 0x1e, 0xe9, 0xef, 0x0c, 0x87, 0x97, 0xca, 0x0d, - 0x5f, 0x6a, 0x65, 0x27, 0xee, 0xfe, 0xc1, 0xd7, 0x0a, 0x54, 0x11, 0x59, 0x03, 0xaa, 0xcf, 0x8c, - 0x96, 0xb4, 0xc4, 0x02, 0xd8, 0x7a, 0x2e, 0x23, 0xa3, 0xa7, 0x19, 0x25, 0x8c, 0x42, 0x6b, 0xac, - 0xa2, 0xd4, 0x64, 0x6b, 0x53, 0x2e, 0x4c, 0x1c, 0x2b, 0x6f, 0x2a, 0xac, 0x09, 0xb5, 0x43, 0xb3, - 0xd0, 0x96, 0x56, 0x71, 0x39, 0x5a, 0x5a, 0x99, 0xd1, 0x1a, 0xdb, 0x86, 0xe6, 0x53, 0x15, 0x9b, - 0x13, 0x87, 0x75, 0xc4, 0xb1, 0x3c, 0x15, 0x05, 0x6e, 0x21, 0x3e, 0x56, 0x1e, 0x1b, 0x88, 0x2f, - 0x64, 0xba, 0xc6, 0x26, 0x96, 0x19, 0x29, 0x9b, 0x51, 0x60, 0x2d, 0x68, 0xb8, 0x14, 0xa4, 0x00, - 0xc9, 0x85, 0x20, 0xb5, 0x90, 0x5c, 0x06, 0xd2, 0x36, 0x92, 0x8b, 0x40, 0xda, 0xc1, 0x21, 0x8e, - 0x64, 0x1a, 0x49, 0x6d, 0x69, 0x1b, 0x2b, 0xbb, 0x56, 0xc7, 0xc5, 0x5c, 0x94, 0xb2, 0x3d, 0xa0, - 0x9b, 0x72, 0xde, 0x5e, 0x41, 0xbb, 0xe9, 0xe8, 0x2d, 0x43, 0xbb, 0xa9, 0xea, 0xed, 0x2e, 0xda, - 0x4d, 0x63, 0x6f, 0xf7, 0x58, 0x1b, 0x02, 0x2c, 0xee, 0xc5, 0x55, 0xb6, 0x0b, 0x6d, 0xdf, 0xdf, - 0xcb, 0x6b, 0x28, 0xfd, 0x18, 0x5e, 0x5e, 0x47, 0xe9, 0xa7, 0xf1, 0xb2, 0x83, 0xd2, 0x0f, 0xe5, - 0xe5, 0x3e, 
0x8e, 0xe3, 0xbe, 0x6f, 0x6f, 0xba, 0xa3, 0xbb, 0x67, 0x2b, 0x5e, 0xfa, 0xbe, 0xe2, - 0xa5, 0xf3, 0x15, 0x27, 0x1f, 0x72, 0x4e, 0xbe, 0xe4, 0x9c, 0x7c, 0xcb, 0x39, 0x39, 0xcb, 0x39, - 0xf9, 0x91, 0x73, 0xf2, 0x2b, 0xe7, 0xa5, 0xf3, 0x9c, 0x93, 0x4f, 0x3f, 0x79, 0xe9, 0x55, 0xbd, - 0xf8, 0x65, 0x9c, 0xd4, 0xdd, 0x1f, 0xec, 0xce, 0xef, 0x00, 0x00, 0x00, 0xff, 0xff, 0x14, 0xcf, - 0x8d, 0x35, 0x90, 0x03, 0x00, 0x00, + // 620 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x94, 0xc1, 0x6e, 0xd3, 0x4c, + 0x10, 0xc7, 0xb3, 0x89, 0x93, 0x26, 0xe3, 0xb4, 0xf1, 0xb7, 0xed, 0x07, 0x6e, 0x84, 0xac, 0xa8, + 0x42, 0x28, 0x15, 0xe0, 0x8a, 0xd0, 0x03, 0xe2, 0x52, 0xa9, 0x05, 0x2a, 0x04, 0xad, 0xaa, 0xb4, + 0x5c, 0xb8, 0x44, 0x5b, 0x67, 0x6b, 0x2c, 0x6c, 0x6f, 0x64, 0x6f, 0x80, 0xdc, 0x78, 0x04, 0xee, + 0xbc, 0x00, 0x37, 0x1e, 0x82, 0x0b, 0xc7, 0x1e, 0x39, 0x12, 0x73, 0xe1, 0xd8, 0x47, 0x40, 0xb3, + 0xf6, 0xba, 0x05, 0x81, 0x28, 0xa7, 0xf8, 0xf7, 0xcb, 0xec, 0xf8, 0x3f, 0xab, 0x91, 0xe1, 0x56, + 0x14, 0xbc, 0xe1, 0xc9, 0x06, 0x1b, 0xb3, 0x89, 0xe4, 0xc9, 0x86, 0x17, 0x8a, 0xe9, 0xf8, 0x35, + 0x93, 0xde, 0x8b, 0x0d, 0x4f, 0xc4, 0x27, 0x81, 0x5f, 0xfc, 0xb8, 0x93, 0x44, 0x48, 0x41, 0x57, + 0x8b, 0x3a, 0xf7, 0xbc, 0xce, 0xcd, 0x0b, 0xd6, 0x3e, 0x36, 0xa1, 0x71, 0xc0, 0x12, 0x16, 0xa5, + 0xf4, 0x1a, 0xb4, 0x62, 0x16, 0xf1, 0x74, 0xc2, 0x3c, 0x6e, 0x93, 0x1e, 0xe9, 0xb7, 0x86, 0xe7, + 0x82, 0x0e, 0xc1, 0x8c, 0xb8, 0x4c, 0x02, 0x6f, 0x14, 0xc4, 0x27, 0xc2, 0xae, 0xf6, 0x6a, 0x7d, + 0x73, 0x70, 0xc7, 0xfd, 0x63, 0x67, 0x37, 0xef, 0xea, 0xee, 0xa9, 0x43, 0x8f, 0xe3, 0x13, 0xf1, + 0x30, 0x96, 0xc9, 0x6c, 0x08, 0x51, 0x29, 0xe8, 0x75, 0x58, 0x0a, 0x85, 0x3f, 0xf2, 0x13, 0x31, + 0x9d, 0x8c, 0xf0, 0x55, 0xb6, 0xa1, 0x5e, 0xdb, 0x0e, 0x85, 0xbf, 0x8b, 0x72, 0x9f, 0x45, 0x9c, + 0xde, 0x80, 0x0e, 0x56, 0xa5, 0x32, 0xe1, 0x2c, 0xca, 0xcb, 0xea, 0xaa, 0x6c, 0x31, 0x14, 0xfe, + 0xa1, 0xb2, 0xaa, 0x6e, 0x0b, 0x8c, 0x50, 0xf8, 0xa9, 0xdd, 
0x50, 0xd1, 0x6e, 0xfe, 0x3d, 0xda, + 0x53, 0xe1, 0xa7, 0x79, 0x28, 0x75, 0xb0, 0x1b, 0x42, 0xe7, 0x97, 0xb4, 0xd4, 0x82, 0xda, 0x4b, + 0x3e, 0x2b, 0x6e, 0x03, 0x1f, 0xe9, 0x0e, 0xd4, 0x5f, 0xb1, 0x70, 0xca, 0xed, 0x6a, 0x8f, 0xf4, + 0xcd, 0xc1, 0xed, 0xcb, 0xde, 0xc0, 0x03, 0x26, 0xa7, 0xd1, 0x30, 0x3f, 0x7b, 0xbf, 0x7a, 0x8f, + 0x74, 0xdf, 0x1b, 0x60, 0x5e, 0xf8, 0x8b, 0x3e, 0x02, 0x63, 0x1a, 0x07, 0xd2, 0xae, 0xf5, 0x48, + 0x7f, 0x69, 0x30, 0xf8, 0xa7, 0xbe, 0xee, 0xb3, 0x38, 0x90, 0x43, 0x75, 0x7e, 0xed, 0x53, 0x0d, + 0x0c, 0x44, 0xda, 0x04, 0x63, 0x5f, 0xc4, 0xdc, 0xaa, 0x50, 0x13, 0x16, 0x0e, 0xb9, 0x27, 0xe2, + 0x71, 0x6a, 0x11, 0x6a, 0x41, 0x7b, 0x2f, 0xf0, 0x12, 0x91, 0x16, 0xa6, 0x9a, 0x9b, 0x30, 0x0c, + 0xb4, 0xa9, 0xd1, 0x16, 0xd4, 0x77, 0xc4, 0x34, 0x96, 0x96, 0x81, 0x8f, 0xdb, 0x33, 0xc9, 0x53, + 0xab, 0x4e, 0x17, 0xa1, 0xf5, 0x24, 0x08, 0xc5, 0xb1, 0xc2, 0x06, 0xe2, 0x1e, 0xf7, 0x59, 0x8e, + 0x0b, 0x88, 0xbb, 0x81, 0xc6, 0x26, 0xe2, 0x11, 0x4f, 0x0a, 0x6c, 0x61, 0x98, 0xed, 0x40, 0xa6, + 0x16, 0xd0, 0x36, 0x34, 0x55, 0x17, 0x24, 0x13, 0x49, 0x35, 0x41, 0x6a, 0x23, 0xa9, 0x1e, 0x48, + 0x8b, 0x48, 0xaa, 0x05, 0xd2, 0x12, 0x0e, 0x71, 0xc0, 0x13, 0x8f, 0xc7, 0xd2, 0xea, 0x60, 0x64, + 0x95, 0x6a, 0x94, 0xcf, 0x65, 0x59, 0x74, 0x05, 0xac, 0x32, 0x9c, 0xb6, 0xff, 0xa1, 0x2d, 0x33, + 0x6a, 0x4b, 0xd1, 0x96, 0x51, 0xb5, 0x5d, 0x46, 0x5b, 0x26, 0xd6, 0x76, 0x85, 0x76, 0xc0, 0xc4, + 0xe0, 0x5a, 0xfc, 0x4f, 0x97, 0xa1, 0xa3, 0xf3, 0x6b, 0x79, 0x05, 0xa5, 0x1e, 0x43, 0xcb, 0xab, + 0x28, 0xf5, 0x34, 0x5a, 0xda, 0x28, 0xf5, 0x50, 0x5a, 0xae, 0xe2, 0x38, 0xea, 0xbe, 0xb5, 0xe9, + 0x76, 0x8f, 0xa1, 0x55, 0xae, 0xe7, 0x6f, 0xb6, 0x70, 0xeb, 0xe7, 0x2d, 0x5c, 0xbf, 0xd4, 0xb2, + 0xe3, 0x5a, 0x5f, 0xdc, 0xc0, 0x4d, 0x58, 0x28, 0x2c, 0x5d, 0x07, 0x6b, 0xc2, 0x66, 0xa1, 0x60, + 0xe3, 0x91, 0xe4, 0xd1, 0x24, 0x64, 0x52, 0x7f, 0x02, 0x3a, 0x85, 0x3f, 0x2a, 0xf4, 0xf6, 0xe6, + 0xe9, 0xdc, 0xa9, 0x7c, 0x99, 0x3b, 0x95, 0xb3, 0xb9, 0x43, 0xde, 0x66, 0x0e, 0xf9, 0x90, 0x39, + 
0xe4, 0x73, 0xe6, 0x90, 0xd3, 0xcc, 0x21, 0x5f, 0x33, 0x87, 0x7c, 0xcf, 0x9c, 0xca, 0x59, 0xe6, + 0x90, 0x77, 0xdf, 0x9c, 0xca, 0xf3, 0x46, 0x9e, 0xe2, 0xb8, 0xa1, 0xbe, 0x44, 0x77, 0x7f, 0x04, + 0x00, 0x00, 0xff, 0xff, 0x89, 0x19, 0x94, 0x27, 0xb9, 0x04, 0x00, 0x00, } diff --git a/mixer/adapter/cloudwatch/config/config.proto b/mixer/adapter/cloudwatch/config/config.proto index b1a21df0e486..62cad6350685 100644 --- a/mixer/adapter/cloudwatch/config/config.proto +++ b/mixer/adapter/cloudwatch/config/config.proto @@ -17,16 +17,22 @@ syntax = "proto3"; // $title: CloudWatch // $description: Adapter for cloudwatch metrics. // $location: https://istio.io/docs/reference/config/policy-and-telemetry/adapters/cloudwatch.html +// $supported_templates: logentry // $supported_templates: metric // $aliases: // $ - /docs/reference/config/adapters/cloudwatch.html // The CloudWatch adapter enables Istio to deliver metrics to // [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/). +// [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) and logs to +// [Amazon CloudWatchLogs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html/). // -// To push metrics to CloudWatch using this adapter you must provide AWS credentials the AWS SDK. +// To push metrics and logs to CloudWatch using this adapter you must provide AWS credentials to the AWS SDK. // (see [AWS docs](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html)). // +// To activate the CloudWatch adapter, operators need to provide configuration for the +// [cloudwatch adapter](https://istio.io/docs/reference/config/adapters/cloudwatch.html). +// // The handler configuration must contain the same metrics as the instance configuration. // The metrics specified in both instance and handler configurations will be sent to CloudWatch. 
// @@ -78,4 +84,19 @@ message Params { // [CloudWatch docs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html) Unit unit = 3; } + + // The name of the log group in cloudwatchlogs. + string log_group_name = 4; + + // The name of the log stream in cloudwatchlogs. + string log_stream_name = 5; + + // A map of Istio logentry name to CloudWatch logentry info. + map<string, LogInfo> logs = 6; + + message LogInfo{ + // A golang text/template template that will be executed to construct the payload for this log entry. + // It will be given the full set of variables for the log to use to construct its result. + string payload_template = 1; + } } diff --git a/mixer/adapter/cloudwatch/config/config.proto_descriptor b/mixer/adapter/cloudwatch/config/config.proto_descriptor index 6be0055ed41df41874808b3a007f2da830221da7..f7d881a216315726ac066d8fab9cc4a467193b03 100644 GIT binary patch literal 5881 zcmbVQ%~l)96-MetG$lZC0R}Y2Q|9<2<_t(286Q*T^!NBgSrzLmdNtnkzh>`ut9u*t67 zkzvnv>RZR2X&G}uRqtM=VvCK%yj zL7Dc+hfRBe-LX*$Sn*itEo*t18Q;l45)Jy7Z?o}p+v`h2AcqL7e^^Z{bH|SQox>Qr znZKQ@{Ly5^%o4xP%zoF677fUq+1U6=a;wjPJXH9Ve;CXpl3UE&_q(z@#*1v?NFw2_ zu$+TSQ+uux_+i4y#~jadQ;xwW+4#2K??z>l{@ofxGAxhtX*T)T_52e>7Kq%FZ966> zh*ooN@TLK z>Q(GlxykbORPwSr;K?%%Sx&JDf2QhQ#VrP z0x?r>QsyEtQ>Rkq61k(|opLIb6E-<4Pb|M(h4=I-T>WshKpLI7Yz2PCCK4`RW#yhd z@O-=VDw3U^XGiH3P)hjCguP|`x)9$TkN$m~6}UMwCUfk2ljWwkF*C+D{%VSCzc&cn z_E{u0KD~KcG{VUBMZF)L`9a8-c;X_9p=^nMwy0rGuYjWlj7_2BGVB#YXj0 zaf3FjBugu+cbFLXebKQ8qU%SZA4*7YLviYQQph)s>_x)u3de^VuHAK{xNxI0r6rM3 zW#ZdJf`1a(*lc5aZy@~B>^5OXFqZz#qNul7t6f}NRBdHw)eqV=FLooWJ!x$3?jP;0 z!&m^Gbv+q|B9Oo9y8)a$83?-vQylvQ272~|@B?AD1BtcBCsQv1H*&k}H4*x!(S;pI zCR#48JNKj?X;Dh-7w|T_0dcasVx@j08b>Q)tA5ltT4UmQqxtaQSyMc(A0F2Cn~mKg zad0TM5B7H&&Bnn#sQaS6|E*{adU8!j#0a^NZ+Zb)4-;LApKKvmN0LH3%#Jw4D$!w2 zI_|0Kz^QJ#Z?~mr`{y#~!WYq#LB|a#harq>G2yu#H?mO(V2tuU%n17fW4Syxer{c> z{A)rqA%2^h+H-1ZBIzt;-&}>G|0j|h6dsz%d(_iG9+}Yc9oV4T5nDVmxJCysjA|sb_WaC_8sTV>
z#8uCtj#q=tOf-Gb>xXB<5M+$WgaGG+(xOgE8clVK`e7t6C7io0oWkcv!a>dAj&*I1 z+);TZ_R-E`1n5SHQaC1>y!c_C!q#`|sFd~J+UIud{AR5s&!y+1F4o(9D654s>i5<& z>Xc8bsZX{Ox#x(2vS8$VS_*4|MA;E;6wytNQtC#0J@v6lS4dK=$gVS-l2R!n4{+XG zzv}uma;z4}QyECy9qBlgge=TdOl+>o0E+btVfRpoy6z4|7c6zjs3U`cW|S#J7*Mwx zM(Uo@u!uGtg0!%cvxe&c7siQ?3vCFaguhdLh?~#z#0gF+GD_JT#WA)U7fD(SVZYb& zgD5uXI94mtUht#i)bnxC4c1}OdL%tbVg|KTN?4130y(B0I(RI&vRGK%{B~3^C1DSlMh3`u*6eMwxqsAC+(qTtLgTedO zV*{~jqXmUB+{C{^tc#ymU5r^QXK`b7>?2lWruCnRv6z@*<3!-TT%EYYrddHzvFS$@1nHH`ma$E9>ag^jdi9sEF62SiepS1O>yGVU9uQnl-~dVWv5j zhsLV8RJoc-VN$Vz#(DLW0C1X((`J*KRdWrqPE3vyn57itR*Q2fWq?{(WIIX=mTsEs zm0O~YuHU0B9ZwW-l$9pExKkVGr7~2XgZi6hbtzGiE0EMHDMdP#Z=@8|-~42at&(~p z$IZ{oXXBNR-r*%KmyR7lhw{nDLCj}U3rInMfE4J@3%{J8l*BaFKQES8iCHmC5&Xis z#L8?kWnjw})>M{*G=EVlXF%9;%fh%i1d-lbRw>H?=oSr|Lrq}Zww5wm@&Mh|#N=rY z!af2a5MOE_L<5|xZUdk%HIT`TO^atVnE-8C(^(Eco8?OxO{j)D8fYA#I~r&lpgS4} zIlOCK&d4nQbXSXK0ie5c^BGMz2w!QS34p%RKobCcrGan?zt%t&KwoPh3!twx&?GnR zX`o4f?rES&fbMA^#*J^ZKrn#5(JF!g^i3%ny&^a2nx-N^b*r2SO%b5_Y&H-W*tay$ z6hK=VXbPY$4K&S-Z4ERH(6$De254IY&2VE!1I+-mqk(1s+R;EIZtPm~84pST?P^tC z0%#Wxw^6~CxpCjBWI%ZFf|D)SGC=nimNKAA+<0JJ&44Ze^uSuoasYbp(Q*dFx$#f~ zaey9bAP&$&4K&M*h6b7isG)&o0cvQVIc_}CKyv^+(m-q)+<2^=+}$l8fb|d2O4MzpaTuG1kiy7s&M0}RuL6|o@y0Q0q7|z;%~Y5-lki*S(vN*Ij3eY z`Ye;a4db3u6?-(qYRuISL z$CcVUwZ`B5en71Owa4a*tBG0zYLAC%sb3uzE9n|(TE5C2Dz)fSFU^(8oww(Tbh>-j zWHDURW(yN0yflwLPSlu)`Ac&#rC`F#rR9`@2`|MpP&8W@+Ev9G7+*gxYs zG^h@p!b;`uIhfe?>6wj@{7p2RsD}eO9ujsm9QWlL>8MW!82Rz2Km_{^=6^gVl8-sb zqX^?aM3__rfHdi`Yd@$;t- zM<3mL_T=bpGOl)#WA)ZfehS1au>L7hG&Lyz$6XFu5Tl zJw1r9VIxx^9Mmc+ zl>=I3C?vV2O4boVpf%O)^?s{|Kx8{@$rM3tGF2eCWh+qt+G3~xw8c=7!RRBWpX)E-k6s6D2t zB==dVD$qVdRiJ%_Y9tRBssSAsbnV|Htw9|yRVUeIT~UW>v#zKEwb2!y-QEZ{1G 0 { + input.SetSequenceToken(nextSequenceToken) + } + + _, err := h.cloudwatchlogs.PutLogEvents(&input) + if err != nil { + h.env.Logger().Errorf("could not put logentry data into cloudwatchlogs: %v. 
%v", input, err) + return err + } + return nil +} + +func getNextSequenceToken(h *handler) (string, error) { + input := cloudwatchlogs.DescribeLogStreamsInput{ + LogGroupName: aws.String(h.cfg.LogGroupName), + LogStreamNamePrefix: aws.String(h.cfg.LogStreamName), + } + + output, err := h.cloudwatchlogs.DescribeLogStreams(&input) + if err != nil { + h.env.Logger().Errorf("could not retrieve Log Stream info from cloudwatch: %v. %v", input, err) + return "", err + } + + for _, logStream := range output.LogStreams { + if strings.Compare(*logStream.LogStreamName, h.cfg.LogStreamName) == 0 { + if logStream.UploadSequenceToken != nil { + return *logStream.UploadSequenceToken, nil + } + } + } + return "", nil +} diff --git a/mixer/adapter/cloudwatch/logHandler_test.go b/mixer/adapter/cloudwatch/logHandler_test.go new file mode 100644 index 000000000000..adeccb087ab9 --- /dev/null +++ b/mixer/adapter/cloudwatch/logHandler_test.go @@ -0,0 +1,355 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License.
+ +package cloudwatch + +import ( + "errors" + "html/template" + "reflect" + "strconv" + "strings" + "testing" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/cloudwatchlogs" + "github.com/aws/aws-sdk-go/service/cloudwatchlogs/cloudwatchlogsiface" + + "istio.io/istio/mixer/adapter/cloudwatch/config" + "istio.io/istio/mixer/pkg/adapter" + "istio.io/istio/mixer/pkg/adapter/test" + "istio.io/istio/mixer/template/logentry" +) + +type mockLogsClient struct { + cloudwatchlogsiface.CloudWatchLogsAPI + resp *cloudwatchlogs.DescribeLogStreamsOutput + describeError error +} + +func (m *mockLogsClient) PutLogEvents(input *cloudwatchlogs.PutLogEventsInput) (*cloudwatchlogs.PutLogEventsOutput, error) { + return &cloudwatchlogs.PutLogEventsOutput{}, nil +} + +func (m *mockLogsClient) DescribeLogStreams(input *cloudwatchlogs.DescribeLogStreamsInput) (*cloudwatchlogs.DescribeLogStreamsOutput, error) { + return m.resp, m.describeError +} + +type failLogsClient struct { + cloudwatchlogsiface.CloudWatchLogsAPI + resp *cloudwatchlogs.DescribeLogStreamsOutput + describeError error +} + +func (m *failLogsClient) PutLogEvents(input *cloudwatchlogs.PutLogEventsInput) (*cloudwatchlogs.PutLogEventsOutput, error) { + return nil, errors.New("put logentry data failed") +} + +func (m *failLogsClient) DescribeLogStreams(input *cloudwatchlogs.DescribeLogStreamsInput) (*cloudwatchlogs.DescribeLogStreamsOutput, error) { + return m.resp, m.describeError +} + +func generateLogStreamOutput() *cloudwatchlogs.DescribeLogStreamsOutput { + logstreams := make([]*cloudwatchlogs.LogStream, 0, 1) + + stream := cloudwatchlogs.LogStream{ + LogStreamName: aws.String("TestLogStream"), + UploadSequenceToken: aws.String("49579643037721145729486712515534281748446571659784111634"), + } + + logstreams = append(logstreams, &stream) + + return &cloudwatchlogs.DescribeLogStreamsOutput{ + LogStreams: logstreams, + } +} + +func generateNilLogStreamOutput() 
*cloudwatchlogs.DescribeLogStreamsOutput { + return &cloudwatchlogs.DescribeLogStreamsOutput{ + LogStreams: nil, + } +} + +func TestPutLogEntryData(t *testing.T) { + cfg := &config.Params{ + LogGroupName: "TestLogGroup", + LogStreamName: "TestLogStream", + } + + env := test.NewEnv(t) + + logEntryTemplates := make(map[string]*template.Template) + logEntryTemplates["accesslog"], _ = template.New("accesslog").Parse(`{{or (.sourceIp) "-"}} - {{or (.sourceUser) "-"}}`) + + cases := []struct { + name string + logEntryData []*cloudwatchlogs.InputLogEvent + expectedErrString string + handler adapter.Handler + nextSequenceToken string + }{ + { + "testValidPutLogEntryData", + []*cloudwatchlogs.InputLogEvent{ + {Message: aws.String("testMessage2"), Timestamp: aws.Int64(1)}, + }, + "", + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &mockLogsClient{}), + "49579643037721145729486712515534281748446571659784111634", + }, + { + "testEmptyLogEntryData", + []*cloudwatchlogs.InputLogEvent{}, + "put logentry data", + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &failLogsClient{}), + "49579643037721145729486712515534281748446571659784111634", + }, + { + "testFailLogsClient", + []*cloudwatchlogs.InputLogEvent{ + {Message: aws.String("testMessage2"), Timestamp: aws.Int64(1)}, + }, + "put logentry data", + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &failLogsClient{}), + "49579643037721145729486712515534281748446571659784111634", + }, + { + "testNoSequenceToken", + []*cloudwatchlogs.InputLogEvent{ + {Message: aws.String("testMessage2"), Timestamp: aws.Int64(1)}, + }, + "put logentry data", + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &failLogsClient{}), + "", + }, + } + + for _, c := range cases { + h, ok := c.handler.(*handler) + if !ok { + t.Errorf("test case: %s has the wrong type of handler.", c.name) + } + + err := h.putLogEntryData(c.logEntryData, 
c.nextSequenceToken) + if len(c.expectedErrString) > 0 { + if err == nil { + t.Errorf("putLogEntryData() for test case: %s did not produce expected error: %s", c.name, c.expectedErrString) + + } else if !strings.Contains(strings.ToLower(err.Error()), c.expectedErrString) { + t.Errorf("putLogEntryData() for test case: %s returned error message %s, wanted %s", c.name, err, c.expectedErrString) + } + } else if err != nil { + t.Errorf("putLogEntryData() for test case: %s generated unexpected error: %v", c.name, err) + } + } +} + +func TestSendLogEntriesToCloudWatch(t *testing.T) { + env := test.NewEnv(t) + cfg := &config.Params{ + LogGroupName: "TestLogGroup", + LogStreamName: "TestLogStream", + } + logEntryTemplates := make(map[string]*template.Template) + logEntryTemplates["accesslog"], _ = template.New("accesslog").Parse(`{{or (.sourceIp) "-"}} - {{or (.sourceUser) "-"}}`) + + h := newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &mockLogsClient{resp: generateLogStreamOutput(), describeError: nil}) + + cases := []struct { + name string + logEntryData []*cloudwatchlogs.InputLogEvent + expectedCloudWatchCallCount int + Handler adapter.Handler + }{ + {"testNilLogEntryData", []*cloudwatchlogs.InputLogEvent{}, 0, h}, + {"testMockLogsClient", generateTestLogEntryData(1), 1, h}, + {"testBatchCountLimit", generateTestLogEntryData(10001), 2, h}, + { + "testLogstreamDescribeFailure", + generateTestLogEntryData(0), + 0, + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &mockLogsClient{ + resp: generateLogStreamOutput(), describeError: errors.New("describe logstream failed"), + }), + }, + { + "testNilOutputStream", + generateTestLogEntryData(0), + 0, + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &mockLogsClient{resp: generateNilLogStreamOutput(), describeError: nil}), + }, + { + "testFailLogsClient", + generateTestLogEntryData(1), + 0, + newHandler(nil, nil, logEntryTemplates, env, cfg, 
&mockCloudWatchClient{}, &failLogsClient{resp: generateLogStreamOutput(), describeError: nil}), + }, + } + + for _, c := range cases { + h, ok := c.Handler.(*handler) + if !ok { + t.Errorf("test case: %s has the wrong type of handler.", c.name) + } + + count, _ := h.sendLogEntriesToCloudWatch(c.logEntryData) + if count != c.expectedCloudWatchCallCount { + t.Errorf("sendLogEntriesToCloudWatch() for test case: %s resulted in %d calls; wanted %d", c.name, count, c.expectedCloudWatchCallCount) + } + } +} + +func generateTestLogEntryData(count int) []*cloudwatchlogs.InputLogEvent { + logEntryData := make([]*cloudwatchlogs.InputLogEvent, 0, count) + + for i := 0; i < count; i++ { + logEntryData = append(logEntryData, &cloudwatchlogs.InputLogEvent{ + Message: aws.String("testMessage" + strconv.Itoa(i)), + Timestamp: aws.Int64(1), + }) + } + + return logEntryData +} + +func TestGenerateLogEntryData(t *testing.T) { + timestp := time.Now() + env := test.NewEnv(t) + tmpl := `{{or (.sourceIp) "-"}} - {{or (.sourceUser) "-"}}` + cfg := &config.Params{ + LogGroupName: "testLogGroup", + LogStreamName: "testLogStream", + Logs: map[string]*config.Params_LogInfo{ + "accesslog": { + PayloadTemplate: tmpl, + }, + }, + } + emptyPayloadCfg := &config.Params{ + LogGroupName: "testLogGroup", + LogStreamName: "testLogStream", + Logs: map[string]*config.Params_LogInfo{ + "accesslog": { + PayloadTemplate: "", + }, + }, + } + logEntryTemplates := make(map[string]*template.Template) + logEntryTemplates["accesslog"], _ = template.New("accesslog").Parse(tmpl) + + emptyTemplateMap := make(map[string]*template.Template) + emptyTemplateMap["accesslog"], _ = template.New("accesslog").Parse("") + + cases := []struct { + name string + handler adapter.Handler + insts []*logentry.Instance + expectedLogEntryData []*cloudwatchlogs.InputLogEvent + }{ + // empty instances + { + "testEmptyInstance", + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &mockLogsClient{resp: 
generateLogStreamOutput(), describeError: nil}), + []*logentry.Instance{}, + []*cloudwatchlogs.InputLogEvent{}, + }, + // non empty instances + { + "testNonEmptyInstance", + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &mockLogsClient{resp: generateLogStreamOutput(), describeError: nil}), + []*logentry.Instance{ + { + Timestamp: timestp, + Name: "accesslog", + Severity: "Default", + Variables: map[string]interface{}{ + "sourceUser": "abc", + }, + }, + }, + []*cloudwatchlogs.InputLogEvent{ + { + Message: aws.String("- - abc"), + Timestamp: aws.Int64(timestp.UnixNano() / int64(time.Millisecond)), + }, + }, + }, + { + "testMultipleVariables", + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &mockLogsClient{resp: generateLogStreamOutput(), describeError: nil}), + []*logentry.Instance{ + { + Timestamp: timestp, + Name: "accesslog", + Severity: "Default", + Variables: map[string]interface{}{ + "sourceUser": "abc", + "sourceIp": "10.0.0.0", + }, + }, + }, + []*cloudwatchlogs.InputLogEvent{ + { + Message: aws.String("10.0.0.0 - abc"), + Timestamp: aws.Int64(timestp.UnixNano() / int64(time.Millisecond)), + }, + }, + }, + // payload template not provided explicitly + { + "testEmptyTemplate", + newHandler(nil, nil, emptyTemplateMap, env, emptyPayloadCfg, &mockCloudWatchClient{}, &mockLogsClient{resp: generateLogStreamOutput(), describeError: nil}), + []*logentry.Instance{ + { + Timestamp: timestp, + Name: "accesslog", + Severity: "Default", + Variables: map[string]interface{}{ + "sourceUser": "abc", + "sourceIp": "10.0.0.0", + }, + }, + }, + []*cloudwatchlogs.InputLogEvent{ + { + Message: aws.String(""), + Timestamp: aws.Int64(timestp.UnixNano() / int64(time.Millisecond)), + }, + }, + }, + } + + for _, c := range cases { + h, ok := c.handler.(*handler) + if !ok { + t.Errorf("test case: %s has the wrong type of handler.", c.name) + } + + ld := h.generateLogEntryData(c.insts) + + if len(c.expectedLogEntryData) != len(ld) 
{ + t.Errorf("generateLogEntryData() for test case: %s generated %d items; wanted %d", c.name, len(ld), len(c.expectedLogEntryData)) + } + + for i := 0; i < len(c.expectedLogEntryData); i++ { + expectedLD := c.expectedLogEntryData[i] + actualLD := ld[i] + + if !reflect.DeepEqual(expectedLD, actualLD) { + t.Errorf("generateLogEntryData() for test case: %s generated %v; wanted %v", c.name, actualLD, expectedLD) + } + } + } +} diff --git a/mixer/adapter/cloudwatch/handler.go b/mixer/adapter/cloudwatch/metricHandler.go similarity index 100% rename from mixer/adapter/cloudwatch/handler.go rename to mixer/adapter/cloudwatch/metricHandler.go diff --git a/mixer/adapter/cloudwatch/handler_test.go b/mixer/adapter/cloudwatch/metricHandler_test.go similarity index 88% rename from mixer/adapter/cloudwatch/handler_test.go rename to mixer/adapter/cloudwatch/metricHandler_test.go index 2339800f2ded..d1a8466f3b49 100644 --- a/mixer/adapter/cloudwatch/handler_test.go +++ b/mixer/adapter/cloudwatch/metricHandler_test.go @@ -16,6 +16,7 @@ package cloudwatch import ( "errors" + "html/template" "reflect" "strconv" "strings" @@ -55,6 +56,9 @@ func TestPutMetricData(t *testing.T) { env := test.NewEnv(t) + logEntryTemplates := make(map[string]*template.Template) + logEntryTemplates["inst"], _ = template.New("inst").Parse(`{{or (.sourceIp) "-"}} - {{or (.sourceUser) "-"}}`) + cases := []struct { metricData []*cloudwatch.MetricDatum expectedErrString string @@ -65,14 +69,14 @@ func TestPutMetricData(t *testing.T) { {MetricName: aws.String("testMetric"), Value: aws.Float64(1)}, }, "put metric data", - newHandler(nil, env, cfg, &mockFailCloudWatchClient{}), + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockFailCloudWatchClient{}, &mockLogsClient{}), }, { []*cloudwatch.MetricDatum{ {MetricName: aws.String("testMetric"), Value: aws.Float64(1)}, }, "", - newHandler(nil, env, cfg, &mockCloudWatchClient{}), + newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, 
&mockLogsClient{}), }, } @@ -216,8 +220,10 @@ func TestSendMetricsToCloudWatch(t *testing.T) { cfg := &config.Params{ Namespace: "istio-mixer-cloudwatch", } + logEntryTemplates := make(map[string]*template.Template) + logEntryTemplates["inst"], _ = template.New("inst").Parse(`{{or (.sourceIp) "-"}} - {{or (.sourceUser) "-"}}`) - h := newHandler(nil, env, cfg, &mockCloudWatchClient{}) + h := newHandler(nil, nil, logEntryTemplates, env, cfg, &mockCloudWatchClient{}, &mockLogsClient{}) cases := []struct { metricData []*cloudwatch.MetricDatum @@ -259,6 +265,8 @@ func generateTestMetricData(count int) []*cloudwatch.MetricDatum { func TestGenerateMetricData(t *testing.T) { env := test.NewEnv(t) + logEntryTemplates := make(map[string]*template.Template) + logEntryTemplates["inst"], _ = template.New("inst").Parse(`{{or (.sourceIp) "-"}} - {{or (.sourceUser) "-"}}`) cases := []struct { handler adapter.Handler insts []*metric.Instance @@ -266,15 +274,15 @@ func TestGenerateMetricData(t *testing.T) { }{ // empty instances { - newHandler(nil, env, + newHandler(nil, nil, logEntryTemplates, env, generateCfgWithUnit(config.Count), - &mockCloudWatchClient{}), + &mockCloudWatchClient{}, &mockLogsClient{}), []*metric.Instance{}, []*cloudwatch.MetricDatum{}}, // timestamp value { - newHandler(nil, env, + newHandler(nil, nil, logEntryTemplates, env, generateCfgWithNameAndUnit("requestduration", config.Milliseconds), - &mockCloudWatchClient{}), + &mockCloudWatchClient{}, &mockLogsClient{}), []*metric.Instance{ { Value: 1 * time.Minute, @@ -292,9 +300,9 @@ func TestGenerateMetricData(t *testing.T) { }, // count value and dimensions { - newHandler(nil, env, + newHandler(nil, nil, logEntryTemplates, env, generateCfgWithNameAndUnit("requestcount", config.Count), - &mockCloudWatchClient{}), + &mockCloudWatchClient{}, &mockLogsClient{}), []*metric.Instance{ { Value: "1", diff --git a/mixer/adapter/cloudwatch/operatorconfig/cloudwatch.yaml 
b/mixer/adapter/cloudwatch/operatorconfig/cloudwatch.yaml index a27207a0be9f..f28849449681 100644 --- a/mixer/adapter/cloudwatch/operatorconfig/cloudwatch.yaml +++ b/mixer/adapter/cloudwatch/operatorconfig/cloudwatch.yaml @@ -14,7 +14,31 @@ spec: response_code: response.code | 200 monitored_resource_type: '"UNSPECIFIED"' --- -# handler configuration for adapter 'metric' +# instance configuration for template 'logentry' +apiVersion: "config.istio.io/v1alpha2" +kind: logentry +metadata: + name: accesslog + namespace: istio-system +spec: + severity: '"Default"' + timestamp: request.time + variables: + source_service: source.labels["app"] | "unknown" + source_ip: source.ip | ip("0.0.0.0") + destination_service: destination.labels["app"] | "unknown" + destination_ip: destination.ip | ip("0.0.0.0") + source_user: source.user | "" + method: request.method | "" + url: request.path | "" + protocol: request.scheme | "http" + response_code: response.code | 0 + response_size: response.size | 0 + request_size: request.size | 0 + latency: response.duration | "0ms" + monitored_resource_type: '"UNSPECIFIED"' +--- +# handler configuration for adapter apiVersion: "config.istio.io/v1alpha2" kind: cloudwatch metadata: @@ -25,6 +49,11 @@ spec: metricInfo: requestcount.metric.istio-system: unit: Count + logGroupName: "TestLogGroup" + logStreamName: "TestLogStream" + logs: + accesslog.logentry.istio-system: + payloadTemplate: '{{or (.source_ip) "-"}} - {{or (.source_user) "-"}} [{{or (.timestamp.Format "2006-01-02T15:04:05Z07:00") "-"}}] "{{or (.method) "-"}} {{or (.url) "-"}} {{or (.protocol) "-"}}" {{or (.response_code) "-"}} {{or (.response_size) "-"}}' --- # rule to dispatch to your handler apiVersion: "config.istio.io/v1alpha2" @@ -38,3 +67,4 @@ spec: - handler: hndlrTest.cloudwatch instances: - requestcount.metric + - accesslog.logentry \ No newline at end of file diff --git a/mixer/pkg/config/mcp/backend.go b/mixer/pkg/config/mcp/backend.go index b17b509571c9..c86a8f37ce45 100644 
--- a/mixer/pkg/config/mcp/backend.go +++ b/mixer/pkg/config/mcp/backend.go @@ -24,18 +24,18 @@ import ( "sync" "time" - "github.com/gogo/protobuf/proto" "google.golang.org/grpc" "k8s.io/apimachinery/pkg/runtime/schema" mcp "istio.io/api/mcp/v1alpha1" - "istio.io/istio/galley/pkg/kube/converter/legacy" "istio.io/istio/galley/pkg/metadata/kube" "istio.io/istio/mixer/pkg/config/store" "istio.io/istio/pkg/log" "istio.io/istio/pkg/mcp/client" "istio.io/istio/pkg/mcp/configz" "istio.io/istio/pkg/mcp/creds" + "istio.io/istio/pkg/mcp/monitoring" + "istio.io/istio/pkg/mcp/sink" "istio.io/istio/pkg/mcp/snapshot" "istio.io/istio/pkg/probe" ) @@ -84,7 +84,7 @@ type updateHookFn func() // backend is StoreBackend implementation using MCP. type backend struct { - // mapping of CRD <> typeURLs. + // mapping of CRD <> collections. mapping *mapping // Use insecure communication for gRPC. @@ -99,6 +99,8 @@ type backend struct { // The cancellation function that is used to cancel gRPC/MCP operations. cancel context.CancelFunc + mcpReporter monitoring.Reporter + // The in-memory state, where resources are kept for out-of-band get and list calls. state *state @@ -117,7 +119,7 @@ type backend struct { var _ store.Backend = &backend{} var _ probe.SupportsProbe = &backend{} -var _ client.Updater = &backend{} +var _ sink.Updater = &backend{} // state is the in-memory cache. type state struct { @@ -125,7 +127,7 @@ type state struct { // items stored by kind, then by key. items map[string]map[store.Key]*store.BackEndResource - synced map[string]bool // by kind + synced map[string]bool // by collection } // Init implements store.Backend.Init. 
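The backend.go hunk above replaces the CRD-to-typeURL mapping with a CRD-to-collection mapping. As a standalone sketch of that bidirectional lookup (the collection string below is illustrative, not one of Istio's real collection names):

```go
package main

import "fmt"

// mapping keeps a bidirectional association between Mixer CRD kinds and
// MCP collection names, mirroring the mapping type in backend.go.
type mapping struct {
	kindsToCollections map[string]string
	collectionsToKinds map[string]string
}

// newMapping builds both directions of the lookup from kind->collection pairs.
func newMapping(pairs map[string]string) *mapping {
	m := &mapping{
		kindsToCollections: map[string]string{},
		collectionsToKinds: map[string]string{},
	}
	for kind, collection := range pairs {
		m.kindsToCollections[kind] = collection
		m.collectionsToKinds[collection] = kind
	}
	return m
}

// collections lists every collection that should be requested from the MCP server.
func (m *mapping) collections() []string {
	out := make([]string, 0, len(m.collectionsToKinds))
	for c := range m.collectionsToKinds {
		out = append(out, c)
	}
	return out
}

// kind resolves a collection name back to its CRD kind.
func (m *mapping) kind(collection string) string {
	return m.collectionsToKinds[collection]
}

func main() {
	m := newMapping(map[string]string{"rule": "istio/policy/v1beta1/rules"})
	fmt.Println(m.kind("istio/policy/v1beta1/rules")) // rule
	fmt.Println(len(m.collections()))                 // 1
}
```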
@@ -136,10 +138,11 @@ func (b *backend) Init(kinds []string) error { } b.mapping = m - typeURLs := b.mapping.typeURLs() - scope.Infof("Requesting following types:") - for i, url := range typeURLs { - scope.Infof(" [%d] %s", i, url) + collections := b.mapping.collections() + + scope.Infof("Requesting following collections:") + for i, name := range collections { + scope.Infof(" [%d] %s", i, name) } // nolint: govet @@ -160,6 +163,7 @@ func (b *backend) Init(kinds []string) error { select { case <-ctx.Done(): // nolint: govet + cancel() return ctx.Err() case <-time.After(requiredCertCheckFreq): // retry @@ -173,6 +177,7 @@ func (b *backend) Init(kinds []string) error { watcher, err := creds.WatchFiles(ctx.Done(), b.credOptions) if err != nil { + cancel() return err } credentials := creds.CreateForClient(address, watcher) @@ -187,20 +192,26 @@ func (b *backend) Init(kinds []string) error { } cl := mcp.NewAggregatedMeshConfigServiceClient(conn) - c := client.New(cl, typeURLs, b, mixerNodeID, map[string]string{}, client.NewStatsContext("mixer")) + b.mcpReporter = monitoring.NewStatsContext("mixer") + options := &sink.Options{ + CollectionOptions: sink.CollectionOptionsFromSlice(collections), + Updater: b, + ID: mixerNodeID, + Reporter: b.mcpReporter, + } + c := client.New(cl, options) configz.Register(c) b.state = &state{ items: make(map[string]map[store.Key]*store.BackEndResource), synced: make(map[string]bool), } - for _, typeURL := range typeURLs { - b.state.synced[typeURL] = false + for _, collection := range collections { + b.state.synced[collection] = false } go c.Run(ctx) b.cancel = cancel - return nil } @@ -236,6 +247,7 @@ func (b *backend) Stop() { b.cancel() b.cancel = nil } + b.mcpReporter.Close() } // Watch creates a channel to receive the events. 
@@ -286,41 +298,28 @@ func (b *backend) List() map[store.Key]*store.BackEndResource { } // Apply implements client.Updater.Apply -func (b *backend) Apply(change *client.Change) error { +func (b *backend) Apply(change *sink.Change) error { b.state.Lock() defer b.state.Unlock() defer b.callUpdateHook() newTypeStates := make(map[string]map[store.Key]*store.BackEndResource) - typeURL := change.TypeURL - b.state.synced[typeURL] = true + b.state.synced[change.Collection] = true - scope.Debugf("Received update for: type:%s, count:%d", typeURL, len(change.Objects)) + scope.Debugf("Received update for: collection:%s, count:%d", change.Collection, len(change.Objects)) for _, o := range change.Objects { - var kind string - var name string - var contents proto.Message - if scope.DebugEnabled() { scope.Debugf("Processing incoming resource: %q @%s [%s]", o.Metadata.Name, o.Metadata.Version, o.TypeURL) } - // Demultiplex the resource, if it is a legacy type, and figure out its kind. - if isLegacyTypeURL(typeURL) { - // Extract the kind from payload. - legacyResource := o.Resource.(*legacy.LegacyMixerResource) - name = legacyResource.Name - kind = legacyResource.Kind - contents = legacyResource.Contents - } else { - // Otherwise, simply do a direct mapping from typeURL to kind - name = o.Metadata.Name - kind = b.mapping.kind(typeURL) - contents = o.Resource - } + name := o.Metadata.Name + kind := b.mapping.kind(change.Collection) + contents := o.Body + labels := o.Metadata.Labels + annotations := o.Metadata.Annotations collection, found := newTypeStates[kind] if !found { @@ -331,7 +330,7 @@ func (b *backend) Apply(change *client.Change) error { // Map it to Mixer's store model, and put it in the new collection. 
key := toKey(kind, name) - resource, err := toBackendResource(key, contents, o.Metadata.Version) + resource, err := toBackendResource(key, labels, annotations, contents, o.Metadata.Version) if err != nil { return err } diff --git a/mixer/pkg/config/mcp/backend_test.go b/mixer/pkg/config/mcp/backend_test.go index 38d06708a449..5b034ff3a5a1 100644 --- a/mixer/pkg/config/mcp/backend_test.go +++ b/mixer/pkg/config/mcp/backend_test.go @@ -31,6 +31,7 @@ import ( "istio.io/istio/mixer/pkg/config/store" "istio.io/istio/mixer/pkg/runtime/config/constant" "istio.io/istio/pkg/mcp/snapshot" + "istio.io/istio/pkg/mcp/source" mcptest "istio.io/istio/pkg/mcp/testing" ) @@ -47,22 +48,33 @@ var ( Match: "baz", } - fakeCreateTime = time.Date(2018, time.January, 1, 2, 3, 4, 5, time.UTC) - fakeCreateTimeProto *types.Timestamp + fakeCreateTime = time.Date(2018, time.January, 1, 2, 3, 4, 5, time.UTC) + fakeLabels = map[string]string{"lk1": "lv1"} + fakeAnnotations = map[string]string{"ak1": "av1"} + + // Well-known non-legacy Mixer types. 
+ mixerKinds = []string{ + constant.AdapterKind, + constant.AttributeManifestKind, + constant.InstanceKind, + constant.HandlerKind, + constant.RulesKind, + constant.TemplateKind, + } ) func init() { var err error - fakeCreateTimeProto, err = types.TimestampProto(fakeCreateTime) + _, err = types.TimestampProto(fakeCreateTime) if err != nil { panic(err) } } -func typeURLOf(nonLegacyKind string) string { +func collectionOf(nonLegacyKind string) string { for _, u := range kube.Types.All() { if u.Kind == nonLegacyKind { - return u.Target.TypeURL.String() + return u.Target.Collection.String() } } @@ -83,19 +95,21 @@ type testState struct { func createState(t *testing.T) *testState { st := &testState{} - var typeUrls []string + var collections []source.CollectionOptions var kinds []string - m, err := constructMapping([]string{}, kube.Types) + m, err := constructMapping(mixerKinds, kube.Types) if err != nil { t.Fatal(err) } st.mapping = m - for t, k := range st.mapping.typeURLsToKinds { - typeUrls = append(typeUrls, t) + for t, k := range st.mapping.collectionsToKinds { + collections = append(collections, source.CollectionOptions{ + Name: t, + }) kinds = append(kinds, k) } - if st.server, err = mcptest.NewServer(0, typeUrls); err != nil { + if st.server, err = mcptest.NewServer(0, collections); err != nil { t.Fatal(err) } @@ -134,13 +148,13 @@ func TestBackend_HasSynced(t *testing.T) { } b := snapshot.NewInMemoryBuilder() - for typeURL := range st.mapping.typeURLsToKinds { + for typeURL := range st.mapping.collectionsToKinds { b.SetVersion(typeURL, "0") } sn := b.Build() - st.updateWg.Add(len(st.mapping.typeURLsToKinds)) + st.updateWg.Add(len(st.mapping.collectionsToKinds)) st.server.Cache.SetSnapshot(mixerNodeID, sn) st.updateWg.Wait() @@ -154,10 +168,13 @@ func TestBackend_List(t *testing.T) { defer st.close(t) b := snapshot.NewInMemoryBuilder() - _ = b.SetEntry(typeURLOf(constant.RulesKind), "ns1/e1", "v1", fakeCreateTime, r1) - _ = 
b.SetEntry(typeURLOf(constant.RulesKind), "ns2/e2", "v2", fakeCreateTime, r2) - _ = b.SetEntry(typeURLOf(constant.RulesKind), "e3", "v3", fakeCreateTime, r3) - b.SetVersion(typeURLOf(constant.RulesKind), "type-v1") + _ = b.SetEntry(collectionOf(constant.RulesKind), "ns1/e1", "v1", + fakeCreateTime, fakeLabels, fakeAnnotations, r1) + _ = b.SetEntry(collectionOf(constant.RulesKind), "ns2/e2", "v2", + fakeCreateTime, fakeLabels, fakeAnnotations, r2) + _ = b.SetEntry(collectionOf(constant.RulesKind), "e3", "v3", + fakeCreateTime, fakeLabels, fakeAnnotations, r3) + b.SetVersion(collectionOf(constant.RulesKind), "type-v1") sn := b.Build() st.updateWg.Add(1) @@ -174,9 +191,11 @@ func TestBackend_List(t *testing.T) { }: { Kind: constant.RulesKind, Metadata: store.ResourceMeta{ - Name: "e1", - Namespace: "ns1", - Revision: "v1", + Name: "e1", + Namespace: "ns1", + Revision: "v1", + Labels: fakeLabels, + Annotations: fakeAnnotations, }, Spec: map[string]interface{}{"match": "foo"}, }, @@ -187,9 +206,11 @@ func TestBackend_List(t *testing.T) { }: { Kind: constant.RulesKind, Metadata: store.ResourceMeta{ - Name: "e2", - Namespace: "ns2", - Revision: "v2", + Name: "e2", + Namespace: "ns2", + Revision: "v2", + Labels: fakeLabels, + Annotations: fakeAnnotations, }, Spec: map[string]interface{}{"match": "bar"}, }, @@ -200,9 +221,11 @@ func TestBackend_List(t *testing.T) { }: { Kind: constant.RulesKind, Metadata: store.ResourceMeta{ - Name: "e3", - Namespace: "", - Revision: "v3", + Name: "e3", + Namespace: "", + Revision: "v3", + Labels: fakeLabels, + Annotations: fakeAnnotations, }, Spec: map[string]interface{}{"match": "baz"}, }, @@ -217,8 +240,9 @@ func TestBackend_Get(t *testing.T) { b := snapshot.NewInMemoryBuilder() - _ = b.SetEntry(typeURLOf(constant.RulesKind), "ns1/e1", "v1", fakeCreateTime, r1) - b.SetVersion(typeURLOf(constant.RulesKind), "type-v1") + _ = b.SetEntry(collectionOf(constant.RulesKind), "ns1/e1", "v1", + fakeCreateTime, fakeLabels, fakeAnnotations, r1) + 
b.SetVersion(collectionOf(constant.RulesKind), "type-v1") sn := b.Build() st.updateWg.Add(1) @@ -233,9 +257,11 @@ func TestBackend_Get(t *testing.T) { expected := &store.BackEndResource{ Kind: constant.RulesKind, Metadata: store.ResourceMeta{ - Name: "e1", - Namespace: "ns1", - Revision: "v1", + Name: "e1", + Namespace: "ns1", + Revision: "v1", + Labels: fakeLabels, + Annotations: fakeAnnotations, }, Spec: map[string]interface{}{"match": "foo"}, } @@ -267,10 +293,14 @@ func TestBackend_Watch(t *testing.T) { } b := snapshot.NewInMemoryBuilder() - _ = b.SetEntry(typeURLOf(constant.RulesKind), "ns1/e1", "v1", fakeCreateTime, r1) - _ = b.SetEntry(typeURLOf(constant.RulesKind), "ns2/e2", "v2", fakeCreateTime, r2) - _ = b.SetEntry(typeURLOf(constant.RulesKind), "e3", "v3", fakeCreateTime, r3) - b.SetVersion(typeURLOf(constant.RulesKind), "type-v1") + + _ = b.SetEntry(collectionOf(constant.RulesKind), "ns1/e1", "v1", + fakeCreateTime, fakeLabels, fakeAnnotations, r1) + _ = b.SetEntry(collectionOf(constant.RulesKind), "ns2/e2", "v2", + fakeCreateTime, fakeLabels, fakeAnnotations, r2) + _ = b.SetEntry(collectionOf(constant.RulesKind), "e3", "v3", + fakeCreateTime, fakeLabels, fakeAnnotations, r3) + b.SetVersion(collectionOf(constant.RulesKind), "type-v1") sn := b.Build() @@ -300,9 +330,11 @@ loop: Value: &store.BackEndResource{ Kind: constant.RulesKind, Metadata: store.ResourceMeta{ - Name: "e1", - Namespace: "ns1", - Revision: "v1", + Name: "e1", + Namespace: "ns1", + Revision: "v1", + Labels: fakeLabels, + Annotations: fakeAnnotations, }, Spec: map[string]interface{}{"match": "foo"}, }, @@ -313,9 +345,11 @@ loop: Value: &store.BackEndResource{ Kind: constant.RulesKind, Metadata: store.ResourceMeta{ - Name: "e2", - Namespace: "ns2", - Revision: "v2", + Name: "e2", + Namespace: "ns2", + Revision: "v2", + Labels: fakeLabels, + Annotations: fakeAnnotations, }, Spec: map[string]interface{}{"match": "bar"}, }, @@ -326,8 +360,10 @@ loop: Value: &store.BackEndResource{ Kind: 
constant.RulesKind, Metadata: store.ResourceMeta{ - Name: "e3", - Revision: "v3", + Name: "e3", + Revision: "v3", + Labels: fakeLabels, + Annotations: fakeAnnotations, }, Spec: map[string]interface{}{"match": "baz"}, }, @@ -341,10 +377,12 @@ loop: // delete ns1/e1 // update ns2/e2 (r2 -> r1) - _ = b.SetEntry(typeURLOf(constant.RulesKind), "ns2/e2", "v4", fakeCreateTime, r1) + _ = b.SetEntry(collectionOf(constant.RulesKind), "ns2/e2", "v4", + fakeCreateTime, fakeLabels, fakeAnnotations, r1) // e3 doesn't change - _ = b.SetEntry(typeURLOf(constant.RulesKind), "e3", "v5", fakeCreateTime, r3) - b.SetVersion(typeURLOf(constant.RulesKind), "type-v2") + _ = b.SetEntry(collectionOf(constant.RulesKind), "e3", "v5", + fakeCreateTime, fakeLabels, fakeAnnotations, r3) + b.SetVersion(collectionOf(constant.RulesKind), "type-v2") sn = b.Build() @@ -377,9 +415,11 @@ loop2: Value: &store.BackEndResource{ Kind: constant.RulesKind, Metadata: store.ResourceMeta{ - Name: "e2", - Namespace: "ns2", - Revision: "v4", + Name: "e2", + Namespace: "ns2", + Revision: "v4", + Labels: fakeLabels, + Annotations: fakeAnnotations, }, Spec: map[string]interface{}{"match": "foo"}, // r1's contents }, @@ -391,8 +431,10 @@ loop2: Value: &store.BackEndResource{ Kind: constant.RulesKind, Metadata: store.ResourceMeta{ - Name: "e3", - Revision: "v5", + Name: "e3", + Revision: "v5", + Labels: fakeLabels, + Annotations: fakeAnnotations, }, Spec: map[string]interface{}{"match": "baz"}, }, @@ -400,7 +442,6 @@ loop2: } checkEqual(t, actual, expected) - } func checkEqual(t *testing.T, actual, expected interface{}) { diff --git a/mixer/pkg/config/mcp/conversion.go b/mixer/pkg/config/mcp/conversion.go index a3baf4c3c287..e3d27973e07a 100644 --- a/mixer/pkg/config/mcp/conversion.go +++ b/mixer/pkg/config/mcp/conversion.go @@ -22,106 +22,72 @@ import ( "github.com/gogo/protobuf/jsonpb" "github.com/gogo/protobuf/proto" - "istio.io/istio/galley/pkg/kube" + "istio.io/istio/galley/pkg/runtime/resource" + 
"istio.io/istio/galley/pkg/source/kube/schema" "istio.io/istio/mixer/pkg/config/store" - "istio.io/istio/mixer/pkg/runtime/config/constant" ) -// Well-known non-legacy Mixer types. -var mixerKinds = map[string]struct{}{ - constant.AdapterKind: {}, - constant.AttributeManifestKind: {}, - constant.InstanceKind: {}, - constant.HandlerKind: {}, - constant.RulesKind: {}, - constant.TemplateKind: {}, -} - -const ( - // MessageName, TypeURL for the LegacyMixerResource wrapper type. - legacyMixerResourceMessageName = "istio.mcp.v1alpha1.extensions.LegacyMixerResource" - legacyMixerResourceTypeURL = "type.googleapis.com/" + legacyMixerResourceMessageName -) - -// mapping between Proto Type Urls and and CRD Kinds. +// mapping between MCP collections and CRD Kinds. type mapping struct { - // The set of legacy Mixer resource kinds. - legacyKinds map[string]struct{} - - // Bidirectional mapping of type URL & kinds for non-legacy resources. - kindsToTypeURLs map[string]string - typeURLsToKinds map[string]string + // Bidirectional mapping of collections & kinds for non-legacy resources. + kindsToCollections map[string]string + collectionsToKinds map[string]string } -// construct a mapping of kinds and TypeURLs. allKinds is the kind set that was passed +// construct a mapping of kinds and collections. allKinds is the kind set that was passed // as part of backend creation. -func constructMapping(allKinds []string, schema *kube.Schema) (*mapping, error) { +func constructMapping(allKinds []string, schema *schema.Instance) (*mapping, error) { + // The mapping is constructed from the common metadata we have for Kubernetes. + // Go through Mixer's well-known kinds, and map them to collections. - // Calculate the legacy kinds. 
- legacyKinds := make(map[string]struct{}) + mixerKindMap := make(map[string]struct{}) for _, k := range allKinds { - if _, ok := mixerKinds[k]; !ok { - legacyKinds[k] = struct{}{} - } + mixerKindMap[k] = struct{}{} } - // The mapping is constructed from the common metadata we have for the Kubernetes. - // Go through Mixer's well-known kinds, and map them to Type URLs. - - // Create a mapping of kind <=> TypeURL for known non-legacy Mixer kinds. - kindToURL := make(map[string]string) - urlToKind := make(map[string]string) + // Create a mapping of kind <=> collection for known non-legacy Mixer kinds. + kindToCollection := make(map[string]string) + collectionToKind := make(map[string]string) for _, spec := range schema.All() { - if _, ok := mixerKinds[spec.Kind]; ok { - kindToURL[spec.Kind] = spec.Target.TypeURL.String() - urlToKind[spec.Target.TypeURL.String()] = spec.Kind + if _, ok := mixerKindMap[spec.Kind]; ok { + kindToCollection[spec.Kind] = spec.Target.Collection.String() + collectionToKind[spec.Target.Collection.String()] = spec.Kind } } - if len(mixerKinds) != len(kindToURL) { - // We couldn't find metadata for some of the well-known Mixer kinds. This shouldn't happen - // and is a fatal error. - var problemKinds []string - for mk := range mixerKinds { - if _, ok := kindToURL[mk]; !ok { - problemKinds = append(problemKinds, mk) - } + var missingKinds []string + for _, mk := range allKinds { + if _, ok := kindToCollection[mk]; !ok { + missingKinds = append(missingKinds, mk) } - - return nil, fmt.Errorf("unable to map some Mixer kinds to TypeURLs: %q", - strings.Join(problemKinds, ",")) + } + // We couldn't find metadata for some of the well-known Mixer kinds. This shouldn't happen + // and is a fatal error. 
+ if len(missingKinds) > 0 { + return nil, fmt.Errorf("unable to map some Mixer kinds to collections: %q", + strings.Join(missingKinds, ",")) } return &mapping{ - legacyKinds: legacyKinds, - kindsToTypeURLs: kindToURL, - typeURLsToKinds: urlToKind, + kindsToCollections: kindToCollection, + collectionsToKinds: collectionToKind, }, nil } -// typeURLs returns all TypeURLs that should be requested from the MCP server. -func (m *mapping) typeURLs() []string { - result := make([]string, 0, len(m.typeURLsToKinds)+1) - for u := range m.typeURLsToKinds { +// collections returns all collections that should be requested from the MCP server. +func (m *mapping) collections() []string { + result := make([]string, 0, len(m.collectionsToKinds)+1) + for u := range m.collectionsToKinds { result = append(result, u) } - result = append(result, legacyMixerResourceTypeURL) - return result } -func (m *mapping) kind(typeURL string) string { - return m.typeURLsToKinds[typeURL] -} - -func isLegacyTypeURL(url string) bool { - return url == legacyMixerResourceTypeURL +func (m *mapping) kind(collection string) string { + return m.collectionsToKinds[collection] } func toKey(kind string, resourceName string) store.Key { - // TODO: This is a dependency on the name format. For the short term, we will parse the resource name, - // assuming it is in the ns/name format. For the long term, we should update the code to stop it from - // depending on namespaces. 
ns := "" localName := resourceName if idx := strings.LastIndex(resourceName, "/"); idx != -1 { @@ -136,7 +102,9 @@ func toKey(kind string, resourceName string) store.Key { } } -func toBackendResource(key store.Key, resource proto.Message, version string) (*store.BackEndResource, error) { +func toBackendResource(key store.Key, labels resource.Labels, annotations resource.Annotations, resource proto.Message, + version string) (*store.BackEndResource, error) { + marshaller := jsonpb.Marshaler{} jsonData, err := marshaller.MarshalToString(resource) if err != nil { @@ -151,9 +119,11 @@ func toBackendResource(key store.Key, resource proto.Message, version string) (* return &store.BackEndResource{ Kind: key.Kind, Metadata: store.ResourceMeta{ - Name: key.Name, - Namespace: key.Namespace, - Revision: version, + Name: key.Name, + Namespace: key.Namespace, + Revision: version, + Labels: labels, + Annotations: annotations, }, Spec: spec, }, nil diff --git a/mixer/pkg/config/store/queue.go b/mixer/pkg/config/store/queue.go index c102ebfce915..8a1288460eb0 100644 --- a/mixer/pkg/config/store/queue.go +++ b/mixer/pkg/config/store/queue.go @@ -52,10 +52,13 @@ func (q *eventQueue) convertValue(ev BackendEvent) (Event, error) { if err = convert(ev.Key, ev.Value.Spec, pbSpec); err != nil { return Event{}, err } - return Event{Key: ev.Key, Type: ev.Type, Value: &Resource{ - Metadata: ev.Value.Metadata, - Spec: pbSpec, - }}, nil + return Event{ + Key: ev.Key, + Type: ev.Type, + Value: &Resource{ + Metadata: ev.Value.Metadata, + Spec: pbSpec, + }}, nil } func (q *eventQueue) run() { diff --git a/mixer/pkg/config/store/store.go b/mixer/pkg/config/store/store.go index 47f04b5d6538..6c0c414fac47 100644 --- a/mixer/pkg/config/store/store.go +++ b/mixer/pkg/config/store/store.go @@ -77,7 +77,11 @@ type BackEndResource struct { // Key returns the key of the resource in the store. 
func (ber *BackEndResource) Key() Key { - return Key{Kind: ber.Kind, Name: ber.Metadata.Name, Namespace: ber.Metadata.Namespace} + return Key{ + Kind: ber.Kind, + Name: ber.Metadata.Name, + Namespace: ber.Metadata.Namespace, + } } // Resource represents a resources with converted spec. @@ -230,7 +234,10 @@ func (s *store) Get(key Key) (*Resource, error) { if err = convert(key, obj.Spec, pbSpec); err != nil { return nil, err } - return &Resource{Metadata: obj.Metadata, Spec: pbSpec}, nil + return &Resource{ + Metadata: obj.Metadata, + Spec: pbSpec, + }, nil } // List returns the whole mapping from key to resource specs in the store. diff --git a/mixer/pkg/runtime/dispatcher/session.go b/mixer/pkg/runtime/dispatcher/session.go index fe98c34413d8..3dfb2cf7269a 100644 --- a/mixer/pkg/runtime/dispatcher/session.go +++ b/mixer/pkg/runtime/dispatcher/session.go @@ -233,6 +233,9 @@ func (s *session) dispatch() error { log.Warnf("Failed to evaluate header value: %v", verr) continue } + if hop.Value == "" { + continue + } } if s.checkResult.RouteDirective == nil { diff --git a/mixer/pkg/runtime/testing/data/data.go b/mixer/pkg/runtime/testing/data/data.go index dc6244f7de71..d88fbf0681a7 100644 --- a/mixer/pkg/runtime/testing/data/data.go +++ b/mixer/pkg/runtime/testing/data/data.go @@ -502,6 +502,9 @@ spec: - name: user values: - sample.output.value + - name: empty-header + values: + - '""' ` // RuleCheckOutput2 is a testing rule for check output template with multiple outputs diff --git a/mixer/pkg/server/interceptor.go b/mixer/pkg/server/interceptor.go new file mode 100644 index 000000000000..03e98f52d393 --- /dev/null +++ b/mixer/pkg/server/interceptor.go @@ -0,0 +1,104 @@ +package server + +import ( + "context" + "strings" + + "github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc" + opentracing "github.com/opentracing/opentracing-go" + "github.com/opentracing/opentracing-go/ext" + "github.com/opentracing/opentracing-go/log" + "google.golang.org/grpc" + 
"google.golang.org/grpc/metadata" +) + +var ( + // Morally a const: + componentTag = opentracing.Tag{Key: string(ext.Component), Value: "istio-mixer"} + + defaultNoopTracer = opentracing.NoopTracer{} +) + +type metadataReaderWriter struct { + metadata.MD +} + +func (w metadataReaderWriter) Set(key, val string) { + // The GRPC HPACK implementation rejects any uppercase keys here. + // + // As such, since the HTTP_HEADERS format is case-insensitive anyway, we + // blindly lowercase the key (which is guaranteed to work in the + // Inject/Extract sense per the OpenTracing spec). + key = strings.ToLower(key) + w.MD[key] = append(w.MD[key], val) +} + +func (w metadataReaderWriter) ForeachKey(handler func(key, val string) error) error { + for k, vals := range w.MD { + for _, v := range vals { + if err := handler(k, v); err != nil { + return err + } + } + } + + return nil +} + +// TracingServerInterceptor is a copy of the GRPC tracing interceptor to enable switching +// between the supplied tracer and NoopTracer depending upon whether the request is +// being sampled. 
+func TracingServerInterceptor(tracer opentracing.Tracer) grpc.UnaryServerInterceptor { + return func( + ctx context.Context, + req interface{}, + info *grpc.UnaryServerInfo, + handler grpc.UnaryHandler, + ) (resp interface{}, err error) { + md, ok := metadata.FromIncomingContext(ctx) + if !ok { + md = metadata.New(nil) + } + + t := tracer + var spanContext opentracing.SpanContext + + if !isSampled(md) { + t = defaultNoopTracer + } else if spanContext, err = t.Extract(opentracing.HTTPHeaders, metadataReaderWriter{md}); err == opentracing.ErrSpanContextNotFound { + t = defaultNoopTracer + } + + serverSpan := t.StartSpan( + info.FullMethod, + ext.RPCServerOption(spanContext), + componentTag, + ) + defer serverSpan.Finish() + + ctx = opentracing.ContextWithSpan(ctx, serverSpan) + resp, err = handler(ctx, req) + if err != nil { + otgrpc.SetSpanTags(serverSpan, err, false) + serverSpan.LogFields(log.String("event", "error"), log.String("message", err.Error())) + } + return resp, err + } +} + +func isSampled(md metadata.MD) bool { + for _, val := range md.Get("x-b3-sampled") { + if val == "1" || strings.EqualFold(val, "true") { + return true + } + } + + // allow for compressed header too + for _, val := range md.Get("b3") { + if val == "1" || strings.HasSuffix(val, "-1") || strings.Contains(val, "-1-") { + return true + } + } + + return false +} diff --git a/mixer/pkg/server/interceptor_test.go b/mixer/pkg/server/interceptor_test.go new file mode 100644 index 000000000000..61255c06ac52 --- /dev/null +++ b/mixer/pkg/server/interceptor_test.go @@ -0,0 +1,156 @@ +// Copyright 2019 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package server + +import ( + "context" + "testing" + + opentracing "github.com/opentracing/opentracing-go" + "github.com/opentracing/opentracing-go/mocktracer" + "github.com/stretchr/testify/assert" + "google.golang.org/grpc" + "google.golang.org/grpc/metadata" +) + +func TestNoB3SampledFalse(t *testing.T) { + assert.False(t, isSampled(metadata.New(nil))) +} + +func TestB3SampledTrue(t *testing.T) { + assert.True(t, isSampled(metadata.New(map[string]string{"x-b3-sampled": "1"}))) +} + +func TestB3SampledTrueAgain(t *testing.T) { + assert.True(t, isSampled(metadata.New(map[string]string{"x-b3-sampled": "true"}))) +} + +func TestB3SampledFalse(t *testing.T) { + assert.False(t, isSampled(metadata.New(map[string]string{"x-b3-sampled": "0"}))) +} + +func TestB3SampledCaseFalse(t *testing.T) { + assert.False(t, isSampled(metadata.New(map[string]string{"X-B3-SAMPLED": "0"}))) +} + +func TestB3True(t *testing.T) { + assert.True(t, isSampled(metadata.New(map[string]string{"b3": "1"}))) +} + +func TestB3False(t *testing.T) { + assert.False(t, isSampled(metadata.New(map[string]string{"b3": "0"}))) +} + +func TestB3WithTraceParentIdFalse(t *testing.T) { + assert.False(t, isSampled(metadata.New(map[string]string{"b3": "80f198ee56343ba864fe8b2a57d3eff7-e457b5a2e4d86bd1-0"}))) +} + +func TestB3WithTraceParentAndSpanIdFalse(t *testing.T) { + assert.False(t, isSampled(metadata.New(map[string]string{"b3": "80f198ee56343ba864fe8b2a57d3eff7-e457b5a2e4d86bd1-0-05e3ac9a4f6e3b90"}))) +} + +func TestB3CaseFalse(t *testing.T) { + assert.False(t, 
isSampled(metadata.New(map[string]string{"B3": "0"}))) +} + +func TestSampledSpan(t *testing.T) { + tracer := mocktracer.New() + // Need to define a valid span context as otherwise the tracer would + // return error opentracing.ErrSpanContextNotFound + spanContext := mocktracer.MockSpanContext{ + TraceID: 1, + SpanID: 2, + Sampled: true, + } + interceptor := TracingServerInterceptor(tracer) + + // Need to define a B3 header to indicate also that sampling is enabled + md := metadata.MD{ + "b3": []string{"1"}, + } + ctx := metadata.NewIncomingContext(context.Background(), md) + + mdWriter := metadataReaderWriter{md} + tracer.Inject(spanContext, opentracing.HTTPHeaders, mdWriter) + + info := &grpc.UnaryServerInfo{ + FullMethod: "mymethod", + } + interceptor(ctx, nil, info, func(ctx context.Context, req interface{}) (interface{}, error) { + return nil, nil + }) + assert.Len(t, tracer.FinishedSpans(), 1) +} + +func TestErrSpanContextNotFound(t *testing.T) { + tracer := mocktracer.New() + interceptor := TracingServerInterceptor(tracer) + + // Need to define a B3 header to indicate also that sampling is enabled + md := metadata.MD{ + "b3": []string{"1"}, + } + ctx := metadata.NewIncomingContext(context.Background(), md) + + info := &grpc.UnaryServerInfo{ + FullMethod: "mymethod", + } + interceptor(ctx, nil, info, func(ctx context.Context, req interface{}) (interface{}, error) { + return nil, nil + }) + assert.Len(t, tracer.FinishedSpans(), 0) +} + +func TestNotSampledSpan(t *testing.T) { + tracer := mocktracer.New() + // Need to define a valid span context as otherwise the tracer would + // return error opentracing.ErrSpanContextNotFound + spanContext := mocktracer.MockSpanContext{ + TraceID: 1, + SpanID: 2, + Sampled: true, + } + interceptor := TracingServerInterceptor(tracer) + + // Need to define a B3 header to indicate also that sampling is disabled + md := metadata.MD{ + "b3": []string{"0"}, + } + ctx := metadata.NewIncomingContext(context.Background(), md) + + 
mdWriter := metadataReaderWriter{md} + tracer.Inject(spanContext, opentracing.HTTPHeaders, mdWriter) + + info := &grpc.UnaryServerInfo{ + FullMethod: "mymethod", + } + interceptor(ctx, nil, info, func(ctx context.Context, req interface{}) (interface{}, error) { + return nil, nil + }) + assert.Len(t, tracer.FinishedSpans(), 0) +} + +func TestNoSpanContext(t *testing.T) { + tracer := mocktracer.New() + interceptor := TracingServerInterceptor(tracer) + + info := &grpc.UnaryServerInfo{ + FullMethod: "mymethod", + } + interceptor(context.Background(), nil, info, func(ctx context.Context, req interface{}) (interface{}, error) { + return nil, nil + }) + assert.Len(t, tracer.FinishedSpans(), 0) +} diff --git a/mixer/pkg/server/server.go b/mixer/pkg/server/server.go index 602547ddf821..4a5dde19fe98 100644 --- a/mixer/pkg/server/server.go +++ b/mixer/pkg/server/server.go @@ -23,7 +23,6 @@ import ( "time" "github.com/gogo/protobuf/proto" - "github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc" ot "github.com/opentracing/opentracing-go" oprometheus "github.com/prometheus/client_golang/prometheus" "go.opencensus.io/exporter/prometheus" @@ -69,6 +68,7 @@ type Server struct { livenessProbe probe.Controller readinessProbe probe.Controller *probe.Probe + configStore store.Store } type listenFunc func(network string, address string) (net.Listener, error) @@ -151,7 +151,7 @@ func newServer(a *Args, p *patchTable) (server *Server, err error) { if err != nil { return nil, fmt.Errorf("unable to setup tracing") } - grpcOptions = append(grpcOptions, grpc.UnaryInterceptor(otgrpc.OpenTracingServerInterceptor(ot.GlobalTracer()))) + grpcOptions = append(grpcOptions, grpc.UnaryInterceptor(TracingServerInterceptor(ot.GlobalTracer()))) } // get the network stuff setup @@ -206,7 +206,7 @@ func newServer(a *Args, p *patchTable) (server *Server, err error) { if err := st.WaitForSynced(30 * time.Second); err != nil { return nil, err } - + s.configStore = st log.Info("Starting runtime config 
watch...") rt = p.newRuntime(st, templateMap, adapterMap, a.ConfigDefaultNamespace, s.gp, s.adapterGP, a.TracingOptions.TracingEnabled()) @@ -321,6 +321,11 @@ func (s *Server) Close() error { s.controlZ.Close() } + if s.configStore != nil { + s.configStore.Stop() + s.configStore = nil + } + if s.checkCache != nil { _ = s.checkCache.Close() } diff --git a/mixer/pkg/server/server_test.go b/mixer/pkg/server/server_test.go index f243f18ca090..08124ff882f6 100644 --- a/mixer/pkg/server/server_test.go +++ b/mixer/pkg/server/server_test.go @@ -310,9 +310,6 @@ func TestErrors(t *testing.T) { s.Close() t.Errorf("Got success, expecting error") } - - // cleanup - configStore.Stop() }) } } diff --git a/mixer/test/client/env/envoy.go b/mixer/test/client/env/envoy.go index ea59875853ba..ebb3b209eb7f 100644 --- a/mixer/test/client/env/envoy.go +++ b/mixer/test/client/env/envoy.go @@ -48,7 +48,6 @@ func (s *TestSetup) NewEnvoy() (*Envoy, error) { baseID := "" args := []string{"-c", confPath, - "--v2-config-only", "--drain-time-s", "1", "--allow-unknown-fields"} if s.stress { @@ -78,6 +77,9 @@ func (s *TestSetup) NewEnvoy() (*Envoy, error) { cmd := exec.Command(envoyPath, args...) 
cmd.Stderr = os.Stderr cmd.Stdout = os.Stdout + if s.Dir != "" { + cmd.Dir = s.Dir + } return &Envoy{ cmd: cmd, ports: s.ports, diff --git a/mixer/test/client/env/http_client.go b/mixer/test/client/env/http_client.go index 22256aa61e96..bfd09bc2e827 100644 --- a/mixer/test/client/env/http_client.go +++ b/mixer/test/client/env/http_client.go @@ -169,6 +169,6 @@ func WaitForPort(port uint16) { // IsPortUsed checks if a port is used func IsPortUsed(port uint16) bool { serverPort := fmt.Sprintf("localhost:%v", port) - _, err := net.Dial("tcp", serverPort) + _, err := net.DialTimeout("tcp", serverPort, 100*time.Millisecond) return err == nil } diff --git a/mixer/test/client/env/ports.go b/mixer/test/client/env/ports.go index f5dcc35e8873..2870b485e66e 100644 --- a/mixer/test/client/env/ports.go +++ b/mixer/test/client/env/ports.go @@ -60,6 +60,7 @@ const ( PilotMCPTest RbacGlobalPermissiveTest RbacPolicyPermissiveTest + GatewayTest // The number of total tests. has to be the last one. maxTestNum diff --git a/mixer/test/client/env/setup.go b/mixer/test/client/env/setup.go index 763ffaaf0c26..cb1a91f0b50e 100644 --- a/mixer/test/client/env/setup.go +++ b/mixer/test/client/env/setup.go @@ -69,6 +69,9 @@ type TestSetup struct { // expected source.uid attribute at the mixer gRPC metadata mixerSourceUID string + + // Dir is the working dir for envoy + Dir string } // NewTestSetup creates a new test setup diff --git a/mixer/test/client/gateway/gateway_test.go b/mixer/test/client/gateway/gateway_test.go new file mode 100644 index 000000000000..daead9d075eb --- /dev/null +++ b/mixer/test/client/gateway/gateway_test.go @@ -0,0 +1,327 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package client_test + +import ( + "encoding/base64" + "fmt" + "net" + "testing" + + v2 "github.com/envoyproxy/go-control-plane/envoy/api/v2" + "github.com/envoyproxy/go-control-plane/envoy/api/v2/core" + "github.com/envoyproxy/go-control-plane/envoy/api/v2/listener" + "github.com/envoyproxy/go-control-plane/envoy/api/v2/route" + hcm "github.com/envoyproxy/go-control-plane/envoy/config/filter/network/http_connection_manager/v2" + discovery "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v2" + "github.com/envoyproxy/go-control-plane/pkg/cache" + xds "github.com/envoyproxy/go-control-plane/pkg/server" + "github.com/envoyproxy/go-control-plane/pkg/util" + "google.golang.org/grpc" + + meshconfig "istio.io/api/mesh/v1alpha1" + mixerpb "istio.io/api/mixer/v1" + "istio.io/istio/mixer/test/client/env" + "istio.io/istio/pilot/pkg/model" + "istio.io/istio/pilot/pkg/networking/plugin" + "istio.io/istio/pilot/pkg/networking/plugin/mixer" + pilotutil "istio.io/istio/pilot/pkg/networking/util" +) + +const ( + envoyConf = ` +admin: + access_log_path: {{.AccessLogPath}} + address: + socket_address: + address: 127.0.0.1 + port_value: {{.Ports.AdminPort}} +node: + id: id + cluster: unknown + metadata: + # these two must come together and they need to be set + NODE_UID: pod.ns + NODE_NAMESPACE: ns +dynamic_resources: + lds_config: { ads: {} } + ads_config: + api_type: GRPC + grpc_services: + envoy_grpc: + cluster_name: xds +static_resources: + clusters: + - name: xds + http2_protocol_options: {} + connect_timeout: 5s + type: STATIC + hosts: + - 
socket_address: + address: 127.0.0.1 + port_value: {{.Ports.DiscoveryPort}} + - name: "outbound|||svc.ns3" + connect_timeout: 5s + type: STATIC + hosts: + - socket_address: + address: 127.0.0.1 + port_value: {{.Ports.BackendPort}} + - name: "outbound|9091||mixer_server" + http2_protocol_options: {} + connect_timeout: 5s + type: STATIC + hosts: + - socket_address: + address: 127.0.0.1 + port_value: {{.Ports.MixerPort}} +` + + // TODO(mandarjog): source.uid below should be in-mesh-app + + checkAttributesOkOutbound = ` +{ + "connection.mtls": false, + "origin.ip": "[127 0 0 1]", + "context.protocol": "http", + "context.reporter.kind": "outbound", + "context.reporter.uid": "kubernetes://pod.ns", + "destination.service.host": "svc.ns3", + "destination.service.name": "svc", + "destination.service.namespace": "ns3", + "destination.service.uid": "istio://ns3/services/svc", + "source.uid": "kubernetes://pod.ns", + "source.namespace": "ns", + "request.headers": { + ":method": "GET", + ":path": "/echo", + ":authority": "*", + "x-forwarded-proto": "http", + "x-request-id": "*" + }, + "request.host": "*", + "request.path": "/echo", + "request.time": "*", + "request.useragent": "Go-http-client/1.1", + "request.method": "GET", + "request.scheme": "http", + "request.url_path": "/echo" +} +` + reportAttributesOkOutbound = ` +{ + "connection.mtls": false, + "origin.ip": "[127 0 0 1]", + "context.protocol": "http", + "context.proxy_error_code": "-", + "context.reporter.kind": "outbound", + "context.reporter.uid": "kubernetes://pod.ns", + "destination.ip": "[127 0 0 1]", + "destination.port": "*", + "destination.service.host": "svc.ns3", + "destination.service.name": "svc", + "destination.service.namespace": "ns3", + "destination.service.uid": "istio://ns3/services/svc", + "source.uid": "kubernetes://pod.ns", + "source.namespace": "ns", + "check.cache_hit": false, + "quota.cache_hit": false, + "request.headers": { + ":method": "GET", + ":path": "/echo", + ":authority": "*", + 
"x-forwarded-proto": "http", + "x-istio-attributes": "-", + "x-request-id": "*" + }, + "request.host": "*", + "request.path": "/echo", + "request.time": "*", + "request.useragent": "Go-http-client/1.1", + "request.method": "GET", + "request.scheme": "http", + "request.size": 0, + "request.total_size": "*", + "request.url_path": "/echo", + "response.time": "*", + "response.size": 0, + "response.duration": "*", + "response.code": 200, + "response.headers": { + "date": "*", + "content-length": "0", + ":status": "200", + "server": "envoy" + }, + "response.total_size": "*" +}` +) + +func TestGateway(t *testing.T) { + s := env.NewTestSetup(env.GatewayTest, t) + s.EnvoyTemplate = envoyConf + grpcServer := grpc.NewServer() + lis, err := net.Listen("tcp", fmt.Sprintf(":%d", s.Ports().DiscoveryPort)) + if err != nil { + t.Fatal(err) + } + + snapshots := cache.NewSnapshotCache(true, mock{}, nil) + snapshots.SetSnapshot(id, makeSnapshot(s, t)) + server := xds.NewServer(snapshots, nil) + discovery.RegisterAggregatedDiscoveryServiceServer(grpcServer, server) + go func() { + _ = grpcServer.Serve(lis) + }() + defer grpcServer.GracefulStop() + + s.SetMixerSourceUID("pod.ns") + + if err := s.SetUp(); err != nil { + t.Fatalf("Failed to setup test: %v", err) + } + defer s.TearDown() + + s.WaitEnvoyReady() + + // Issues a GET echo request with 0 size body, forward some random source.uid + attrs := mixerpb.Attributes{ + Attributes: map[string]*mixerpb.Attributes_AttributeValue{ + "source.uid": &mixerpb.Attributes_AttributeValue{Value: &mixerpb.Attributes_AttributeValue_StringValue{ + StringValue: "in-mesh-app", + }}, + }, + } + out, _ := attrs.Marshal() + headers := map[string]string{ + "x-istio-attributes": base64.StdEncoding.EncodeToString(out), + } + if _, _, err := env.HTTPGetWithHeaders(fmt.Sprintf("http://localhost:%d/echo", s.Ports().ClientProxyPort), headers); err != nil { + t.Errorf("Failed in request: %v", err) + } + s.VerifyCheck("http-outbound", checkAttributesOkOutbound) + 
s.VerifyReport("http", reportAttributesOkOutbound) +} + +type mock struct{} + +func (mock) ID(*core.Node) string { + return id +} +func (mock) GetProxyServiceInstances(_ *model.Proxy) ([]*model.ServiceInstance, error) { + return nil, nil +} +func (mock) GetService(_ model.Hostname) (*model.Service, error) { return nil, nil } +func (mock) InstancesByPort(_ model.Hostname, _ int, _ model.LabelsCollection) ([]*model.ServiceInstance, error) { + return nil, nil +} +func (mock) ManagementPorts(_ string) model.PortList { return nil } +func (mock) Services() ([]*model.Service, error) { return nil, nil } +func (mock) WorkloadHealthCheckInfo(_ string) model.ProbeList { return nil } + +const ( + id = "id" +) + +var ( + svc = model.Service{ + Hostname: "svc.ns3", + Attributes: model.ServiceAttributes{ + Name: "svc", + Namespace: "ns3", + UID: "istio://ns3/services/svc", + }, + } + mesh = &model.Environment{ + Mesh: &meshconfig.MeshConfig{ + MixerCheckServer: "mixer_server:9091", + MixerReportServer: "mixer_server:9091", + }, + ServiceDiscovery: mock{}, + } + pushContext = model.PushContext{ + ServiceByHostname: map[model.Hostname]*model.Service{ + model.Hostname("svc.ns3"): &svc, + }, + } + clientParams = plugin.InputParams{ + ListenerProtocol: plugin.ListenerProtocolHTTP, + Env: mesh, + Node: &model.Proxy{ + ID: "pod.ns", + Type: model.Router, + }, + Service: &svc, + Push: &pushContext, + } +) + +func makeRoute(cluster string) *v2.RouteConfiguration { + return &v2.RouteConfiguration{ + Name: cluster, + VirtualHosts: []route.VirtualHost{{ + Name: cluster, + Domains: []string{"*"}, + Routes: []route.Route{{ + Match: route.RouteMatch{PathSpecifier: &route.RouteMatch_Prefix{Prefix: "/"}}, + Action: &route.Route_Route{Route: &route.RouteAction{ + ClusterSpecifier: &route.RouteAction_Cluster{Cluster: cluster}, + }}, + }}, + }}, + } +} + +func makeListener(port uint16, route string) (*v2.Listener, *hcm.HttpConnectionManager) { + return &v2.Listener{ + Name: route, + Address: 
core.Address{Address: &core.Address_SocketAddress{SocketAddress: &core.SocketAddress{ + Address: "127.0.0.1", + PortSpecifier: &core.SocketAddress_PortValue{PortValue: uint32(port)}}}}, + }, &hcm.HttpConnectionManager{ + CodecType: hcm.AUTO, + StatPrefix: route, + RouteSpecifier: &hcm.HttpConnectionManager_Rds{ + Rds: &hcm.Rds{RouteConfigName: route, ConfigSource: core.ConfigSource{ + ConfigSourceSpecifier: &core.ConfigSource_Ads{Ads: &core.AggregatedConfigSource{}}, + }}, + }, + HttpFilters: []*hcm.HttpFilter{{Name: util.Router}}, + } +} + +func makeSnapshot(s *env.TestSetup, t *testing.T) cache.Snapshot { + clientListener, clientManager := makeListener(s.Ports().ClientProxyPort, "outbound|||svc.ns3") + clientRoute := makeRoute("outbound|||svc.ns3") + + p := mixer.NewPlugin() + + clientMutable := plugin.MutableObjects{Listener: clientListener, FilterChains: []plugin.FilterChain{{}}} + if err := p.OnOutboundListener(&clientParams, &clientMutable); err != nil { + t.Error(err) + } + clientManager.HttpFilters = append(clientMutable.FilterChains[0].HTTP, clientManager.HttpFilters...) 
+ clientListener.FilterChains = []listener.FilterChain{{Filters: []listener.Filter{{ + Name: util.HTTPConnectionManager, + ConfigType: &listener.Filter_Config{Config: pilotutil.MessageToStruct(clientManager)}, + }}}} + + p.OnOutboundRouteConfiguration(&clientParams, clientRoute) + + return cache.Snapshot{ + Routes: cache.NewResources("http", []cache.Resource{clientRoute}), + Listeners: cache.NewResources("http", []cache.Resource{clientListener}), + } +} diff --git a/mixer/test/client/pilotplugin/pilotplugin_test.go b/mixer/test/client/pilotplugin/pilotplugin_test.go index 3909a40b7756..2068e658b90b 100644 --- a/mixer/test/client/pilotplugin/pilotplugin_test.go +++ b/mixer/test/client/pilotplugin/pilotplugin_test.go @@ -336,7 +336,7 @@ var ( Env: mesh, Node: &model.Proxy{ ID: "pod1.ns2", - Type: model.Sidecar, + Type: model.SidecarProxy, }, ServiceInstance: &model.ServiceInstance{Service: &svc}, Push: &pushContext, @@ -346,7 +346,7 @@ var ( Env: mesh, Node: &model.Proxy{ ID: "pod2.ns2", - Type: model.Sidecar, + Type: model.SidecarProxy, }, Service: &svc, Push: &pushContext, diff --git a/mixer/test/client/pilotplugin_mtls/pilotplugin_mtls_test.go b/mixer/test/client/pilotplugin_mtls/pilotplugin_mtls_test.go index 453db5a6beae..b147d4d0a490 100644 --- a/mixer/test/client/pilotplugin_mtls/pilotplugin_mtls_test.go +++ b/mixer/test/client/pilotplugin_mtls/pilotplugin_mtls_test.go @@ -351,7 +351,7 @@ var ( Env: mesh, Node: &model.Proxy{ ID: "pod1.ns2", - Type: model.Sidecar, + Type: model.SidecarProxy, }, ServiceInstance: &model.ServiceInstance{Service: &svc}, Push: &pushContext, @@ -361,7 +361,7 @@ var ( Env: mesh, Node: &model.Proxy{ ID: "pod2.ns2", - Type: model.Sidecar, + Type: model.SidecarProxy, }, Service: &svc, Push: &pushContext, diff --git a/mixer/test/client/pilotplugin_tcp/pilotplugin_tcp_test.go b/mixer/test/client/pilotplugin_tcp/pilotplugin_tcp_test.go index 9021d3fcf430..7035dc45cd8c 100644 --- a/mixer/test/client/pilotplugin_tcp/pilotplugin_tcp_test.go 
+++ b/mixer/test/client/pilotplugin_tcp/pilotplugin_tcp_test.go @@ -213,7 +213,7 @@ var ( Env: mesh, Node: &model.Proxy{ ID: "pod1.ns1", - Type: model.Sidecar, + Type: model.SidecarProxy, }, ServiceInstance: &model.ServiceInstance{Service: &svc}, Push: &pushContext, @@ -223,7 +223,7 @@ var ( Env: mesh, Node: &model.Proxy{ ID: "pod2.ns2", - Type: model.Sidecar, + Type: model.SidecarProxy, }, Service: &svc, Push: &pushContext, diff --git a/mixer/test/client/route_directive/route_directive_test.go b/mixer/test/client/route_directive/route_directive_test.go index 979f86788424..90c8f1051ed0 100644 --- a/mixer/test/client/route_directive/route_directive_test.go +++ b/mixer/test/client/route_directive/route_directive_test.go @@ -20,6 +20,7 @@ import ( "log" "net/http" "reflect" + "strings" "testing" "time" @@ -30,6 +31,9 @@ import ( // signifies a certain map has to be empty const mustBeEmpty = "MUST_BE_EMPTY" +// request body +const requestBody = "HELLO WORLD" + var expectedStats = map[string]int{ "http_mixer_filter.total_check_calls": 12, "http_mixer_filter.total_remote_check_calls": 6, @@ -85,8 +89,9 @@ func TestRouteDirective(t *testing.T) { }, response: http.Header{ "Server": []string{"envoy"}, - "Content-Length": []string{"0"}, + "Content-Length": []string{fmt.Sprintf("%d", len(requestBody))}, }, + body: requestBody, }, { desc: "request header operations", path: "/request", @@ -111,6 +116,7 @@ func TestRouteDirective(t *testing.T) { response: http.Header{ "X-Istio-Request": nil, }, + body: requestBody, }, { desc: "response header operations", path: "/response", @@ -133,8 +139,9 @@ func TestRouteDirective(t *testing.T) { }, response: http.Header{ "X-Istio-Response": []string{"value", "value2"}, - "Content-Length": []string{"0"}, + "Content-Length": nil, }, + body: requestBody, }, { desc: "combine operations", path: "/combine", @@ -145,6 +152,7 @@ func TestRouteDirective(t *testing.T) { }, request: http.Header{"Istio-Request": []string{"test"}}, response: 
http.Header{"Istio-Response": []string{"case"}}, + body: requestBody, }, { desc: "direct response", path: "/direct", @@ -177,7 +185,7 @@ func TestRouteDirective(t *testing.T) { for _, cs := range testCases { t.Run(cs.desc, func(t *testing.T) { s.SetMixerRouteDirective(cs.directive) - req, err := http.NewRequest(cs.method, fmt.Sprintf("http://localhost:%d%s", s.Ports().ServerProxyPort, cs.path), nil) + req, err := http.NewRequest(cs.method, fmt.Sprintf("http://localhost:%d%s", s.Ports().ServerProxyPort, cs.path), strings.NewReader(requestBody)) if err != nil { t.Fatal(err) } @@ -200,7 +208,7 @@ func TestRouteDirective(t *testing.T) { // run the queries again to exercise caching for _, cs := range testCases { s.SetMixerRouteDirective(cs.directive) - req, _ := http.NewRequest(cs.method, fmt.Sprintf("http://localhost:%d%s", s.Ports().ServerProxyPort, cs.path), nil) + req, _ := http.NewRequest(cs.method, fmt.Sprintf("http://localhost:%d%s", s.Ports().ServerProxyPort, cs.path), strings.NewReader(requestBody)) _, _ = client.Do(req) } diff --git a/pilot/OWNERS b/pilot/OWNERS index d5f11319838e..5b992b551e72 100644 --- a/pilot/OWNERS +++ b/pilot/OWNERS @@ -10,3 +10,4 @@ approvers: - rshriram - vadimeisenbergibm - zackbutcher + - ymesika diff --git a/pilot/cmd/pilot-agent/main.go b/pilot/cmd/pilot-agent/main.go index 6119dae3c70b..ab896c0496d8 100644 --- a/pilot/cmd/pilot-agent/main.go +++ b/pilot/cmd/pilot-agent/main.go @@ -27,6 +27,8 @@ import ( "text/template" "time" + "istio.io/istio/pkg/spiffe" + "github.com/gogo/protobuf/types" "github.com/spf13/cobra" "github.com/spf13/cobra/doc" @@ -89,7 +91,7 @@ var ( return err } log.Infof("Version %s", version.Info.String()) - role.Type = model.Sidecar + role.Type = model.SidecarProxy if len(args) > 0 { role.Type = model.NodeType(args[0]) if !model.IsApplicationNodeType(role.Type) { @@ -240,6 +242,7 @@ var ( opts := make(map[string]string) opts["PodName"] = os.Getenv("POD_NAME") opts["PodNamespace"] = os.Getenv("POD_NAMESPACE") + 
opts["MixerSubjectAltName"] = envoy.GetMixerSAN(opts["PodNamespace"]) // protobuf encoding of IP_ADDRESS type opts["PodIP"] = base64.StdEncoding.EncodeToString(net.ParseIP(os.Getenv("INSTANCE_IP"))) @@ -286,7 +289,9 @@ var ( go statusServer.Run(ctx) } + log.Infof("PilotSAN %#v", pilotSAN) envoyProxy := envoy.NewProxy(proxyConfig, role.ServiceNode(), proxyLogLevel, pilotSAN, role.IPAddresses) + agent := proxy.NewAgent(envoyProxy, proxy.DefaultRetry) watcher := envoy.NewWatcher(certs, agent.ConfigCh()) @@ -316,8 +321,10 @@ func getPilotSAN(domain string, ns string) []string { pilotTrustDomain = domain } } - pilotSAN = envoy.GetPilotSAN(pilotTrustDomain, ns) + spiffe.SetTrustDomain(pilotTrustDomain) + pilotSAN = append(pilotSAN, envoy.GetPilotSAN(ns)) } + log.Infof("PilotSAN %#v", pilotSAN) return pilotSAN } diff --git a/pilot/cmd/pilot-agent/status/ready/probe_test.go b/pilot/cmd/pilot-agent/status/ready/probe_test.go index 26c93db77627..34b59c01a9d5 100644 --- a/pilot/cmd/pilot-agent/status/ready/probe_test.go +++ b/pilot/cmd/pilot-agent/status/ready/probe_test.go @@ -15,12 +15,12 @@ package ready import ( - . "github.com/onsi/gomega" - "net" "net/http" "net/http/httptest" "testing" + + . "github.com/onsi/gomega" ) var probe = Probe{AdminPort: 1234} diff --git a/pilot/cmd/pilot-discovery/main.go b/pilot/cmd/pilot-discovery/main.go index 94901a365597..c18230964198 100644 --- a/pilot/cmd/pilot-discovery/main.go +++ b/pilot/cmd/pilot-discovery/main.go @@ -19,6 +19,8 @@ import ( "os" "time" + "istio.io/istio/pkg/spiffe" + "github.com/spf13/cobra" "github.com/spf13/cobra/doc" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" @@ -60,6 +62,9 @@ var ( return err } + spiffe.SetTrustDomain(spiffe.DetermineTrustDomain(serverArgs.Config.ControllerOptions.TrustDomain, + serverArgs.Config.ControllerOptions.DomainSuffix, hasKubeRegistry())) + // Create the stop channel for all of the servers. 
stop := make(chan struct{}) @@ -80,11 +85,21 @@ var ( } ) +// when we run on k8s, the default trust domain is 'cluster.local', otherwise it is the empty string +func hasKubeRegistry() bool { + for _, r := range serverArgs.Service.Registries { + if serviceregistry.ServiceRegistry(r) == serviceregistry.KubernetesRegistry { + return true + } + } + return false +} + func init() { discoveryCmd.PersistentFlags().StringSliceVar(&serverArgs.Service.Registries, "registries", []string{string(serviceregistry.KubernetesRegistry)}, - fmt.Sprintf("Comma separated list of platform service registries to read from (choose one or more from {%s, %s, %s})", - serviceregistry.KubernetesRegistry, serviceregistry.ConsulRegistry, serviceregistry.MockRegistry)) + fmt.Sprintf("Comma separated list of platform service registries to read from (choose one or more from {%s, %s, %s, %s})", + serviceregistry.KubernetesRegistry, serviceregistry.ConsulRegistry, serviceregistry.MCPRegistry, serviceregistry.MockRegistry)) discoveryCmd.PersistentFlags().StringVar(&serverArgs.Config.ClusterRegistriesNamespace, "clusterRegistriesNamespace", metav1.NamespaceAll, "Namespace for ConfigMap which stores clusters configs") discoveryCmd.PersistentFlags().StringVar(&serverArgs.Config.KubeConfig, "kubeconfig", "", @@ -120,6 +135,8 @@ func init() { "Controller resync interval") discoveryCmd.PersistentFlags().StringVar(&serverArgs.Config.ControllerOptions.DomainSuffix, "domain", "cluster.local", "DNS domain suffix") + discoveryCmd.PersistentFlags().StringVar(&serverArgs.Config.ControllerOptions.TrustDomain, "trust-domain", "", + "The domain serves to identify the system with spiffe") discoveryCmd.PersistentFlags().StringVar(&serverArgs.Service.Consul.ServerURL, "consulserverURL", "", "URL for the Consul server") discoveryCmd.PersistentFlags().DurationVar(&serverArgs.Service.Consul.Interval, "consulserverInterval", 2*time.Second, diff --git a/pilot/docker/envoy_pilot.yaml.tmpl b/pilot/docker/envoy_pilot.yaml.tmpl 
index 9e95ac555440..8181c4959969 100644 --- a/pilot/docker/envoy_pilot.yaml.tmpl +++ b/pilot/docker/envoy_pilot.yaml.tmpl @@ -56,7 +56,7 @@ static_resources: trusted_ca: filename: /etc/certs/root-cert.pem verify_subject_alt_name: - - spiffe://cluster.local/ns/{{ .PodNamespace }}/sa/istio-mixer-service-account + - {{ .MixerSubjectAltName }} {{- end }} type: STRICT_DNS listeners: diff --git a/pilot/docker/envoy_policy.yaml.tmpl b/pilot/docker/envoy_policy.yaml.tmpl index b132ca31c53d..631cca387ee5 100644 --- a/pilot/docker/envoy_policy.yaml.tmpl +++ b/pilot/docker/envoy_policy.yaml.tmpl @@ -69,7 +69,7 @@ static_resources: trusted_ca: filename: /etc/certs/root-cert.pem verify_subject_alt_name: - - spiffe://cluster.local/ns/{{ .PodNamespace }}/sa/istio-mixer-service-account + - {{ .MixerSubjectAltName }} {{- end }} type: STRICT_DNS listeners: diff --git a/pilot/pkg/bootstrap/server.go b/pilot/pkg/bootstrap/server.go index 2baf9413bcd9..5164685bc10f 100644 --- a/pilot/pkg/bootstrap/server.go +++ b/pilot/pkg/bootstrap/server.go @@ -74,9 +74,11 @@ import ( istiokeepalive "istio.io/istio/pkg/keepalive" kubelib "istio.io/istio/pkg/kube" "istio.io/istio/pkg/log" - mcpclient "istio.io/istio/pkg/mcp/client" + "istio.io/istio/pkg/mcp/client" "istio.io/istio/pkg/mcp/configz" "istio.io/istio/pkg/mcp/creds" + "istio.io/istio/pkg/mcp/monitoring" + "istio.io/istio/pkg/mcp/sink" "istio.io/istio/pkg/version" ) @@ -214,6 +216,9 @@ func NewServer(args PilotArgs) (*Server, error) { if args.Namespace == "" { args.Namespace = os.Getenv("POD_NAMESPACE") } + if args.KeepaliveOptions == nil { + args.KeepaliveOptions = istiokeepalive.DefaultOption() + } if args.Config.ClusterRegistriesNamespace == "" { if args.Namespace != "" { args.Config.ClusterRegistriesNamespace = args.Namespace @@ -230,31 +235,31 @@ func NewServer(args PilotArgs) (*Server, error) { // Apply the arguments to the configuration. 
if err := s.initKubeClient(&args); err != nil { - return nil, err + return nil, fmt.Errorf("kube client: %v", err) } if err := s.initMesh(&args); err != nil { - return nil, err + return nil, fmt.Errorf("mesh: %v", err) } if err := s.initMeshNetworks(&args); err != nil { - return nil, err + return nil, fmt.Errorf("mesh networks: %v", err) } if err := s.initMixerSan(&args); err != nil { - return nil, err + return nil, fmt.Errorf("mixer san: %v", err) } if err := s.initConfigController(&args); err != nil { - return nil, err + return nil, fmt.Errorf("config controller: %v", err) } if err := s.initServiceControllers(&args); err != nil { - return nil, err + return nil, fmt.Errorf("service controllers: %v", err) } if err := s.initDiscoveryService(&args); err != nil { - return nil, err + return nil, fmt.Errorf("discovery service: %v", err) } if err := s.initMonitor(&args); err != nil { - return nil, err + return nil, fmt.Errorf("monitor: %v", err) } if err := s.initClusterRegistries(&args); err != nil { - return nil, err + return nil, fmt.Errorf("cluster registries: %v", err) } if args.CtrlZOptions != nil { @@ -385,6 +390,7 @@ func (s *Server) initMesh(args *PilotArgs) error { log.Infof("mesh configuration sources have changed") //TODO Need to re-create or reload initConfigController() } + s.mesh = mesh s.EnvoyXdsServer.ConfigUpdate(true) } }) @@ -454,10 +460,10 @@ func (s *Server) initMeshNetworks(args *PilotArgs) error { // initMixerSan configures the mixerSAN configuration item. The mesh must already have been configured. 
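The `NewServer` hunk above changes each bare `return nil, err` into `fmt.Errorf("<phase>: %v", err)` so a startup failure names the init phase that produced it. A minimal sketch of that pattern, using a hypothetical `initMesh` stand-in rather than the real initializer:

```go
package main

import (
	"errors"
	"fmt"
)

// initMesh is a placeholder for one of the server's init steps; it fails
// here so the wrapping behavior is visible.
func initMesh() error { return errors.New("missing mesh config") }

// newServer wraps each init error with a short phase prefix, mirroring
// the pattern in the patch ("mesh: ...", "config controller: ...", etc.).
func newServer() error {
	if err := initMesh(); err != nil {
		return fmt.Errorf("mesh: %v", err)
	}
	return nil
}

func main() {
	fmt.Println(newServer()) // prints "mesh: missing mesh config"
}
```

The prefix costs one `fmt.Errorf` per call site but turns an opaque bootstrap error into one that localizes the failing subsystem without a stack trace.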
func (s *Server) initMixerSan(args *PilotArgs) error { if s.mesh == nil { - return fmt.Errorf("the mesh has not been configured before configuring mixer san") + return fmt.Errorf("the mesh has not been configured before configuring mixer spiffe") } if s.mesh.DefaultConfig.ControlPlaneAuthPolicy == meshconfig.AuthenticationPolicy_MUTUAL_TLS { - s.mixerSAN = envoy.GetMixerSAN(args.Config.ControllerOptions.DomainSuffix, args.Namespace) + s.mixerSAN = []string{envoy.GetMixerSAN(args.Namespace)} } return nil } @@ -494,20 +500,27 @@ func (c *mockController) Run(<-chan struct{}) {} func (s *Server) initMCPConfigController(args *PilotArgs) error { clientNodeID := "" - supportedTypes := make([]string, len(model.IstioConfigTypes)) + collections := make([]sink.CollectionOptions, len(model.IstioConfigTypes)) for i, model := range model.IstioConfigTypes { - supportedTypes[i] = fmt.Sprintf("type.googleapis.com/%s", model.MessageName) + collections[i] = sink.CollectionOptions{ + Name: model.Collection, + } } options := coredatamodel.Options{ DomainSuffix: args.Config.ControllerOptions.DomainSuffix, + ClearDiscoveryServerCache: func() { + s.EnvoyXdsServer.ConfigUpdate(true) + }, } ctx, cancel := context.WithCancel(context.Background()) - var clients []*mcpclient.Client + var clients []*client.Client var conns []*grpc.ClientConn var configStores []model.ConfigStoreCache + reporter := monitoring.NewStatsContext("pilot/mcp/sink") + for _, configSource := range s.mesh.ConfigSources { url, err := url.Parse(configSource.Address) if err != nil { @@ -600,7 +613,13 @@ func (s *Server) initMCPConfigController(args *PilotArgs) error { } cl := mcpapi.NewAggregatedMeshConfigServiceClient(conn) mcpController := coredatamodel.NewController(options) - mcpClient := mcpclient.New(cl, supportedTypes, mcpController, clientNodeID, map[string]string{}, mcpclient.NewStatsContext("pilot")) + sinkOptions := &sink.Options{ + CollectionOptions: collections, + Updater: mcpController, + ID: clientNodeID, + 
Reporter: reporter, + } + mcpClient := client.New(cl, sinkOptions) configz.Register(mcpClient) clients = append(clients, mcpClient) @@ -659,7 +678,13 @@ func (s *Server) initMCPConfigController(args *PilotArgs) error { } cl := mcpapi.NewAggregatedMeshConfigServiceClient(conn) mcpController := coredatamodel.NewController(options) - mcpClient := mcpclient.New(cl, supportedTypes, mcpController, clientNodeID, map[string]string{}, mcpclient.NewStatsContext("pilot")) + sinkOptions := &sink.Options{ + CollectionOptions: collections, + Updater: mcpController, + ID: clientNodeID, + Reporter: reporter, + } + mcpClient := client.New(cl, sinkOptions) configz.Register(mcpClient) clients = append(clients, mcpClient) @@ -692,6 +717,8 @@ func (s *Server) initMCPConfigController(args *PilotArgs) error { for _, conn := range conns { _ = conn.Close() // nolint: errcheck } + + reporter.Close() }() return nil @@ -850,6 +877,8 @@ func (s *Server) initServiceControllers(args *PilotArgs) error { if err := s.initConsulRegistry(serviceControllers, args); err != nil { return err } + case serviceregistry.MCPRegistry: + log.Infof("no-op: get service info from MCP ServiceEntries.") default: return fmt.Errorf("service registry %s is not supported", r) } @@ -997,7 +1026,7 @@ func (s *Server) initDiscoveryService(args *PilotArgs) error { if args.DiscoveryOptions.SecureGrpcAddr != "" { // create secure grpc server if err := s.initSecureGrpcServer(args.KeepaliveOptions); err != nil { - return err + return fmt.Errorf("secure grpc server: %s", err) } // create secure grpc listener secureGrpcListener, err := net.Listen("tcp", args.DiscoveryOptions.SecureGrpcAddr) @@ -1143,8 +1172,10 @@ func (s *Server) grpcServerOptions(options *istiokeepalive.Options) []grpc.Serve grpc.UnaryInterceptor(middleware.ChainUnaryServer(interceptors...)), grpc.MaxConcurrentStreams(uint32(maxStreams)), grpc.KeepaliveParams(keepalive.ServerParameters{ - Time: options.Time, - Timeout: options.Timeout, + Time: options.Time, + 
Timeout: options.Timeout, + MaxConnectionAge: options.MaxServerConnectionAge, + MaxConnectionAgeGrace: options.MaxServerConnectionAgeGrace, }), } diff --git a/pilot/pkg/config/coredatamodel/controller.go b/pilot/pkg/config/coredatamodel/controller.go index ac51d7c3784e..b1a168318c56 100644 --- a/pilot/pkg/config/coredatamodel/controller.go +++ b/pilot/pkg/config/coredatamodel/controller.go @@ -25,7 +25,7 @@ import ( "istio.io/istio/pilot/pkg/model" "istio.io/istio/pkg/log" - mcpclient "istio.io/istio/pkg/mcp/client" + "istio.io/istio/pkg/mcp/sink" ) var errUnsupported = errors.New("this operation is not supported by mcp controller") @@ -34,12 +34,13 @@ var errUnsupported = errors.New("this operation is not supported by mcp controll // MCP Updater and ServiceDiscovery type CoreDataModel interface { model.ConfigStoreCache - mcpclient.Updater + sink.Updater } // Options stores the configurable attributes of a Control type Options struct { - DomainSuffix string + DomainSuffix string + ClearDiscoveryServerCache func() } // Controller is a temporary storage for the changes received @@ -47,10 +48,10 @@ type Options struct { type Controller struct { configStoreMu sync.RWMutex // keys [type][namespace][name] - configStore map[string]map[string]map[string]model.Config - descriptorsByMessageName map[string]model.ProtoSchema - options Options - eventHandlers map[string][]func(model.Config, model.Event) + configStore map[string]map[string]map[string]*model.Config + descriptorsByCollection map[string]model.ProtoSchema + options Options + eventHandlers map[string][]func(model.Config, model.Event) syncedMu sync.Mutex synced map[string]bool @@ -61,19 +62,19 @@ func NewController(options Options) CoreDataModel { descriptorsByMessageName := make(map[string]model.ProtoSchema, len(model.IstioConfigTypes)) synced := make(map[string]bool) for _, descriptor := range model.IstioConfigTypes { - // don't register duplicate descriptors for the same message name, e.g. 
auth policy - if _, ok := descriptorsByMessageName[descriptor.MessageName]; !ok { - descriptorsByMessageName[descriptor.MessageName] = descriptor - synced[descriptor.MessageName] = false + // don't register duplicate descriptors for the same collection + if _, ok := descriptorsByMessageName[descriptor.Collection]; !ok { + descriptorsByMessageName[descriptor.Collection] = descriptor + synced[descriptor.Collection] = false } } return &Controller{ - configStore: make(map[string]map[string]map[string]model.Config), - options: options, - descriptorsByMessageName: descriptorsByMessageName, - eventHandlers: make(map[string][]func(model.Config, model.Event)), - synced: synced, + configStore: make(map[string]map[string]map[string]*model.Config), + options: options, + descriptorsByCollection: descriptorsByMessageName, + eventHandlers: make(map[string][]func(model.Config, model.Event)), + synced: synced, } } @@ -102,38 +103,37 @@ func (c *Controller) List(typ, namespace string) (out []model.Config, err error) // we replace the entire sub-map for _, byNamespace := range byType { for _, config := range byNamespace { - out = append(out, config) + out = append(out, *config) } } return out, nil } for _, config := range byType[namespace] { - out = append(out, config) + out = append(out, *config) } return out, nil } // Apply receives changes from MCP server and creates the // corresponding config -func (c *Controller) Apply(change *mcpclient.Change) error { - messagename := extractMessagename(change.TypeURL) - descriptor, ok := c.descriptorsByMessageName[messagename] +func (c *Controller) Apply(change *sink.Change) error { + descriptor, ok := c.descriptorsByCollection[change.Collection] if !ok { - return fmt.Errorf("apply type not supported %s", messagename) + return fmt.Errorf("apply type not supported %s", change.Collection) } schema, valid := c.ConfigDescriptor().GetByType(descriptor.Type) if !valid { - return fmt.Errorf("descriptor type not supported %s", messagename) + return 
fmt.Errorf("descriptor type not supported %s", change.Collection) } c.syncedMu.Lock() - c.synced[messagename] = true + c.synced[change.Collection] = true c.syncedMu.Unlock() // innerStore is [namespace][name] - innerStore := make(map[string]map[string]model.Config) + innerStore := make(map[string]map[string]*model.Config) for _, obj := range change.Objects { namespace, name := extractNameNamespace(obj.Metadata.Name) @@ -145,24 +145,20 @@ func (c *Controller) Apply(change *mcpclient.Change) error { } } - // adjust the type name for mesh-scoped resources - typ := descriptor.Type - if namespace == "" && descriptor.Type == model.AuthenticationPolicy.Type { - typ = model.AuthenticationMeshPolicy.Type - } - - conf := model.Config{ + conf := &model.Config{ ConfigMeta: model.ConfigMeta{ - Type: typ, + Type: descriptor.Type, Group: descriptor.Group, Version: descriptor.Version, Name: name, Namespace: namespace, ResourceVersion: obj.Metadata.Version, CreationTimestamp: createTime, + Labels: obj.Metadata.Labels, + Annotations: obj.Metadata.Annotations, Domain: c.options.DomainSuffix, }, - Spec: obj.Resource, + Spec: obj.Body, } if err := schema.Validate(conf.Name, conf.Namespace, conf.Spec); err != nil { @@ -173,71 +169,23 @@ func (c *Controller) Apply(change *mcpclient.Change) error { if ok { namedConfig[conf.Name] = conf } else { - innerStore[conf.Namespace] = map[string]model.Config{ + innerStore[conf.Namespace] = map[string]*model.Config{ conf.Name: conf, } } } - // de-mux namespace and cluster-scoped authentication policy from the same - // type_url stream. 
- const clusterScopedNamespace = "" - meshTypeInnerStore := make(map[string]map[string]model.Config) - var meshType string - if descriptor.Type == model.AuthenticationPolicy.Type { - meshType = model.AuthenticationMeshPolicy.Type - meshTypeInnerStore[clusterScopedNamespace] = innerStore[clusterScopedNamespace] - delete(innerStore, clusterScopedNamespace) - } - - var prevStore map[string]map[string]model.Config + var prevStore map[string]map[string]*model.Config c.configStoreMu.Lock() prevStore = c.configStore[descriptor.Type] c.configStore[descriptor.Type] = innerStore - if meshType != "" { - c.configStore[meshType] = meshTypeInnerStore - } c.configStoreMu.Unlock() - dispatch := func(model model.Config, event model.Event) {} - if handlers, ok := c.eventHandlers[descriptor.Type]; ok { - dispatch = func(model model.Config, event model.Event) { - log.Debugf("MCP event dispatch: key=%v event=%v", model.Key(), event.String()) - for _, handler := range handlers { - handler(model, event) - } - } - } - - // add/update - for namespace, byName := range innerStore { - for name, config := range byName { - if prevByNamespace, ok := prevStore[namespace]; ok { - if prevConfig, ok := prevByNamespace[name]; ok { - if config.ResourceVersion != prevConfig.ResourceVersion { - dispatch(config, model.EventUpdate) - } - } else { - dispatch(config, model.EventAdd) - } - } else { - dispatch(config, model.EventAdd) - } - } - } - - // remove - for namespace, prevByName := range prevStore { - for name, prevConfig := range prevByName { - if byNamespace, ok := innerStore[namespace]; ok { - if _, ok := byNamespace[name]; !ok { - dispatch(prevConfig, model.EventDelete) - } - } else { - dispatch(prevConfig, model.EventDelete) - } - } + if descriptor.Type == model.ServiceEntry.Type { + c.serviceEntryEvents(innerStore, prevStore) + } else { + c.options.ClearDiscoveryServerCache() } return nil @@ -262,16 +210,16 @@ func (c *Controller) HasSynced() bool { return true } +// RegisterEventHandler 
registers a handler using the type as a key +func (c *Controller) RegisterEventHandler(typ string, handler func(model.Config, model.Event)) { + c.eventHandlers[typ] = append(c.eventHandlers[typ], handler) +} + // Run is not implemented func (c *Controller) Run(stop <-chan struct{}) { log.Warnf("Run: %s", errUnsupported) } -// RegisterEventHandler is not implemented -func (c *Controller) RegisterEventHandler(typ string, handler func(model.Config, model.Event)) { - c.eventHandlers[typ] = append(c.eventHandlers[typ], handler) -} - // Get is not implemented func (c *Controller) Get(typ, name, namespace string) *model.Config { log.Warnf("get %s", errUnsupported) @@ -295,6 +243,48 @@ func (c *Controller) Delete(typ, name, namespace string) error { return errUnsupported } +func (c *Controller) serviceEntryEvents(currentStore, prevStore map[string]map[string]*model.Config) { + dispatch := func(model model.Config, event model.Event) {} + if handlers, ok := c.eventHandlers[model.ServiceEntry.Type]; ok { + dispatch = func(model model.Config, event model.Event) { + log.Debugf("MCP event dispatch: key=%v event=%v", model.Key(), event.String()) + for _, handler := range handlers { + handler(model, event) + } + } + } + + // add/update + for namespace, byName := range currentStore { + for name, config := range byName { + if prevByNamespace, ok := prevStore[namespace]; ok { + if prevConfig, ok := prevByNamespace[name]; ok { + if config.ResourceVersion != prevConfig.ResourceVersion { + dispatch(*config, model.EventUpdate) + } + } else { + dispatch(*config, model.EventAdd) + } + } else { + dispatch(*config, model.EventAdd) + } + } + } + + // remove + for namespace, prevByName := range prevStore { + for name, prevConfig := range prevByName { + if byNamespace, ok := currentStore[namespace]; ok { + if _, ok := byNamespace[name]; !ok { + dispatch(*prevConfig, model.EventDelete) + } + } else { + dispatch(*prevConfig, model.EventDelete) + } + } + } +} + func 
extractNameNamespace(metadataName string) (string, string) { segments := strings.Split(metadataName, "/") if len(segments) == 2 { @@ -302,10 +292,3 @@ func extractNameNamespace(metadataName string) (string, string) { } return "", segments[0] } - -func extractMessagename(typeURL string) string { - if slash := strings.LastIndex(typeURL, "/"); slash >= 0 { - return typeURL[slash+1:] - } - return typeURL -} diff --git a/pilot/pkg/config/coredatamodel/controller_test.go b/pilot/pkg/config/coredatamodel/controller_test.go index 35408516b169..2ebe1e491c79 100644 --- a/pilot/pkg/config/coredatamodel/controller_test.go +++ b/pilot/pkg/config/coredatamodel/controller_test.go @@ -29,7 +29,7 @@ import ( networking "istio.io/api/networking/v1alpha3" "istio.io/istio/pilot/pkg/config/coredatamodel" "istio.io/istio/pilot/pkg/model" - mcpclient "istio.io/istio/pkg/mcp/client" + "istio.io/istio/pkg/mcp/sink" ) var ( @@ -87,11 +87,75 @@ var ( }, } + serviceEntry = &networking.ServiceEntry{ + Hosts: []string{"example.com"}, + Ports: []*networking.Port{ + { + Name: "http", + Number: 7878, + Protocol: "http", + }, + }, + Location: networking.ServiceEntry_MESH_INTERNAL, + Resolution: networking.ServiceEntry_STATIC, + Endpoints: []*networking.ServiceEntry_Endpoint{ + { + Address: "127.0.0.1", + Ports: map[string]uint32{ + "http": 4433, + }, + Labels: map[string]string{"label": "random-label"}, + }, + }, + } + testControllerOptions = coredatamodel.Options{ - DomainSuffix: "cluster.local", + DomainSuffix: "cluster.local", + ClearDiscoveryServerCache: func() {}, } ) +func TestOptions(t *testing.T) { + g := gomega.NewGomegaWithT(t) + var cacheCleared bool + testControllerOptions.ClearDiscoveryServerCache = func() { + cacheCleared = true + } + controller := coredatamodel.NewController(testControllerOptions) + + message := convertToResource(g, model.ServiceEntry.MessageName, []proto.Message{serviceEntry}) + change := convert( + []proto.Message{message[0]}, + []string{"service-bar"}, + 
model.ServiceEntry.Collection, + model.ServiceEntry.MessageName) + + err := controller.Apply(change) + g.Expect(err).ToNot(gomega.HaveOccurred()) + + c, err := controller.List(model.ServiceEntry.Type, "") + g.Expect(c).ToNot(gomega.BeNil()) + g.Expect(err).ToNot(gomega.HaveOccurred()) + g.Expect(c[0].Domain).To(gomega.Equal(testControllerOptions.DomainSuffix)) + g.Expect(cacheCleared).To(gomega.Equal(false)) + + message = convertToResource(g, model.Gateway.MessageName, []proto.Message{gateway}) + change = convert( + []proto.Message{message[0]}, + []string{"gateway-foo"}, + model.Gateway.Collection, + model.Gateway.MessageName) + + err = controller.Apply(change) + g.Expect(err).ToNot(gomega.HaveOccurred()) + + c, err = controller.List(model.Gateway.Type, "") + g.Expect(c).ToNot(gomega.BeNil()) + g.Expect(err).ToNot(gomega.HaveOccurred()) + g.Expect(c[0].Domain).To(gomega.Equal(testControllerOptions.DomainSuffix)) + g.Expect(cacheCleared).To(gomega.Equal(true)) +} + func TestHasSynced(t *testing.T) { t.Skip("Pending: https://github.com/istio/istio/issues/7947") g := gomega.NewGomegaWithT(t) @@ -131,12 +195,12 @@ func TestListAllNameSpace(t *testing.T) { g := gomega.NewGomegaWithT(t) controller := coredatamodel.NewController(testControllerOptions) - messages := convertToEnvelope(g, model.Gateway.MessageName, []proto.Message{gateway, gateway2, gateway3}) + messages := convertToResource(g, model.Gateway.MessageName, []proto.Message{gateway, gateway2, gateway3}) message, message2, message3 := messages[0], messages[1], messages[2] change := convert( []proto.Message{message, message2, message3}, []string{"namespace1/some-gateway1", "default/some-other-gateway", "some-other-gateway3"}, - model.Gateway.MessageName) + model.Gateway.Collection, model.Gateway.MessageName) err := controller.Apply(change) g.Expect(err).ToNot(gomega.HaveOccurred()) @@ -165,13 +229,13 @@ func TestListSpecificNameSpace(t *testing.T) { g := gomega.NewGomegaWithT(t) controller := 
coredatamodel.NewController(testControllerOptions) - messages := convertToEnvelope(g, model.Gateway.MessageName, []proto.Message{gateway, gateway2, gateway3}) + messages := convertToResource(g, model.Gateway.MessageName, []proto.Message{gateway, gateway2, gateway3}) message, message2, message3 := messages[0], messages[1], messages[2] change := convert( []proto.Message{message, message2, message3}, []string{"namespace1/some-gateway1", "default/some-other-gateway", "namespace1/some-other-gateway3"}, - model.Gateway.MessageName) + model.Gateway.Collection, model.Gateway.MessageName) err := controller.Apply(change) g.Expect(err).ToNot(gomega.HaveOccurred()) @@ -196,8 +260,9 @@ func TestApplyInvalidType(t *testing.T) { g := gomega.NewGomegaWithT(t) controller := coredatamodel.NewController(testControllerOptions) - message := convertToEnvelope(g, model.Gateway.MessageName, []proto.Message{gateway}) - change := convert([]proto.Message{message[0]}, []string{"some-gateway"}, "bad-type") + message := convertToResource(g, model.Gateway.MessageName, []proto.Message{gateway}) + change := convert([]proto.Message{message[0]}, []string{"some-gateway"}, + "bad-collection", "bad-type") err := controller.Apply(change) g.Expect(err).To(gomega.HaveOccurred()) @@ -226,7 +291,8 @@ func TestApplyValidTypeWithNoBaseURL(t *testing.T) { message, err := makeMessage(marshaledGateway, model.Gateway.MessageName) g.Expect(err).ToNot(gomega.HaveOccurred()) - change := convert([]proto.Message{message}, []string{"some-gateway"}, model.Gateway.MessageName) + change := convert([]proto.Message{message}, []string{"some-gateway"}, + model.Gateway.Collection, model.Gateway.MessageName) err = controller.Apply(change) g.Expect(err).ToNot(gomega.HaveOccurred()) @@ -246,9 +312,10 @@ func TestApplyMetadataNameIncludesNamespace(t *testing.T) { g := gomega.NewGomegaWithT(t) controller := coredatamodel.NewController(testControllerOptions) - message := convertToEnvelope(g, model.Gateway.MessageName, 
[]proto.Message{gateway}) + message := convertToResource(g, model.Gateway.MessageName, []proto.Message{gateway}) - change := convert([]proto.Message{message[0]}, []string{"istio-namespace/some-gateway"}, model.Gateway.MessageName) + change := convert([]proto.Message{message[0]}, []string{"istio-namespace/some-gateway"}, + model.Gateway.Collection, model.Gateway.MessageName) err := controller.Apply(change) g.Expect(err).ToNot(gomega.HaveOccurred()) @@ -264,9 +331,9 @@ func TestApplyMetadataNameWithoutNamespace(t *testing.T) { g := gomega.NewGomegaWithT(t) controller := coredatamodel.NewController(testControllerOptions) - message := convertToEnvelope(g, model.Gateway.MessageName, []proto.Message{gateway}) + message := convertToResource(g, model.Gateway.MessageName, []proto.Message{gateway}) - change := convert([]proto.Message{message[0]}, []string{"some-gateway"}, model.Gateway.MessageName) + change := convert([]proto.Message{message[0]}, []string{"some-gateway"}, model.Gateway.Collection, model.Gateway.MessageName) err := controller.Apply(change) g.Expect(err).ToNot(gomega.HaveOccurred()) @@ -282,8 +349,8 @@ func TestApplyChangeNoObjects(t *testing.T) { g := gomega.NewGomegaWithT(t) controller := coredatamodel.NewController(testControllerOptions) - message := convertToEnvelope(g, model.Gateway.MessageName, []proto.Message{gateway}) - change := convert([]proto.Message{message[0]}, []string{"some-gateway"}, model.Gateway.MessageName) + message := convertToResource(g, model.Gateway.MessageName, []proto.Message{gateway}) + change := convert([]proto.Message{message[0]}, []string{"some-gateway"}, model.Gateway.Collection, model.Gateway.MessageName) err := controller.Apply(change) g.Expect(err).ToNot(gomega.HaveOccurred()) @@ -294,7 +361,7 @@ func TestApplyChangeNoObjects(t *testing.T) { g.Expect(c[0].Type).To(gomega.Equal(model.Gateway.Type)) g.Expect(c[0].Spec).To(gomega.Equal(message[0])) - change = convert([]proto.Message{}, []string{"some-gateway"}, 
model.Gateway.MessageName) + change = convert([]proto.Message{}, []string{"some-gateway"}, model.Gateway.Collection, model.Gateway.MessageName) err = controller.Apply(change) g.Expect(err).ToNot(gomega.HaveOccurred()) @@ -303,63 +370,27 @@ func TestApplyChangeNoObjects(t *testing.T) { g.Expect(len(c)).To(gomega.Equal(0)) } -func convert(resources []proto.Message, names []string, responseMessageName string) *mcpclient.Change { - out := new(mcpclient.Change) - out.TypeURL = responseMessageName - for i, res := range resources { - out.Objects = append(out.Objects, - &mcpclient.Object{ - TypeURL: responseMessageName, - Metadata: &mcpapi.Metadata{ - Name: names[i], - }, - Resource: res, - }, - ) - } - return out -} - -func convertToEnvelope(g *gomega.GomegaWithT, messageName string, resources []proto.Message) (messages []proto.Message) { - for _, resource := range resources { - marshaled, err := proto.Marshal(resource) - g.Expect(err).ToNot(gomega.HaveOccurred()) - message, err := makeMessage(marshaled, messageName) - g.Expect(err).ToNot(gomega.HaveOccurred()) - messages = append(messages, message) - } - return messages -} - -func makeMessage(value []byte, responseMessageName string) (proto.Message, error) { - resource := &types.Any{ - TypeUrl: fmt.Sprintf("type.googleapis.com/%s", responseMessageName), - Value: value, - } - - var dynamicAny types.DynamicAny - err := types.UnmarshalAny(resource, &dynamicAny) - if err == nil { - return dynamicAny.Message, nil - } - - return nil, err -} - func TestApplyClusterScopedAuthPolicy(t *testing.T) { g := gomega.NewGomegaWithT(t) controller := coredatamodel.NewController(testControllerOptions) - message0 := convertToEnvelope(g, model.AuthenticationPolicy.MessageName, []proto.Message{authnPolicy0}) - message1 := convertToEnvelope(g, model.AuthenticationMeshPolicy.MessageName, []proto.Message{authnPolicy1}) + message0 := convertToResource(g, model.AuthenticationPolicy.MessageName, []proto.Message{authnPolicy0}) + message1 := 
convertToResource(g, model.AuthenticationMeshPolicy.MessageName, []proto.Message{authnPolicy1}) change := convert( - []proto.Message{message0[0], message1[0]}, - []string{"bar-namespace/foo", "default"}, - model.AuthenticationPolicy.MessageName) + []proto.Message{message0[0]}, + []string{"bar-namespace/foo"}, + model.AuthenticationPolicy.Collection, model.AuthenticationPolicy.MessageName) err := controller.Apply(change) g.Expect(err).ToNot(gomega.HaveOccurred()) + change = convert( + []proto.Message{message1[0]}, + []string{"default"}, + model.AuthenticationMeshPolicy.Collection, model.AuthenticationMeshPolicy.MessageName) + err = controller.Apply(change) + g.Expect(err).ToNot(gomega.HaveOccurred()) + c, err := controller.List(model.AuthenticationPolicy.Type, "bar-namespace") g.Expect(err).ToNot(gomega.HaveOccurred()) g.Expect(len(c)).To(gomega.Equal(1)) @@ -378,9 +409,9 @@ func TestApplyClusterScopedAuthPolicy(t *testing.T) { // verify the namespace scoped resource can be deleted change = convert( - []proto.Message{message1[0]}, - []string{"default"}, - model.AuthenticationPolicy.MessageName) + []proto.Message{}, + []string{}, + model.AuthenticationPolicy.Collection, model.AuthenticationPolicy.MessageName) err = controller.Apply(change) g.Expect(err).ToNot(gomega.HaveOccurred()) @@ -392,11 +423,11 @@ func TestApplyClusterScopedAuthPolicy(t *testing.T) { g.Expect(c[0].Type).To(gomega.Equal(model.AuthenticationMeshPolicy.Type)) g.Expect(c[0].Spec).To(gomega.Equal(message1[0])) - // verify the namespace scoped resource can be added and mesh-scoped resource removed in the same batch + // verify the namespace scoped resource can be added and mesh-scoped resource removed change = convert( []proto.Message{message0[0]}, []string{"bar-namespace/foo"}, - model.AuthenticationPolicy.MessageName) + model.AuthenticationPolicy.Collection, model.AuthenticationPolicy.MessageName) err = controller.Apply(change) g.Expect(err).ToNot(gomega.HaveOccurred()) @@ -426,6 +457,7 @@ func 
TestEventHandler(t *testing.T) { }) typeURL := "type.googleapis.com/istio.networking.v1alpha3.ServiceEntry" + collection := model.ServiceEntry.Collection fakeCreateTime, _ := time.Parse(time.RFC3339, "2006-01-02T15:04:05Z") fakeCreateTimeProto, err := types.TimestampProto(fakeCreateTime) @@ -433,15 +465,17 @@ func TestEventHandler(t *testing.T) { t.Fatalf("Failed to parse create fake create time %v: %v", fakeCreateTime, err) } - makeServiceEntry := func(name, host, version string) *mcpclient.Object { - return &mcpclient.Object{ + makeServiceEntry := func(name, host, version string) *sink.Object { + return &sink.Object{ TypeURL: typeURL, Metadata: &mcpapi.Metadata{ - Name: fmt.Sprintf("default/%s", name), - CreateTime: fakeCreateTimeProto, - Version: version, + Name: fmt.Sprintf("default/%s", name), + CreateTime: fakeCreateTimeProto, + Version: version, + Labels: map[string]string{"lk1": "lv1"}, + Annotations: map[string]string{"ak1": "av1"}, }, - Resource: &networking.ServiceEntry{ + Body: &networking.ServiceEntry{ Hosts: []string{host}, }, } @@ -458,6 +492,8 @@ func TestEventHandler(t *testing.T) { Domain: "cluster.local", ResourceVersion: version, CreationTimestamp: fakeCreateTime, + Labels: map[string]string{"lk1": "lv1"}, + Annotations: map[string]string{"ak1": "av1"}, }, Spec: &networking.ServiceEntry{Hosts: []string{host}}, } @@ -466,14 +502,14 @@ func TestEventHandler(t *testing.T) { // Note: these tests steps are cumulative steps := []struct { name string - change *mcpclient.Change + change *sink.Change want map[model.Event]map[string]model.Config }{ { name: "initial add", - change: &mcpclient.Change{ - TypeURL: typeURL, - Objects: []*mcpclient.Object{ + change: &sink.Change{ + Collection: collection, + Objects: []*sink.Object{ makeServiceEntry("foo", "foo.com", "v0"), }, }, @@ -485,9 +521,9 @@ func TestEventHandler(t *testing.T) { }, { name: "update initial item", - change: &mcpclient.Change{ - TypeURL: typeURL, - Objects: []*mcpclient.Object{ + change: 
&sink.Change{ + Collection: collection, + Objects: []*sink.Object{ makeServiceEntry("foo", "foo.com", "v1"), }, }, @@ -499,9 +535,9 @@ func TestEventHandler(t *testing.T) { }, { name: "subsequent add", - change: &mcpclient.Change{ - TypeURL: typeURL, - Objects: []*mcpclient.Object{ + change: &sink.Change{ + Collection: collection, + Objects: []*sink.Object{ makeServiceEntry("foo", "foo.com", "v1"), makeServiceEntry("foo1", "foo1.com", "v0"), }, @@ -514,9 +550,9 @@ func TestEventHandler(t *testing.T) { }, { name: "single delete", - change: &mcpclient.Change{ - TypeURL: typeURL, - Objects: []*mcpclient.Object{ + change: &sink.Change{ + Collection: collection, + Objects: []*sink.Object{ makeServiceEntry("foo1", "foo1.com", "v0"), }, }, @@ -528,9 +564,9 @@ func TestEventHandler(t *testing.T) { }, { name: "multiple update and add", - change: &mcpclient.Change{ - TypeURL: typeURL, - Objects: []*mcpclient.Object{ + change: &sink.Change{ + Collection: collection, + Objects: []*sink.Object{ makeServiceEntry("foo1", "foo1.com", "v1"), makeServiceEntry("foo2", "foo2.com", "v0"), makeServiceEntry("foo3", "foo3.com", "v0"), @@ -548,9 +584,9 @@ func TestEventHandler(t *testing.T) { }, { name: "multiple deletes, updates, and adds ", - change: &mcpclient.Change{ - TypeURL: typeURL, - Objects: []*mcpclient.Object{ + change: &sink.Change{ + Collection: collection, + Objects: []*sink.Object{ makeServiceEntry("foo2", "foo2.com", "v1"), makeServiceEntry("foo3", "foo3.com", "v0"), makeServiceEntry("foo4", "foo4.com", "v0"), @@ -593,3 +629,46 @@ func TestEventHandler(t *testing.T) { }) } } + +func convert(resources []proto.Message, names []string, collection, responseMessageName string) *sink.Change { + out := new(sink.Change) + out.Collection = collection + for i, res := range resources { + out.Objects = append(out.Objects, + &sink.Object{ + TypeURL: responseMessageName, + Metadata: &mcpapi.Metadata{ + Name: names[i], + }, + Body: res, + }, + ) + } + return out +} + +func 
convertToResource(g *gomega.GomegaWithT, messageName string, resources []proto.Message) (messages []proto.Message) { + for _, resource := range resources { + marshaled, err := proto.Marshal(resource) + g.Expect(err).ToNot(gomega.HaveOccurred()) + message, err := makeMessage(marshaled, messageName) + g.Expect(err).ToNot(gomega.HaveOccurred()) + messages = append(messages, message) + } + return messages +} + +func makeMessage(value []byte, responseMessageName string) (proto.Message, error) { + resource := &types.Any{ + TypeUrl: fmt.Sprintf("type.googleapis.com/%s", responseMessageName), + Value: value, + } + + var dynamicAny types.DynamicAny + err := types.UnmarshalAny(resource, &dynamicAny) + if err == nil { + return dynamicAny.Message, nil + } + + return nil, err +} diff --git a/pilot/pkg/config/kube/crd/types.go b/pilot/pkg/config/kube/crd/types.go index 8693a3599336..5cbdad77d9da 100644 --- a/pilot/pkg/config/kube/crd/types.go +++ b/pilot/pkg/config/kube/crd/types.go @@ -95,6 +95,16 @@ var knownTypes = map[string]schemaType{ }, collection: &EnvoyFilterList{}, }, + model.Sidecar.Type: { + schema: model.Sidecar, + object: &Sidecar{ + TypeMeta: meta_v1.TypeMeta{ + Kind: "Sidecar", + APIVersion: apiVersion(&model.Sidecar), + }, + }, + collection: &SidecarList{}, + }, model.HTTPAPISpec.Type: { schema: model.HTTPAPISpec, object: &HTTPAPISpec{ @@ -815,6 +825,109 @@ func (in *EnvoyFilterList) DeepCopyObject() runtime.Object { return nil } +// Sidecar is the generic Kubernetes API object wrapper +type Sidecar struct { + meta_v1.TypeMeta `json:",inline"` + meta_v1.ObjectMeta `json:"metadata"` + Spec map[string]interface{} `json:"spec"` +} + +// GetSpec from a wrapper +func (in *Sidecar) GetSpec() map[string]interface{} { + return in.Spec +} + +// SetSpec for a wrapper +func (in *Sidecar) SetSpec(spec map[string]interface{}) { + in.Spec = spec +} + +// GetObjectMeta from a wrapper +func (in *Sidecar) GetObjectMeta() meta_v1.ObjectMeta { + return in.ObjectMeta +} + +// 
SetObjectMeta for a wrapper +func (in *Sidecar) SetObjectMeta(metadata meta_v1.ObjectMeta) { + in.ObjectMeta = metadata +} + +// SidecarList is the generic Kubernetes API list wrapper +type SidecarList struct { + meta_v1.TypeMeta `json:",inline"` + meta_v1.ListMeta `json:"metadata"` + Items []Sidecar `json:"items"` +} + +// GetItems from a wrapper +func (in *SidecarList) GetItems() []IstioObject { + out := make([]IstioObject, len(in.Items)) + for i := range in.Items { + out[i] = &in.Items[i] + } + return out +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *Sidecar) DeepCopyInto(out *Sidecar) { + *out = *in + out.TypeMeta = in.TypeMeta + in.ObjectMeta.DeepCopyInto(&out.ObjectMeta) + out.Spec = in.Spec +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Sidecar. +func (in *Sidecar) DeepCopy() *Sidecar { + if in == nil { + return nil + } + out := new(Sidecar) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *Sidecar) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + + return nil +} + +// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil. +func (in *SidecarList) DeepCopyInto(out *SidecarList) { + *out = *in + out.TypeMeta = in.TypeMeta + out.ListMeta = in.ListMeta + if in.Items != nil { + in, out := &in.Items, &out.Items + *out = make([]Sidecar, len(*in)) + for i := range *in { + (*in)[i].DeepCopyInto(&(*out)[i]) + } + } +} + +// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SidecarList. 
+func (in *SidecarList) DeepCopy() *SidecarList { + if in == nil { + return nil + } + out := new(SidecarList) + in.DeepCopyInto(out) + return out +} + +// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object. +func (in *SidecarList) DeepCopyObject() runtime.Object { + if c := in.DeepCopy(); c != nil { + return c + } + + return nil +} + // HTTPAPISpec is the generic Kubernetes API object wrapper type HTTPAPISpec struct { meta_v1.TypeMeta `json:",inline"` diff --git a/pilot/pkg/kube/inject/testdata/webhook/daemonset.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/daemonset.yaml.injected index 4355b0af664a..9065254fb4e4 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/daemonset.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/daemonset.yaml.injected @@ -25,6 +25,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/deploymentconfig-multi.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/deploymentconfig-multi.yaml.injected index 536234cb6618..432df4ba80fb 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/deploymentconfig-multi.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/deploymentconfig-multi.yaml.injected @@ -28,6 +28,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/deploymentconfig.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/deploymentconfig.yaml.injected index e10283c9699c..7f920e0df79f 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/deploymentconfig.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/deploymentconfig.yaml.injected @@ -28,6 +28,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - 
--binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/frontend.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/frontend.yaml.injected index f113f931ec9e..fa6767257e9c 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/frontend.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/frontend.yaml.injected @@ -30,6 +30,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/hello-config-map-name.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/hello-config-map-name.yaml.injected index 99f4a9acced9..e2404a4519d2 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/hello-config-map-name.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/hello-config-map-name.yaml.injected @@ -26,6 +26,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/hello-multi.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/hello-multi.yaml.injected index c32d5e6b132e..f543ffb1f550 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/hello-multi.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/hello-multi.yaml.injected @@ -27,6 +27,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath @@ -170,6 +172,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/hello-probes.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/hello-probes.yaml.injected index 421741371107..48c58f2d7150 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/hello-probes.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/hello-probes.yaml.injected @@ -46,6 +46,8 @@ spec: - args: - proxy 
- sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/job.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/job.yaml.injected index 62b3a2822627..213cfd96fe38 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/job.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/job.yaml.injected @@ -24,6 +24,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/list-frontend.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/list-frontend.yaml.injected index f113f931ec9e..fa6767257e9c 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/list-frontend.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/list-frontend.yaml.injected @@ -30,6 +30,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/list.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/list.yaml.injected index c32d5e6b132e..f543ffb1f550 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/list.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/list.yaml.injected @@ -27,6 +27,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath @@ -170,6 +172,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/multi-init.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/multi-init.yaml.injected index 5a0d0482e794..324783d5f7f8 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/multi-init.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/multi-init.yaml.injected @@ -26,6 +26,8 @@ spec: - 
args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/replicaset.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/replicaset.yaml.injected index b70ce38be244..3fc1807b172a 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/replicaset.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/replicaset.yaml.injected @@ -24,6 +24,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/replicationcontroller.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/replicationcontroller.yaml.injected index 8599c3c25eaa..b52130cb5bc6 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/replicationcontroller.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/replicationcontroller.yaml.injected @@ -25,6 +25,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/resource_annotations.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/resource_annotations.yaml.injected index e6f3eef83d00..7990e373726d 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/resource_annotations.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/resource_annotations.yaml.injected @@ -26,6 +26,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/statefulset.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/statefulset.yaml.injected index 6bbe30557539..ae713a81b320 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/statefulset.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/statefulset.yaml.injected @@ -29,6 +29,8 @@ spec: - args: - proxy - 
sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/status_annotations.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/status_annotations.yaml.injected index 4e97f00c9346..366fc7f3cf0a 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/status_annotations.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/status_annotations.yaml.injected @@ -29,6 +29,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations-empty-includes.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations-empty-includes.yaml.injected index fcb85b7a2885..b8e65479034c 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations-empty-includes.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations-empty-includes.yaml.injected @@ -28,6 +28,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations-wildcards.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations-wildcards.yaml.injected index 89c9c0010762..497ceb698ab4 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations-wildcards.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations-wildcards.yaml.injected @@ -28,6 +28,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations.yaml.injected index e29c2cdb8799..b5f796b0711e 100644 --- 
a/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/traffic-annotations.yaml.injected @@ -28,6 +28,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/kube/inject/testdata/webhook/user-volume.yaml.injected b/pilot/pkg/kube/inject/testdata/webhook/user-volume.yaml.injected index 25f199b6bb77..6ae0262aec0d 100644 --- a/pilot/pkg/kube/inject/testdata/webhook/user-volume.yaml.injected +++ b/pilot/pkg/kube/inject/testdata/webhook/user-volume.yaml.injected @@ -28,6 +28,8 @@ spec: - args: - proxy - sidecar + - --domain + - $(POD_NAMESPACE).svc.cluster.local - --configPath - /etc/istio/proxy - --binaryPath diff --git a/pilot/pkg/model/config.go b/pilot/pkg/model/config.go index 44b95400de0e..fe7214a6f26f 100644 --- a/pilot/pkg/model/config.go +++ b/pilot/pkg/model/config.go @@ -26,6 +26,7 @@ import ( authn "istio.io/api/authentication/v1alpha1" mccpb "istio.io/api/mixer/v1/config/client" networking "istio.io/api/networking/v1alpha3" + "istio.io/istio/galley/pkg/metadata" "istio.io/istio/pilot/pkg/model/test" ) @@ -217,6 +218,9 @@ type ProtoSchema struct { // Validate configuration as a protobuf message assuming the object is an // instance of the expected message type Validate func(name, namespace string, config proto.Message) error + + // MCP collection for this configuration resource schema + Collection string } // Types lists all known types in the config schema @@ -337,6 +341,7 @@ var ( Version: "v1alpha3", MessageName: "istio.networking.v1alpha3.VirtualService", Validate: ValidateVirtualService, + Collection: metadata.VirtualService.Collection.String(), } // Gateway describes a gateway (how a proxy is exposed on the network) @@ -347,6 +352,7 @@ var ( Version: "v1alpha3", MessageName: "istio.networking.v1alpha3.Gateway", Validate: ValidateGateway, + Collection: 
metadata.Gateway.Collection.String(), } // ServiceEntry describes service entries @@ -357,6 +363,7 @@ var ( Version: "v1alpha3", MessageName: "istio.networking.v1alpha3.ServiceEntry", Validate: ValidateServiceEntry, + Collection: metadata.ServiceEntry.Collection.String(), } // DestinationRule describes destination rules @@ -367,6 +374,7 @@ var ( Version: "v1alpha3", MessageName: "istio.networking.v1alpha3.DestinationRule", Validate: ValidateDestinationRule, + Collection: metadata.DestinationRule.Collection.String(), } // EnvoyFilter describes additional envoy filters to be inserted by Pilot @@ -377,6 +385,18 @@ var ( Version: "v1alpha3", MessageName: "istio.networking.v1alpha3.EnvoyFilter", Validate: ValidateEnvoyFilter, + Collection: metadata.EnvoyFilter.Collection.String(), + } + + // Sidecar describes the listeners associated with sidecars in a namespace + Sidecar = ProtoSchema{ + Type: "sidecar", + Plural: "sidecars", + Group: "networking", + Version: "v1alpha3", + MessageName: "istio.networking.v1alpha3.Sidecar", + Validate: ValidateSidecar, + Collection: metadata.Sidecar.Collection.String(), } // HTTPAPISpec describes an HTTP API specification. @@ -387,6 +407,7 @@ var ( Version: istioAPIVersion, MessageName: "istio.mixer.v1.config.client.HTTPAPISpec", Validate: ValidateHTTPAPISpec, + Collection: metadata.HTTPAPISpec.Collection.String(), } // HTTPAPISpecBinding describes an HTTP API specification binding. @@ -397,6 +418,7 @@ var ( Version: istioAPIVersion, MessageName: "istio.mixer.v1.config.client.HTTPAPISpecBinding", Validate: ValidateHTTPAPISpecBinding, + Collection: metadata.HTTPAPISpecBinding.Collection.String(), } // QuotaSpec describes an Quota specification. @@ -407,6 +429,7 @@ var ( Version: istioAPIVersion, MessageName: "istio.mixer.v1.config.client.QuotaSpec", Validate: ValidateQuotaSpec, + Collection: metadata.QuotaSpec.Collection.String(), } // QuotaSpecBinding describes an Quota specification binding. 
@@ -417,6 +440,7 @@ var ( Version: istioAPIVersion, MessageName: "istio.mixer.v1.config.client.QuotaSpecBinding", Validate: ValidateQuotaSpecBinding, + Collection: metadata.QuotaSpecBinding.Collection.String(), } // AuthenticationPolicy describes an authentication policy. @@ -427,6 +451,7 @@ var ( Version: "v1alpha1", MessageName: "istio.authentication.v1alpha1.Policy", Validate: ValidateAuthenticationPolicy, + Collection: metadata.Policy.Collection.String(), } // AuthenticationMeshPolicy describes an authentication policy at mesh level. @@ -438,6 +463,7 @@ var ( Version: "v1alpha1", MessageName: "istio.authentication.v1alpha1.Policy", Validate: ValidateAuthenticationPolicy, + Collection: metadata.MeshPolicy.Collection.String(), } // ServiceRole describes an RBAC service role. @@ -448,6 +474,7 @@ var ( Version: "v1alpha1", MessageName: "istio.rbac.v1alpha1.ServiceRole", Validate: ValidateServiceRole, + Collection: metadata.ServiceRole.Collection.String(), } // ServiceRoleBinding describes an RBAC service role. @@ -459,6 +486,7 @@ var ( Version: "v1alpha1", MessageName: "istio.rbac.v1alpha1.ServiceRoleBinding", Validate: ValidateServiceRoleBinding, + Collection: metadata.ServiceRoleBinding.Collection.String(), } // RbacConfig describes the mesh level RBAC config. @@ -471,6 +499,7 @@ var ( Version: "v1alpha1", MessageName: "istio.rbac.v1alpha1.RbacConfig", Validate: ValidateRbacConfig, + Collection: metadata.RbacConfig.Collection.String(), } // ClusterRbacConfig describes the cluster level RBAC config. 
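Reviewer note: the hunks above attach an MCP `Collection` string to each `ProtoSchema`, giving every Pilot config type a stable collection identifier. As a rough sketch of why that mapping is useful — the struct and collection strings below are illustrative stand-ins, not the actual galley metadata values — a consumer can index schemas by collection to resolve incoming MCP resources back to a config type:

```go
package main

import "fmt"

// ProtoSchema is a trimmed-down stand-in for the Pilot descriptor entry in
// this PR; only the fields relevant to the new Collection mapping are kept.
type ProtoSchema struct {
	Type       string
	Collection string // hypothetical MCP collection name
}

// schemaByCollection indexes schemas by their MCP collection so an incoming
// collection name can be resolved back to a config type.
func schemaByCollection(schemas []ProtoSchema) map[string]ProtoSchema {
	out := make(map[string]ProtoSchema, len(schemas))
	for _, s := range schemas {
		out[s.Collection] = s
	}
	return out
}

func main() {
	// Illustrative entries; real collection strings come from the metadata package.
	schemas := []ProtoSchema{
		{Type: "virtual-service", Collection: "istio/networking/v1alpha3/virtualservices"},
		{Type: "sidecar", Collection: "istio/networking/v1alpha3/sidecars"},
	}
	idx := schemaByCollection(schemas)
	fmt.Println(idx["istio/networking/v1alpha3/sidecars"].Type)
}
```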
@@ -482,6 +511,7 @@ var ( Version: "v1alpha1", MessageName: "istio.rbac.v1alpha1.RbacConfig", Validate: ValidateClusterRbacConfig, + Collection: metadata.ClusterRbacConfig.Collection.String(), } // IstioConfigTypes lists all Istio config types with schemas and validation @@ -491,6 +521,7 @@ var ( ServiceEntry, DestinationRule, EnvoyFilter, + Sidecar, HTTPAPISpec, HTTPAPISpecBinding, QuotaSpec, diff --git a/pilot/pkg/model/context.go b/pilot/pkg/model/context.go index 09060aecd931..2f0fafa5a0b4 100644 --- a/pilot/pkg/model/context.go +++ b/pilot/pkg/model/context.go @@ -19,7 +19,6 @@ import ( "net" "strconv" "strings" - "sync" "time" "github.com/gogo/protobuf/types" @@ -102,25 +101,16 @@ type Proxy struct { // Metadata key-value pairs extending the Node identifier Metadata map[string]string - // mutex control access to mutable fields in the Proxy. On-demand will modify the - // list of services based on calls from envoy. - mutex sync.RWMutex - - // serviceDependencies, if set, controls the list of outbound listeners and routes - // for which the proxy will receive configurations. If nil, the proxy will get config - // for all visible services. - // The list will be populated either from explicit declarations or using 'on-demand' - // feature, before generation takes place. Each node may have a different list, based on - // the requests handled by envoy. 
- serviceDependencies []*Service + // the sidecarScope associated with the proxy + SidecarScope *SidecarScope } // NodeType decides the responsibility of the proxy serves in the mesh type NodeType string const ( - // Sidecar type is used for sidecar proxies in the application containers - Sidecar NodeType = "sidecar" + // SidecarProxy type is used for sidecar proxies in the application containers + SidecarProxy NodeType = "sidecar" // Ingress type is used for cluster ingress proxies Ingress NodeType = "ingress" @@ -132,7 +122,7 @@ const ( // IsApplicationNodeType verifies that the NodeType is one of the declared constants in the model func IsApplicationNodeType(nType NodeType) bool { switch nType { - case Sidecar, Ingress, Router: + case SidecarProxy, Ingress, Router: return true default: return false @@ -240,7 +230,7 @@ func ParseServiceNodeWithMetadata(s string, metadata map[string]string) (*Proxy, out.Type = NodeType(parts[0]) switch out.Type { - case Sidecar, Ingress, Router: + case SidecarProxy, Ingress, Router: default: return out, fmt.Errorf("invalid node type (valid types: ingress, sidecar, router in the service node %q", s) } @@ -260,7 +250,7 @@ func ParseServiceNodeWithMetadata(s string, metadata map[string]string) (*Proxy, } // Does query from ingress or router have to carry valid IP address? - if len(out.IPAddresses) == 0 && out.Type == Sidecar { + if len(out.IPAddresses) == 0 && out.Type == SidecarProxy { return out, fmt.Errorf("no valid IP address in the service node id or metadata") } @@ -277,10 +267,18 @@ func GetProxyConfigNamespace(proxy *Proxy) string { } // First look for ISTIO_META_CONFIG_NAMESPACE - if configNamespace, found := proxy.Metadata["CONFIG_NAMESPACE"]; found { + // All newer proxies (from Istio 1.1 onwards) are supposed to supply this + if configNamespace, found := proxy.Metadata[NodeConfigNamespace]; found { return configNamespace } + // if not found, for backward compatibility, extract the namespace from + // the proxy domain. 
this is a k8s specific hack and should be enabled + parts := strings.Split(proxy.DNSDomain, ".") + if len(parts) > 1 { // k8s will have namespace. + return parts[0] + } + return "" } @@ -354,21 +352,23 @@ func DefaultProxyConfig() meshconfig.ProxyConfig { func DefaultMeshConfig() meshconfig.MeshConfig { config := DefaultProxyConfig() return meshconfig.MeshConfig{ - MixerCheckServer: "", - MixerReportServer: "", - DisablePolicyChecks: false, - PolicyCheckFailOpen: false, - ProxyListenPort: 15001, - ConnectTimeout: types.DurationProto(1 * time.Second), - IngressClass: "istio", - IngressControllerMode: meshconfig.MeshConfig_STRICT, - EnableTracing: true, - AccessLogFile: "/dev/stdout", - AccessLogEncoding: meshconfig.MeshConfig_TEXT, - DefaultConfig: &config, - SdsUdsPath: "", - EnableSdsTokenMount: false, - TrustDomain: "", + MixerCheckServer: "", + MixerReportServer: "", + DisablePolicyChecks: false, + PolicyCheckFailOpen: false, + SidecarToTelemetrySessionAffinity: false, + ProxyListenPort: 15001, + ConnectTimeout: types.DurationProto(1 * time.Second), + IngressClass: "istio", + IngressControllerMode: meshconfig.MeshConfig_STRICT, + EnableTracing: true, + AccessLogFile: "/dev/stdout", + AccessLogEncoding: meshconfig.MeshConfig_TEXT, + DefaultConfig: &config, + SdsUdsPath: "", + EnableSdsTokenMount: false, + TrustDomain: "", + OutboundTrafficPolicy: &meshconfig.MeshConfig_OutboundTrafficPolicy{Mode: meshconfig.MeshConfig_OutboundTrafficPolicy_REGISTRY_ONLY}, } } @@ -455,3 +455,57 @@ func parseIPAddresses(s string) ([]string, error) { func isValidIPAddress(ip string) bool { return net.ParseIP(ip) != nil } + +// Pile all node metadata constants here +const ( + + // NodeMetadataNetwork defines the network the node belongs to. It is an optional metadata, + // set at injection time. When set, the Endpoints returned to a node and not on the same network + // will be replaced with the gateway defined in the settings.
+ NodeMetadataNetwork = "NETWORK" + + // NodeMetadataInterceptionMode is the name of the metadata variable that carries info about + // traffic interception mode at the proxy + NodeMetadataInterceptionMode = "INTERCEPTION_MODE" + + // NodeConfigNamespace is the name of the metadata variable that carries info about + // the config namespace associated with the proxy + NodeConfigNamespace = "CONFIG_NAMESPACE" +) + +// TrafficInterceptionMode indicates how traffic to/from the workload is captured and +// sent to Envoy. This should not be confused with the CaptureMode in the API that indicates +// how the user wants traffic to be intercepted for the listener. TrafficInterceptionMode is +// always derived from the Proxy metadata +type TrafficInterceptionMode string + +const ( + // InterceptionNone indicates that the workload is not using IPtables for traffic interception + InterceptionNone TrafficInterceptionMode = "NONE" + + // InterceptionTproxy implies traffic intercepted by IPtables with TPROXY mode + InterceptionTproxy TrafficInterceptionMode = "TPROXY" + + // InterceptionRedirect implies traffic intercepted by IPtables with REDIRECT mode + // This is our default mode + InterceptionRedirect TrafficInterceptionMode = "REDIRECT" +) + +// GetInterceptionMode extracts the interception mode associated with the proxy +// from the proxy metadata +func (node *Proxy) GetInterceptionMode() TrafficInterceptionMode { + if node == nil { + return InterceptionRedirect + } + + switch node.Metadata[NodeMetadataInterceptionMode] { + case "TPROXY": + return InterceptionTproxy + case "REDIRECT": + return InterceptionRedirect + case "NONE": + return InterceptionNone + } + + return InterceptionRedirect +} diff --git a/pilot/pkg/model/context_test.go b/pilot/pkg/model/context_test.go index d7eae6109133..4595db0a0b2f 100644 --- a/pilot/pkg/model/context_test.go +++ b/pilot/pkg/model/context_test.go @@ -44,7 +44,7 @@ func TestServiceNode(t *testing.T) { }, { in: &model.Proxy{ - Type: 
model.Sidecar, + Type: model.SidecarProxy, ID: "random", IPAddresses: []string{"10.3.3.3", "10.4.4.4", "10.5.5.5", "10.6.6.6"}, DNSDomain: "local", diff --git a/pilot/pkg/model/push_context.go b/pilot/pkg/model/push_context.go index 9ca9caf59ed9..256439cdafbf 100644 --- a/pilot/pkg/model/push_context.go +++ b/pilot/pkg/model/push_context.go @@ -17,14 +17,12 @@ package model import ( "encoding/json" "sort" - "strings" "sync" "time" "github.com/prometheus/client_golang/prometheus" networking "istio.io/api/networking/v1alpha3" - "istio.io/istio/pkg/features/pilot" ) // PushContext tracks the status of a push - metrics and errors. @@ -60,6 +58,9 @@ type PushContext struct { privateDestRuleByHostByNamespace map[string]map[Hostname]*combinedDestinationRule publicDestRuleHosts []Hostname publicDestRuleByHost map[Hostname]*combinedDestinationRule + + // sidecars for each namespace + sidecarsByNamespace map[string][]*SidecarScope ////////// END //////// // The following data is either a global index or used in the inbound path. @@ -280,6 +281,7 @@ func NewPushContext() *PushContext { publicDestRuleHosts: []Hostname{}, privateDestRuleByHostByNamespace: map[string]map[Hostname]*combinedDestinationRule{}, privateDestRuleHostsByNamespace: map[string][]Hostname{}, + sidecarsByNamespace: map[string][]*SidecarScope{}, ServiceByHostname: map[Hostname]*Service{}, ProxyStatus: map[string]map[string]ProxyPushStatus{}, @@ -322,8 +324,37 @@ func (ps *PushContext) UpdateMetrics() { } } +// SetSidecarScope identifies the sidecar scope object associated with this +// proxy and updates the proxy Node. This is a convenience hack so that +// callers can simply call push.Services(node) while the implementation of +// push.Services can return the set of services from the proxyNode's +// sidecar scope or from the push context's set of global services. Similar +// logic applies to push.VirtualServices and push.DestinationRule. 
The +// short cut here is useful only for CDS and parts of RDS generation code. +// +// Listener generation code will still use the SidecarScope object directly +// as it needs the set of services for each listener port. +func (ps *PushContext) SetSidecarScope(proxy *Proxy) { + instances, err := ps.Env.GetProxyServiceInstances(proxy) + if err != nil { + log.Errorf("failed to get service proxy service instances: %v", err) + // TODO: fallback to node metadata labels + return + } + + proxy.SidecarScope = ps.getSidecarScope(proxy, instances) +} + // Services returns the list of services that are visible to a Proxy in a given config namespace func (ps *PushContext) Services(proxy *Proxy) []*Service { + // If proxy has a sidecar scope that is user supplied, then get the services from the sidecar scope + // sidecarScope.config is nil if there is no sidecar scope for the namespace + // TODO: This is a temporary gate until the sidecar implementation is stable. Once its stable, remove the + // config != nil check + if proxy != nil && proxy.SidecarScope != nil && proxy.SidecarScope.Config != nil && proxy.Type == SidecarProxy { + return proxy.SidecarScope.Services() + } + out := []*Service{} // First add private services @@ -341,43 +372,9 @@ func (ps *PushContext) Services(proxy *Proxy) []*Service { return out } -// UpdateNodeIsolation will update per-node data holding visible services and configs for the node. -// It is called: -// - on connect -// - on config change events (full push) -// - TODO: on-demand events from Envoy -func (ps *PushContext) UpdateNodeIsolation(proxy *Proxy) { - // For now Router (Gateway) is not using the isolation - the Gateway already has explicit - // bindings. - if pilot.NetworkScopes != "" && proxy.Type == Sidecar { - // Add global namespaces. This may be loaded from mesh config ( after the API is stable and - // reviewed ), or from an env variable. 
- adminNs := strings.Split(pilot.NetworkScopes, ",") - globalDeps := map[string]bool{} - for _, ns := range adminNs { - globalDeps[ns] = true - } - - proxy.mutex.RLock() - defer proxy.mutex.RUnlock() - res := []*Service{} - for _, s := range ps.publicServices { - serviceNamespace := s.Attributes.Namespace - if serviceNamespace == "" { - res = append(res, s) - } else if globalDeps[serviceNamespace] || serviceNamespace == proxy.ConfigNamespace { - res = append(res, s) - } - } - res = append(res, ps.privateServicesByNamespace[proxy.ConfigNamespace]...) - proxy.serviceDependencies = res - - // TODO: read Gateways,NetworkScopes/etc to populate additional entries - } -} - // VirtualServices lists all virtual services bound to the specified gateways -// This replaces store.VirtualServices +// This replaces store.VirtualServices. Used only by the gateways +// Sidecars use the egressListener.VirtualServices(). func (ps *PushContext) VirtualServices(proxy *Proxy, gateways map[string]bool) []Config { configs := make([]Config, 0) out := make([]Config, 0) @@ -419,8 +416,82 @@ func (ps *PushContext) VirtualServices(proxy *Proxy, gateways map[string]bool) [ return out } +// getSidecarScope returns a SidecarScope object associated with the +// proxy. The SidecarScope object is a semi-processed view of the service +// registry, and config state associated with the sidecar crd. The scope contains +// a set of inbound and outbound listeners, services/configs per listener, +// etc. The sidecar scopes are precomputed in the initSidecarContext +// function based on the Sidecar API objects in each namespace. If there is +// no sidecar api object, a default sidecarscope is assigned to the +// namespace which enables connectivity to all services in the mesh. 
+ // + // Callers can check if the sidecarScope is from a user-generated object or not + // by checking the sidecarScope.Config field, which contains the user-provided config +func (ps *PushContext) getSidecarScope(proxy *Proxy, proxyInstances []*ServiceInstance) *SidecarScope { + + var workloadLabels LabelsCollection + for _, w := range proxyInstances { + workloadLabels = append(workloadLabels, w.Labels) + } + + // Find the most specific matching sidecar config from the proxy's + // config namespace. If none found, construct a sidecarConfig on the fly + // that allows the sidecar to talk to any namespace (the default + // behavior in the absence of sidecars). + if sidecars, ok := ps.sidecarsByNamespace[proxy.ConfigNamespace]; ok { + // TODO: logic to merge multiple sidecar resources + // Currently we assume that there will be only one sidecar config for a namespace. + var defaultSidecar *SidecarScope + for _, wrapper := range sidecars { + if wrapper.Config != nil { + sidecar := wrapper.Config.Spec.(*networking.Sidecar) + // if there is no workload selector, the config applies to all workloads + // if there is a workload selector, check for matching workload labels + if sidecar.GetWorkloadSelector() != nil { + workloadSelector := Labels(sidecar.GetWorkloadSelector().GetLabels()) + if !workloadLabels.IsSupersetOf(workloadSelector) { + continue + } + return wrapper + } + defaultSidecar = wrapper + continue + } + // Not sure when this can happen (Config == nil?) + if defaultSidecar != nil { + return defaultSidecar // still return the valid one + } + return wrapper + } + if defaultSidecar != nil { + return defaultSidecar // still return the valid one + } + } + + return DefaultSidecarScopeForNamespace(ps, proxy.ConfigNamespace) +} + +// GetAllSidecarScopes returns a map of namespace to the set of SidecarScope +// objects associated with the namespace. This will be used by the CDS code to +// precompute CDS output for each sidecar scope.
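Reviewer note: the workloadSelector matching in `getSidecarScope` above can be sketched in isolation — this is a simplified stand-in for the real `Labels`/`LabelsCollection.IsSupersetOf` helpers, not a copy of them:

```go
package main

import "fmt"

// Labels mirrors, in simplified form, how getSidecarScope matches a Sidecar's
// workloadSelector against the labels of the proxy's workload instances.
type Labels map[string]string

// IsSupersetOf reports whether the workload carries every label the selector demands.
func (l Labels) IsSupersetOf(selector Labels) bool {
	for k, v := range selector {
		if l[k] != v {
			return false
		}
	}
	return true
}

func main() {
	workload := Labels{"app": "reviews", "version": "v2"}
	// Selector matches: the workload has app=reviews, so this Sidecar applies.
	fmt.Println(workload.IsSupersetOf(Labels{"app": "reviews"}))
	// No match: the loop above would `continue` to the next candidate,
	// eventually falling back to the default scope.
	fmt.Println(workload.IsSupersetOf(Labels{"app": "ratings"}))
}
```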
Since we have a default sidecarscope + // for namespaces that don't explicitly have one, we are guaranteed to + // have the CDS output cached for every namespace/sidecar scope combo. +func (ps *PushContext) GetAllSidecarScopes() map[string][]*SidecarScope { + return ps.sidecarsByNamespace +} + // DestinationRule returns a destination rule for a service name in a given domain. func (ps *PushContext) DestinationRule(proxy *Proxy, hostname Hostname) *Config { + // If proxy has a sidecar scope that is user supplied, then get the destination rules from the sidecar scope + // sidecarScope.config is nil if there is no sidecar scope for the namespace + // TODO: This is a temporary gate until the sidecar implementation is stable. Once it's stable, remove the + // config != nil check + if proxy != nil && proxy.SidecarScope != nil && proxy.SidecarScope.Config != nil && proxy.Type == SidecarProxy { + // If there is a sidecar scope for this proxy, return the destination rule + // from the sidecar scope. + return proxy.SidecarScope.DestinationRule(hostname) + } + if proxy == nil { for ns, privateDestHosts := range ps.privateDestRuleHostsByNamespace { if host, ok := MostSpecificHostMatch(hostname, privateDestHosts); ok { @@ -452,6 +523,8 @@ func (ps *PushContext) SubsetToLabels(subsetName string, hostname Hostname) Labe return nil } + // TODO: This code is incorrect as a proxy with sidecarScope could have a different + // destination rule than the default one. EDS should be computed per sidecar scope config := ps.DestinationRule(nil, hostname) if config == nil { return nil @@ -496,6 +569,11 @@ func (ps *PushContext) InitContext(env *Environment) error { return err } + + // Must be initialized last + if err = ps.InitSidecarScopes(env); err != nil { + return err + } + + // TODO: everything else that is used in config generation - the generation // should not have any deps on config store.
ps.initDone = true @@ -540,6 +618,10 @@ func (ps *PushContext) initVirtualServices(env *Environment) error { return err } + // TODO(rshriram): parse each virtual service and maintain a map of the + // virtualservice name, the list of registry hosts in the VS and non + // registry DNS names in the VS. This should cut down processing in + // the RDS code. See separateVSHostsAndServices in route/route.go sortConfigByCreationTime(vservices) // convert all shortnames in virtual services into FQDNs @@ -613,6 +695,47 @@ func (ps *PushContext) initVirtualServices(env *Environment) error { return nil } +// InitSidecarScopes synthesizes Sidecar CRDs into objects called +// SidecarScope. The SidecarScope object is a semi-processed view of the +// service registry, and config state associated with the sidecar CRD. The +// scope contains a set of inbound and outbound listeners, services/configs +// per listener, etc. The sidecar scopes are precomputed based on the +// Sidecar API objects in each namespace. If there is no sidecar api object +// for a namespace, a default sidecarscope is assigned to the namespace +// which enables connectivity to all services in the mesh. +// +// When proxies connect to Pilot, we identify the sidecar scope associated +// with the proxy and derive listeners/routes/clusters based on the sidecar +// scope. 
+func (ps *PushContext) InitSidecarScopes(env *Environment) error { + sidecarConfigs, err := env.List(Sidecar.Type, NamespaceAll) + if err != nil { + return err + } + + sortConfigByCreationTime(sidecarConfigs) + + ps.sidecarsByNamespace = make(map[string][]*SidecarScope) + for _, sidecarConfig := range sidecarConfigs { + // TODO: add entries with workloadSelectors first before adding namespace-wide entries + sidecarConfig := sidecarConfig + ps.sidecarsByNamespace[sidecarConfig.Namespace] = append(ps.sidecarsByNamespace[sidecarConfig.Namespace], + ConvertToSidecarScope(ps, &sidecarConfig)) + } + + // prebuild default sidecar scopes for other namespaces that don't have a sidecar CRD object. + // Workloads in these namespaces can reach any service in the mesh - the default Istio behavior. + // The DefaultSidecarScopeForNamespace function represents this behavior. + for _, s := range ps.ServiceByHostname { + ns := s.Attributes.Namespace + if len(ps.sidecarsByNamespace[ns]) == 0 { + ps.sidecarsByNamespace[ns] = []*SidecarScope{DefaultSidecarScopeForNamespace(ps, ns)} + } + } + + return nil +} + // Split out of DestinationRule expensive conversions - once per push. func (ps *PushContext) initDestinationRules(env *Environment) error { configs, err := env.List(DestinationRule.Type, NamespaceAll) diff --git a/pilot/pkg/model/service.go b/pilot/pkg/model/service.go index e2f6c42271fa..e4494f509238 100644 --- a/pilot/pkg/model/service.go +++ b/pilot/pkg/model/service.go @@ -35,7 +35,6 @@ import ( authn "istio.io/api/authentication/v1alpha1" networking "istio.io/api/networking/v1alpha3" - "istio.io/istio/pkg/features/pilot" ) // Hostname describes a (possibly wildcarded) hostname @@ -110,6 +109,10 @@ const ( // IstioDefaultConfigNamespace constant for default namespace IstioDefaultConfigNamespace = "default" + + // AZLabel indicates the region/zone of an instance. It is used if the native + // registry doesn't provide one.
+ AZLabel = "istio-az" ) // Port represents a network port where a service is listening for @@ -370,7 +373,7 @@ func (si *ServiceInstance) GetLocality() string { if si.Endpoint.Locality != "" { return si.Endpoint.Locality } - return si.Labels[pilot.AZLabel] + return si.Labels[AZLabel] } // IstioEndpoint has the information about a single address+port for a specific diff --git a/pilot/pkg/model/sidecar.go b/pilot/pkg/model/sidecar.go new file mode 100644 index 000000000000..9e004c4bec16 --- /dev/null +++ b/pilot/pkg/model/sidecar.go @@ -0,0 +1,389 @@ +// Copyright 2019 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package model + +import ( + "strings" + + xdsapi "github.com/envoyproxy/go-control-plane/envoy/api/v2" + + networking "istio.io/api/networking/v1alpha3" +) + +const ( + wildcardNamespace = "*" + wildcardService = Hostname("*") +) + +// SidecarScope is a wrapper over the Sidecar resource with some +// preprocessed data to determine the list of services, virtualServices, +// and destinationRules that are accessible to a given +// sidecar. Precomputing the list of services, virtual services, dest rules +// for a sidecar improves performance as we no longer need to compute this +// list for every sidecar. We simply have to match a sidecar to a +// SidecarScope. Note that this is not the same as public/private scoped +// services. 
The list of services seen by every sidecar scope (namespace +// wide or per workload) depends on the imports, the listeners, and other +// settings. +// +// Every proxy workload of SidecarProxy type will always map to a +// SidecarScope object. If the proxy's namespace does not have a user +// specified Sidecar CRD, we will construct one that has a catch all egress +// listener that imports every public service/virtualService in the mesh. +type SidecarScope struct { + // The crd itself. Can be nil if we are constructing the default + // sidecar scope + Config *Config + + // Set of egress listeners, and their associated services. A sidecar + // scope should have either ingress/egress listeners or both. For + // every proxy workload that maps to a sidecar API object (or the + // default object), we will go through every egress listener in the + // object and process the Envoy listener or RDS based on the imported + // services/virtual services in that listener. + EgressListeners []*IstioEgressListenerWrapper + + // HasCustomIngressListeners is a convenience variable that if set to + // true indicates that the config object has one or more listeners. + // If set to false, networking code should derive the inbound + // listeners from the proxy service instances + HasCustomIngressListeners bool + + // Union of services imported across all egress listeners for use by CDS code. + // Right now, we include all the ports in these services. + // TODO: Trim the ports in the services to only those referred to by the + // egress listeners. + services []*Service + + // Destination rules imported across all egress listeners. This + // contains the computed set based on public/private destination rules + // as well as the inherited ones, in addition to the wildcard matches + // such as *.com applying to foo.bar.com. Each hostname in this map + // corresponds to a service in the services array above. 
When computing + // CDS, we simply have to find the matching service and return the + // destination rule. + destinationRules map[Hostname]*Config + + // XDSOutboundClusters is the CDS output for sidecars that map to this + // sidecarScope object. Contains the outbound clusters only + XDSOutboundClusters []*xdsapi.Cluster +} + +// IstioEgressListenerWrapper is a wrapper for +// networking.IstioEgressListener object. The wrapper provides performance +// optimizations as it allows us to precompute and store the list of +// services/virtualServices that apply to this listener. +type IstioEgressListenerWrapper struct { + // The actual IstioEgressListener api object from the Config. It can be + // nil if this is for the default sidecar scope. + IstioListener *networking.IstioEgressListener + + // A preprocessed form of networking.IstioEgressListener.hosts field. + // The hosts field has entries of the form namespace/dnsName. For + // example ns1/*, */*, */foo.tcp.com, etc. This map preprocesses all + // these string fields into a map of namespace and services. + listenerHosts map[string]Hostname + + // List of services imported by this egress listener extracted from the + // listenerHosts above. This will be used by LDS and RDS code when + // building the set of virtual hosts or the tcp filterchain matches for + // a given listener port. Two listeners, on user specified ports or + // unix domain sockets could have completely different sets of + // services. So a global list of services per sidecar scope will be + // incorrect. Hence the per listener set of services. + services []*Service + + // List of virtual services imported by this egress listener extracted + // from the listenerHosts above. As with per listener services, this + // will be used by RDS code to compute the virtual host configs for + // http listeners, as well as by TCP/TLS filter code to compute the + // service routing configs and the filter chain matches. 
We need a + // virtualService set per listener and not one per sidecarScope because + // each listener imports an independent set of virtual services. + // Listener 1 could import a public virtual service for serviceA from + // namespace A that has some path rewrite, while listener2 could import + // a private virtual service for serviceA from the local namespace, + // with a different path rewrite or no path rewrites. + virtualServices []Config +} + +// DefaultSidecarScopeForNamespace returns a sidecar scope object with a default catch-all egress listener +// that matches the default Istio behavior: a sidecar has listeners for all services in the mesh +// We use this scope when the user has not set any sidecar Config for a given config namespace. +func DefaultSidecarScopeForNamespace(ps *PushContext, configNamespace string) *SidecarScope { + dummyNode := Proxy{ + ConfigNamespace: configNamespace, + } + + defaultEgressListener := &IstioEgressListenerWrapper{ + listenerHosts: map[string]Hostname{wildcardNamespace: wildcardService}, + } + defaultEgressListener.services = ps.Services(&dummyNode) + + meshGateway := map[string]bool{IstioMeshGateway: true} + defaultEgressListener.virtualServices = ps.VirtualServices(&dummyNode, meshGateway) + + out := &SidecarScope{ + EgressListeners: []*IstioEgressListenerWrapper{defaultEgressListener}, + services: defaultEgressListener.services, + destinationRules: make(map[Hostname]*Config), + } + + // Now that we have all the services that sidecars using this scope (in + // this config namespace) will see, identify all the destinationRules + // that these services need + for _, s := range out.services { + out.destinationRules[s.Hostname] = ps.DestinationRule(&dummyNode, s.Hostname) + } + + return out +} + +// ConvertToSidecarScope converts from Sidecar config to SidecarScope object +func ConvertToSidecarScope(ps *PushContext, sidecarConfig *Config) *SidecarScope { + r := sidecarConfig.Spec.(*networking.Sidecar) + + var out *SidecarScope + + // If
there are no egress listeners but only ingress listeners, then infer from + // environment. This is same as the default egress listener setup above + if r.Egress == nil || len(r.Egress) == 0 { + out = DefaultSidecarScopeForNamespace(ps, sidecarConfig.Namespace) + } else { + out = &SidecarScope{} + + out.EgressListeners = make([]*IstioEgressListenerWrapper, 0) + for _, e := range r.Egress { + out.EgressListeners = append(out.EgressListeners, convertIstioListenerToWrapper(ps, sidecarConfig, e)) + } + + // Now collect all the imported services across all egress listeners in + // this sidecar crd. This is needed to generate CDS output + out.services = make([]*Service, 0) + servicesAdded := make(map[string]struct{}) + dummyNode := Proxy{ + ConfigNamespace: sidecarConfig.Namespace, + } + + for _, listener := range out.EgressListeners { + for _, s := range listener.services { + // TODO: port merging when each listener generates a partial service + if _, found := servicesAdded[string(s.Hostname)]; !found { + servicesAdded[string(s.Hostname)] = struct{}{} + out.services = append(out.services, s) + } + } + } + + // Now that we have all the services that sidecars using this scope (in + // this config namespace) will see, identify all the destinationRules + // that these services need + out.destinationRules = make(map[Hostname]*Config) + for _, s := range out.services { + out.destinationRules[s.Hostname] = ps.DestinationRule(&dummyNode, s.Hostname) + } + } + + out.Config = sidecarConfig + if len(r.Ingress) > 0 { + out.HasCustomIngressListeners = true + } + + return out +} + +func convertIstioListenerToWrapper(ps *PushContext, sidecarConfig *Config, + istioListener *networking.IstioEgressListener) *IstioEgressListenerWrapper { + + out := &IstioEgressListenerWrapper{ + IstioListener: istioListener, + listenerHosts: make(map[string]Hostname), + } + + if istioListener.Hosts != nil { + for _, h := range istioListener.Hosts { + parts := strings.SplitN(h, "/", 2) + 
out.listenerHosts[parts[0]] = Hostname(parts[1]) + } + } + + dummyNode := Proxy{ + ConfigNamespace: sidecarConfig.Namespace, + } + + out.services = out.selectServices(ps.Services(&dummyNode)) + meshGateway := map[string]bool{IstioMeshGateway: true} + out.virtualServices = out.selectVirtualServices(ps.VirtualServices(&dummyNode, meshGateway)) + + return out +} + +// Services returns the list of services imported across all egress listeners by this +// Sidecar config +func (sc *SidecarScope) Services() []*Service { + if sc == nil { + return nil + } + + return sc.services +} + +// DestinationRule returns the destination rule applicable for a given hostname +// used by CDS code +func (sc *SidecarScope) DestinationRule(hostname Hostname) *Config { + if sc == nil { + return nil + } + + return sc.destinationRules[hostname] +} + +// GetEgressListenerForRDS returns the egress listener corresponding to +// the listener port or the bind address or the catch all listener +func (sc *SidecarScope) GetEgressListenerForRDS(port int, bind string) *IstioEgressListenerWrapper { + if sc == nil { + return nil + } + + for _, e := range sc.EgressListeners { + // We hit a catchall listener. This is the last listener in the list of listeners + // return as is + if e.IstioListener == nil || e.IstioListener.Port == nil { + return e + } + + // Check if the ports match + // for unix domain sockets (i.e. port == 0), check if the bind is equal to the routeName + if int(e.IstioListener.Port.Number) == port { + if port == 0 { // unix domain socket + if e.IstioListener.Bind == bind { + return e + } + // no match.. 
continue searching + continue + } + // this is a non-zero port match + return e + } + } + + // This should never be reached unless user explicitly set an empty array for egress + // listeners which we actually forbid + return nil +} + +// Services returns the list of services imported by this egress listener +func (ilw *IstioEgressListenerWrapper) Services() []*Service { + if ilw == nil { + return nil + } + + return ilw.services +} + +// VirtualServices returns the list of virtual services imported by this +// egress listener +func (ilw *IstioEgressListenerWrapper) VirtualServices() []Config { + if ilw == nil { + return nil + } + + return ilw.virtualServices +} + +// Given a list of virtual services visible to this namespace, +// selectVirtualServices returns the list of virtual services that are +// applicable to this egress listener, based on the hosts field specified +// in the API. This code is called only once during the construction of the +// listener wrapper. The parent object (sidecarScope) and its listeners are +// constructed only once and reused for every sidecar that selects this +// sidecarScope object. Selection is based on labels at the moment. +func (ilw *IstioEgressListenerWrapper) selectVirtualServices(virtualServices []Config) []Config { + importedVirtualServices := make([]Config, 0) + for _, c := range virtualServices { + configNamespace := c.Namespace + rule := c.Spec.(*networking.VirtualService) + + // Check if there is an explicit import of form ns/* or ns/host + if hostMatch, nsFound := ilw.listenerHosts[configNamespace]; nsFound { + // Check if the hostnames match per usual hostname matching rules + hostFound := false + for _, h := range rule.Hosts { + // TODO: This is a bug. 
VirtualServices can have many hosts + // while the user might be importing only a single host + // We need to generate a new VirtualService with just the matched host + if hostMatch.Matches(Hostname(h)) { + importedVirtualServices = append(importedVirtualServices, c) + hostFound = true + break + } + } + if hostFound { + // already imported via the explicit namespace match; move on to the next virtual service + continue + } + } + + // Check if there is an import of form */host or */* + if hostMatch, wnsFound := ilw.listenerHosts[wildcardNamespace]; wnsFound { + // Check if the hostnames match per usual hostname matching rules + for _, h := range rule.Hosts { + // TODO: This is a bug. VirtualServices can have many hosts + // while the user might be importing only a single host + // We need to generate a new VirtualService with just the matched host + if hostMatch.Matches(Hostname(h)) { + importedVirtualServices = append(importedVirtualServices, c) + break + } + } + } + } + + return importedVirtualServices +} + +// selectServices returns the list of services selected through the hosts field +// in the egress portion of the Sidecar config +func (ilw *IstioEgressListenerWrapper) selectServices(services []*Service) []*Service { + + importedServices := make([]*Service, 0) + for _, s := range services { + configNamespace := s.Attributes.Namespace + // Check if there is an explicit import of form ns/* or ns/host + if hostMatch, nsFound := ilw.listenerHosts[configNamespace]; nsFound { + // Check if the hostnames match per usual hostname matching rules + if hostMatch.Matches(s.Hostname) { + // TODO: See if the service's ports match. + // If there is a listener port for this Listener, then + // check if the service has a port of same value. + // If not, check if the service has a single port - and choose that port + // if service has multiple ports none of which match the listener port, check if there is + // a virtualService with match Port + importedServices = append(importedServices, s) + continue + } + // hostname didn't match.
Check if its imported as */host + } + + // Check if there is an import of form */host or */* + if hostMatch, wnsFound := ilw.listenerHosts[wildcardNamespace]; wnsFound { + // Check if the hostnames match per usual hostname matching rules + if hostMatch.Matches(s.Hostname) { + importedServices = append(importedServices, s) + } + } + } + + return importedServices +} diff --git a/pilot/pkg/model/validation.go b/pilot/pkg/model/validation.go index 1ec10b1fe83d..eb8968ebb077 100644 --- a/pilot/pkg/model/validation.go +++ b/pilot/pkg/model/validation.go @@ -396,10 +396,16 @@ func ValidateUnixAddress(addr string) error { if len(addr) == 0 { return errors.New("unix address must not be empty") } + + // Allow unix abstract domain sockets whose names start with @ + if strings.HasPrefix(addr, "@") { + return nil + } + // Note that we use path, not path/filepath even though a domain socket path is a file path. We don't want the // Pilot output to depend on which OS Pilot is run on, so we always use Unix-style forward slashes. - if !path.IsAbs(addr) { - return fmt.Errorf("%s is not an absolute path", addr) + if !path.IsAbs(addr) || strings.HasSuffix(addr, "/") { + return fmt.Errorf("%s is not an absolute path to a file", addr) } return nil } @@ -564,6 +570,172 @@ func ValidateEnvoyFilter(name, namespace string, msg proto.Message) (errs error) return } +// ValidateSidecar checks sidecar config supplied by user +func ValidateSidecar(name, namespace string, msg proto.Message) (errs error) { + rule, ok := msg.(*networking.Sidecar) + if !ok { + return fmt.Errorf("cannot cast to Sidecar") + } + + if rule.WorkloadSelector != nil { + if rule.WorkloadSelector.GetLabels() == nil { + errs = appendErrors(errs, fmt.Errorf("sidecar: workloadSelector cannot have empty labels")) + } + } + + // TODO: pending discussion on API default behavior. 
+ if len(rule.Ingress) == 0 && len(rule.Egress) == 0 { + return fmt.Errorf("sidecar: missing ingress/egress") + } + + portMap := make(map[uint32]struct{}) + udsMap := make(map[string]struct{}) + for _, i := range rule.Ingress { + if i.Port == nil { + errs = appendErrors(errs, fmt.Errorf("sidecar: port is required for ingress listeners")) + continue + } + + bind := i.GetBind() + captureMode := i.GetCaptureMode() + errs = appendErrors(errs, validateSidecarPortBindAndCaptureMode(i.Port, bind, captureMode)) + + if i.Port.Number == 0 { + if _, found := udsMap[bind]; found { + errs = appendErrors(errs, fmt.Errorf("sidecar: unix domain socket values for listeners must be unique")) + } + udsMap[bind] = struct{}{} + } else { + if _, found := portMap[i.Port.Number]; found { + errs = appendErrors(errs, fmt.Errorf("sidecar: ports on IP bound listeners must be unique")) + } + portMap[i.Port.Number] = struct{}{} + } + + if len(i.DefaultEndpoint) == 0 { + errs = appendErrors(errs, fmt.Errorf("sidecar: default endpoint must be set for all ingress listeners")) + } else { + if strings.HasPrefix(i.DefaultEndpoint, UnixAddressPrefix) { + errs = appendErrors(errs, ValidateUnixAddress(strings.TrimPrefix(i.DefaultEndpoint, UnixAddressPrefix))) + } else { + // format should be 127.0.0.1:port or :port + parts := strings.Split(i.DefaultEndpoint, ":") + if len(parts) < 2 { + errs = appendErrors(errs, fmt.Errorf("sidecar: defaultEndpoint must be of form 127.0.0.1:<port>")) + } else { + if len(parts[0]) > 0 && parts[0] != "127.0.0.1" { + errs = appendErrors(errs, fmt.Errorf("sidecar: defaultEndpoint must be of form 127.0.0.1:<port>")) + } + + port, err := strconv.Atoi(parts[1]) + if err != nil { + errs = appendErrors(errs, fmt.Errorf("sidecar: defaultEndpoint port (%s) is not a number: %v", parts[1], err)) + } else { + errs = appendErrors(errs, ValidatePort(port)) + } + } + } + } + } + + // TODO: complete bind address+port or UDS uniqueness across ingress and egress + // after the whole listener
implementation is complete + portMap = make(map[uint32]struct{}) + udsMap = make(map[string]struct{}) + catchAllEgressListenerFound := false + for index, i := range rule.Egress { + // there can be only one catch all egress listener with empty port, and it should be the last listener. + if i.Port == nil { + if !catchAllEgressListenerFound { + if index == len(rule.Egress)-1 { + catchAllEgressListenerFound = true + } else { + errs = appendErrors(errs, fmt.Errorf("sidecar: the egress listener with empty port should be the last listener in the list")) + } + } else { + errs = appendErrors(errs, fmt.Errorf("sidecar: egress can have only one listener with empty port")) + continue + } + } else { + bind := i.GetBind() + captureMode := i.GetCaptureMode() + errs = appendErrors(errs, validateSidecarPortBindAndCaptureMode(i.Port, bind, captureMode)) + + if i.Port.Number == 0 { + if _, found := udsMap[bind]; found { + errs = appendErrors(errs, fmt.Errorf("sidecar: unix domain socket values for listeners must be unique")) + } + udsMap[bind] = struct{}{} + } else { + if _, found := portMap[i.Port.Number]; found { + errs = appendErrors(errs, fmt.Errorf("sidecar: ports on IP bound listeners must be unique")) + } + portMap[i.Port.Number] = struct{}{} + } + } + + // validate that the hosts field is a slash separated value + // of form ns1/host, or */host, or */*, or ns1/*, or ns1/*.example.com + if len(i.Hosts) == 0 { + errs = appendErrors(errs, fmt.Errorf("sidecar: egress listener must contain at least one host")) + } else { + for _, host := range i.Hosts { + parts := strings.SplitN(host, "/", 2) + if len(parts) != 2 { + errs = appendErrors(errs, fmt.Errorf("sidecar: host must be of form namespace/dnsName")) + continue + } + + if len(parts[0]) == 0 || len(parts[1]) == 0 { + errs = appendErrors(errs, fmt.Errorf("sidecar: config namespace and dnsName in host entry cannot be empty")) + } + + // short name hosts are not allowed + if parts[1] != "*" && !strings.Contains(parts[1], ".") { + 
errs = appendErrors(errs, fmt.Errorf("sidecar: short names (non FQDN) are not allowed")) + } + + errs = appendErrors(errs, ValidateWildcardDomain(parts[1])) + } + } + } + + return +} + +func validateSidecarPortBindAndCaptureMode(port *networking.Port, bind string, + captureMode networking.CaptureMode) (errs error) { + + // Handle Unix domain sockets + if port.Number == 0 { + // require bind to be a unix domain socket + errs = appendErrors(errs, + validatePortName(port.Name), + validateProtocol(port.Protocol)) + + if !strings.HasPrefix(bind, UnixAddressPrefix) { + errs = appendErrors(errs, fmt.Errorf("sidecar: ports with 0 value must have a unix domain socket bind address")) + } else { + errs = appendErrors(errs, ValidateUnixAddress(strings.TrimPrefix(bind, UnixAddressPrefix))) + } + + if captureMode != networking.CaptureMode_DEFAULT && captureMode != networking.CaptureMode_NONE { + errs = appendErrors(errs, fmt.Errorf("sidecar: captureMode must be DEFAULT/NONE for unix domain socket listeners")) + } + } else { + errs = appendErrors(errs, + validatePortName(port.Name), + validateProtocol(port.Protocol), + ValidatePort(int(port.Number))) + + if len(bind) != 0 { + errs = appendErrors(errs, ValidateIPv4Address(bind)) + } + } + + return +} + func validateTrafficPolicy(policy *networking.TrafficPolicy) error { if policy == nil { return nil @@ -1821,7 +1993,10 @@ func validateHTTPRetry(retries *networking.HTTPRetry) (errs error) { if retries.RetryOn != "" { retryOnPolicies := strings.Split(retries.RetryOn, ",") for _, policy := range retryOnPolicies { - if !supportedRetryOnPolicies[policy] { + // Try converting it to an integer to see if it's a valid HTTP status code. 
+ i, _ := strconv.Atoi(policy) + + if http.StatusText(i) == "" && !supportedRetryOnPolicies[policy] { errs = appendErrors(errs, fmt.Errorf("%q is not a valid retryOn policy", policy)) } } diff --git a/pilot/pkg/model/validation_test.go b/pilot/pkg/model/validation_test.go index 27740bca6f20..798e3507bf11 100644 --- a/pilot/pkg/model/validation_test.go +++ b/pilot/pkg/model/validation_test.go @@ -1735,7 +1735,12 @@ func TestValidateHTTPRetry(t *testing.T) { {name: "valid default", in: &networking.HTTPRetry{ Attempts: 10, }, valid: true}, - {name: "bad attempts", in: &networking.HTTPRetry{ + {name: "valid http status retryOn", in: &networking.HTTPRetry{ + Attempts: 10, + PerTryTimeout: &types.Duration{Seconds: 2}, + RetryOn: "503,connect-failure", + }, valid: true}, + {name: "invalid attempts", in: &networking.HTTPRetry{ Attempts: -1, PerTryTimeout: &types.Duration{Seconds: 2}, }, valid: false}, @@ -1747,11 +1752,16 @@ func TestValidateHTTPRetry(t *testing.T) { Attempts: 10, PerTryTimeout: &types.Duration{Nanos: 999}, }, valid: false}, - {name: "non supported retryOn", in: &networking.HTTPRetry{ + {name: "invalid policy retryOn", in: &networking.HTTPRetry{ Attempts: 10, PerTryTimeout: &types.Duration{Seconds: 2}, RetryOn: "5xx,invalid policy", }, valid: false}, + {name: "invalid http status retryOn", in: &networking.HTTPRetry{ + Attempts: 10, + PerTryTimeout: &types.Duration{Seconds: 2}, + RetryOn: "600,connect-failure", + }, valid: false}, } for _, tc := range testCases { @@ -3705,3 +3715,284 @@ func TestValidateMixerService(t *testing.T) { }) } } + +func TestValidateSidecar(t *testing.T) { + tests := []struct { + name string + in *networking.Sidecar + valid bool + }{ + {"empty ingress and egress", &networking.Sidecar{}, false}, + {"default", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Hosts: []string{"*/*"}, + }, + }, + }, true}, + {"bad egress host 1", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Hosts: 
[]string{"*"}, + }, + }, + }, false}, + {"bad egress host 2", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Hosts: []string{"/"}, + }, + }, + }, false}, + {"empty egress host", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Hosts: []string{}, + }, + }, + }, false}, + {"multiple wildcard egress", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Hosts: []string{ + "*/foo.com", + }, + }, + { + Hosts: []string{ + "ns1/bar.com", + }, + }, + }, + }, false}, + {"wildcard egress not in end", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Hosts: []string{ + "*/foo.com", + }, + }, + { + Port: &networking.Port{ + Protocol: "http", + Number: 8080, + Name: "h8080", + }, + Hosts: []string{ + "ns1/bar.com", + }, + }, + }, + }, false}, + {"invalid Port", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Port: &networking.Port{ + Protocol: "http1", + Number: 1000000, + Name: "", + }, + Hosts: []string{ + "ns1/bar.com", + }, + }, + }, + }, false}, + {"UDS bind", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Port: &networking.Port{ + Protocol: "http", + Number: 0, + Name: "uds", + }, + Hosts: []string{ + "ns1/bar.com", + }, + Bind: "unix:///@foo/bar/com", + }, + }, + }, true}, + {"UDS bind 2", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Port: &networking.Port{ + Protocol: "http", + Number: 0, + Name: "uds", + }, + Hosts: []string{ + "ns1/bar.com", + }, + Bind: "unix:///foo/bar/com", + }, + }, + }, true}, + {"invalid bind", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Port: &networking.Port{ + Protocol: "http", + Number: 0, + Name: "uds", + }, + Hosts: []string{ + "ns1/bar.com", + }, + Bind: "foobar:///@foo/bar/com", + }, + }, + }, false}, + {"invalid capture mode with uds bind", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Port: &networking.Port{ + Protocol: 
"http", + Number: 0, + Name: "uds", + }, + Hosts: []string{ + "ns1/bar.com", + }, + Bind: "unix:///@foo/bar/com", + CaptureMode: networking.CaptureMode_IPTABLES, + }, + }, + }, false}, + {"duplicate UDS bind", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Port: &networking.Port{ + Protocol: "http", + Number: 0, + Name: "uds", + }, + Hosts: []string{ + "ns1/bar.com", + }, + Bind: "unix:///@foo/bar/com", + }, + { + Port: &networking.Port{ + Protocol: "http", + Number: 0, + Name: "uds", + }, + Hosts: []string{ + "ns1/bar.com", + }, + Bind: "unix:///@foo/bar/com", + }, + }, + }, false}, + {"duplicate ports", &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Port: &networking.Port{ + Protocol: "http", + Number: 90, + Name: "foo", + }, + Hosts: []string{ + "ns1/bar.com", + }, + }, + { + Port: &networking.Port{ + Protocol: "tcp", + Number: 90, + Name: "tcp", + }, + Hosts: []string{ + "ns2/bar.com", + }, + }, + }, + }, false}, + {"ingress without port", &networking.Sidecar{ + Ingress: []*networking.IstioIngressListener{ + { + DefaultEndpoint: "127.0.0.1:110", + }, + }, + }, false}, + {"ingress with duplicate ports", &networking.Sidecar{ + Ingress: []*networking.IstioIngressListener{ + { + Port: &networking.Port{ + Protocol: "http", + Number: 90, + Name: "foo", + }, + DefaultEndpoint: "127.0.0.1:110", + }, + { + Port: &networking.Port{ + Protocol: "tcp", + Number: 90, + Name: "bar", + }, + DefaultEndpoint: "127.0.0.1:110", + }, + }, + }, false}, + {"ingress without default endpoint", &networking.Sidecar{ + Ingress: []*networking.IstioIngressListener{ + { + Port: &networking.Port{ + Protocol: "http", + Number: 90, + Name: "foo", + }, + }, + }, + }, false}, + {"ingress with invalid default endpoint IP", &networking.Sidecar{ + Ingress: []*networking.IstioIngressListener{ + { + Port: &networking.Port{ + Protocol: "http", + Number: 90, + Name: "foo", + }, + DefaultEndpoint: "1.1.1.1:90", + }, + }, + }, false}, + {"ingress with 
invalid default endpoint uds", &networking.Sidecar{ + Ingress: []*networking.IstioIngressListener{ + { + Port: &networking.Port{ + Protocol: "http", + Number: 90, + Name: "foo", + }, + DefaultEndpoint: "unix:///", + }, + }, + }, false}, + {"ingress with invalid default endpoint port", &networking.Sidecar{ + Ingress: []*networking.IstioIngressListener{ + { + Port: &networking.Port{ + Protocol: "http", + Number: 90, + Name: "foo", + }, + DefaultEndpoint: "127.0.0.1:hi", + }, + }, + }, false}, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + err := ValidateSidecar("foo", "bar", tt.in) + if err == nil && !tt.valid { + t.Fatalf("ValidateSidecar(%v) = true, wanted false", tt.in) + } else if err != nil && tt.valid { + t.Fatalf("ValidateSidecar(%v) = %v, wanted true", tt.in, err) + } + }) + } +} diff --git a/pilot/pkg/networking/core/v1alpha3/cluster.go b/pilot/pkg/networking/core/v1alpha3/cluster.go index 552593e4b088..196130f71222 100644 --- a/pilot/pkg/networking/core/v1alpha3/cluster.go +++ b/pilot/pkg/networking/core/v1alpha3/cluster.go @@ -17,11 +17,13 @@ package v1alpha3 import ( "fmt" "path" + "strconv" + "strings" "time" - v2 "github.com/envoyproxy/go-control-plane/envoy/api/v2" + apiv2 "github.com/envoyproxy/go-control-plane/envoy/api/v2" "github.com/envoyproxy/go-control-plane/envoy/api/v2/auth" - v2_cluster "github.com/envoyproxy/go-control-plane/envoy/api/v2/cluster" + v2Cluster "github.com/envoyproxy/go-control-plane/envoy/api/v2/cluster" "github.com/envoyproxy/go-control-plane/envoy/api/v2/core" "github.com/envoyproxy/go-control-plane/envoy/api/v2/endpoint" envoy_type "github.com/envoyproxy/go-control-plane/envoy/type" @@ -37,38 +39,63 @@ import ( const ( // DefaultLbType set to round robin DefaultLbType = networking.LoadBalancerSettings_ROUND_ROBIN + // ManagementClusterHostname indicates the hostname used for building inbound clusters for management ports ManagementClusterHostname = "mgmtCluster" ) +var ( + 
defaultInboundCircuitBreakerThresholds = v2Cluster.CircuitBreakers_Thresholds{} + defaultOutboundCircuitBreakerThresholds = v2Cluster.CircuitBreakers_Thresholds{ + // DefaultMaxRetries specifies the default for the Envoy circuit breaker parameter max_retries. This + // defines the maximum number of parallel retries a given Envoy will allow to the upstream cluster. Envoy defaults + // this value to 3, however that has shown to be insufficient during periods of pod churn (e.g. rolling updates), + // where multiple endpoints in a cluster are terminated. In these scenarios the circuit breaker can kick + // in before Pilot is able to deliver an updated endpoint list to Envoy, leading to client-facing 503s. + MaxRetries: &types.UInt32Value{Value: 1024}, + } +) + +// GetDefaultCircuitBreakerThresholds returns a copy of the default circuit breaker thresholds for the given traffic direction. +func GetDefaultCircuitBreakerThresholds(direction model.TrafficDirection) *v2Cluster.CircuitBreakers_Thresholds { + if direction == model.TrafficDirectionInbound { + thresholds := defaultInboundCircuitBreakerThresholds + return &thresholds + } + thresholds := defaultOutboundCircuitBreakerThresholds + return &thresholds +} + // TODO: Need to do inheritance of DestRules based on domain suffix match // BuildClusters returns the list of clusters for the given proxy. 
This is the CDS output // For outbound: Cluster for each service/subset hostname or cidr with SNI set to service hostname // Cluster type based on resolution // For inbound (sidecar only): Cluster for each inbound endpoint port and for each service port -func (configgen *ConfigGeneratorImpl) BuildClusters(env *model.Environment, proxy *model.Proxy, push *model.PushContext) ([]*v2.Cluster, error) { - clusters := make([]*v2.Cluster, 0) +func (configgen *ConfigGeneratorImpl) BuildClusters(env *model.Environment, proxy *model.Proxy, push *model.PushContext) ([]*apiv2.Cluster, error) { + clusters := make([]*apiv2.Cluster, 0) - recomputeOutboundClusters := true - if configgen.CanUsePrecomputedCDS(proxy) { - if configgen.PrecomputedOutboundClusters != nil && configgen.PrecomputedOutboundClusters[proxy.ConfigNamespace] != nil { - clusters = append(clusters, configgen.PrecomputedOutboundClusters[proxy.ConfigNamespace]...) - recomputeOutboundClusters = false - } - } - - if recomputeOutboundClusters { - clusters = append(clusters, configgen.buildOutboundClusters(env, proxy, push)...) - } - - if proxy.Type == model.Sidecar { + switch proxy.Type { + case model.SidecarProxy: instances, err := env.GetProxyServiceInstances(proxy) if err != nil { log.Errorf("failed to get service proxy service instances: %v", err) return nil, err } + sidecarScope := proxy.SidecarScope + recomputeOutboundClusters := true + if configgen.CanUsePrecomputedCDS(proxy) { + if sidecarScope != nil && sidecarScope.XDSOutboundClusters != nil { + clusters = append(clusters, sidecarScope.XDSOutboundClusters...) + recomputeOutboundClusters = false + } + } + + if recomputeOutboundClusters { + clusters = append(clusters, configgen.buildOutboundClusters(env, proxy, push)...) 
+ } + // Let ServiceDiscovery decide which IP and Port are used for management if // there are multiple IPs managementPorts := make([]*model.Port, 0) @@ -76,10 +103,23 @@ func (configgen *ConfigGeneratorImpl) BuildClusters(env *model.Environment, prox managementPorts = append(managementPorts, env.ManagementPorts(ip)...) } clusters = append(clusters, configgen.buildInboundClusters(env, proxy, push, instances, managementPorts)...) - } - if proxy.Type == model.Router && proxy.GetRouterMode() == model.SniDnatRouter { - clusters = append(clusters, configgen.buildOutboundSniDnatClusters(env, proxy, push)...) + default: // Gateways + recomputeOutboundClusters := true + if configgen.CanUsePrecomputedCDS(proxy) { + if configgen.PrecomputedOutboundClustersForGateways != nil && + configgen.PrecomputedOutboundClustersForGateways[proxy.ConfigNamespace] != nil { + clusters = append(clusters, configgen.PrecomputedOutboundClustersForGateways[proxy.ConfigNamespace]...) + recomputeOutboundClusters = false + } + } + + if recomputeOutboundClusters { + clusters = append(clusters, configgen.buildOutboundClusters(env, proxy, push)...) + } + if proxy.Type == model.Router && proxy.GetRouterMode() == model.SniDnatRouter { + clusters = append(clusters, configgen.buildOutboundSniDnatClusters(env, proxy, push)...) + } } // Add a blackhole and passthrough cluster for catching traffic to unresolved routes @@ -92,9 +132,9 @@ func (configgen *ConfigGeneratorImpl) BuildClusters(env *model.Environment, prox // resolves cluster name conflicts. there can be duplicate cluster names if there are conflicting service definitions. // for any clusters that share the same name the first cluster is kept and the others are discarded. 
-func normalizeClusters(push *model.PushContext, proxy *model.Proxy, clusters []*v2.Cluster) []*v2.Cluster { +func normalizeClusters(push *model.PushContext, proxy *model.Proxy, clusters []*apiv2.Cluster) []*apiv2.Cluster { have := make(map[string]bool) - out := make([]*v2.Cluster, 0, len(clusters)) + out := make([]*apiv2.Cluster, 0, len(clusters)) for _, cluster := range clusters { if !have[cluster.Name] { out = append(out, cluster) @@ -107,8 +147,8 @@ func normalizeClusters(push *model.PushContext, proxy *model.Proxy, clusters []* return out } -func (configgen *ConfigGeneratorImpl) buildOutboundClusters(env *model.Environment, proxy *model.Proxy, push *model.PushContext) []*v2.Cluster { - clusters := make([]*v2.Cluster, 0) +func (configgen *ConfigGeneratorImpl) buildOutboundClusters(env *model.Environment, proxy *model.Proxy, push *model.PushContext) []*apiv2.Cluster { + clusters := make([]*apiv2.Cluster, 0) inputParams := &plugin.InputParams{ Env: env, @@ -117,8 +157,6 @@ func (configgen *ConfigGeneratorImpl) buildOutboundClusters(env *model.Environme } networkView := model.GetNetworkView(proxy) - // NOTE: Proxy can be nil here due to precomputed CDS - // TODO: get rid of precomputed CDS when adding NetworkScopes as precomputed CDS is not useful in that context for _, service := range push.Services(proxy) { config := push.DestinationRule(proxy, service.Hostname) for _, port := range service.Ports { @@ -134,7 +172,7 @@ func (configgen *ConfigGeneratorImpl) buildOutboundClusters(env *model.Environme discoveryType := convertResolution(service.Resolution) clusterName := model.BuildSubsetKey(model.TrafficDirectionOutbound, "", service.Hostname, port.Port) serviceAccounts := env.ServiceAccounts.GetIstioServiceAccounts(service.Hostname, []int{port.Port}) - defaultCluster := buildDefaultCluster(env, clusterName, discoveryType, lbEndpoints) + defaultCluster := buildDefaultCluster(env, clusterName, discoveryType, lbEndpoints, model.TrafficDirectionOutbound) 
updateEds(defaultCluster) setUpstreamProtocol(defaultCluster, port) @@ -143,7 +181,8 @@ func (configgen *ConfigGeneratorImpl) buildOutboundClusters(env *model.Environme if config != nil { destinationRule := config.Spec.(*networking.DestinationRule) defaultSni := model.BuildDNSSrvSubsetKey(model.TrafficDirectionOutbound, "", service.Hostname, port.Port) - applyTrafficPolicy(env, defaultCluster, destinationRule.TrafficPolicy, port, serviceAccounts, defaultSni, DefaultClusterMode) + applyTrafficPolicy(env, defaultCluster, destinationRule.TrafficPolicy, port, serviceAccounts, + defaultSni, DefaultClusterMode, model.TrafficDirectionOutbound) for _, subset := range destinationRule.Subsets { inputParams.Subset = subset.Name @@ -152,14 +191,16 @@ func (configgen *ConfigGeneratorImpl) buildOutboundClusters(env *model.Environme // clusters with discovery type STATIC, STRICT_DNS or LOGICAL_DNS rely on cluster.hosts field // ServiceEntry's need to filter hosts based on subset.labels in order to perform weighted routing - if discoveryType != v2.Cluster_EDS && len(subset.Labels) != 0 { + if discoveryType != apiv2.Cluster_EDS && len(subset.Labels) != 0 { lbEndpoints = buildLocalityLbEndpoints(env, networkView, service, port.Port, []model.Labels{subset.Labels}) } - subsetCluster := buildDefaultCluster(env, subsetClusterName, discoveryType, lbEndpoints) + subsetCluster := buildDefaultCluster(env, subsetClusterName, discoveryType, lbEndpoints, model.TrafficDirectionOutbound) updateEds(subsetCluster) setUpstreamProtocol(subsetCluster, port) - applyTrafficPolicy(env, subsetCluster, destinationRule.TrafficPolicy, port, serviceAccounts, defaultSni, DefaultClusterMode) - applyTrafficPolicy(env, subsetCluster, subset.TrafficPolicy, port, serviceAccounts, defaultSni, DefaultClusterMode) + applyTrafficPolicy(env, subsetCluster, destinationRule.TrafficPolicy, port, serviceAccounts, defaultSni, + DefaultClusterMode, model.TrafficDirectionOutbound) + applyTrafficPolicy(env, subsetCluster, 
subset.TrafficPolicy, port, serviceAccounts, defaultSni, + DefaultClusterMode, model.TrafficDirectionOutbound) // call plugins for _, p := range configgen.Plugins { p.OnOutboundCluster(inputParams, subsetCluster) @@ -179,8 +220,8 @@ func (configgen *ConfigGeneratorImpl) buildOutboundClusters(env *model.Environme } // SniDnat clusters do not have any TLS setting, as they simply forward traffic to upstream -func (configgen *ConfigGeneratorImpl) buildOutboundSniDnatClusters(env *model.Environment, proxy *model.Proxy, push *model.PushContext) []*v2.Cluster { - clusters := make([]*v2.Cluster, 0) +func (configgen *ConfigGeneratorImpl) buildOutboundSniDnatClusters(env *model.Environment, proxy *model.Proxy, push *model.PushContext) []*apiv2.Cluster { + clusters := make([]*apiv2.Cluster, 0) networkView := model.GetNetworkView(proxy) @@ -196,27 +237,30 @@ func (configgen *ConfigGeneratorImpl) buildOutboundSniDnatClusters(env *model.En discoveryType := convertResolution(service.Resolution) clusterName := model.BuildDNSSrvSubsetKey(model.TrafficDirectionOutbound, "", service.Hostname, port.Port) - defaultCluster := buildDefaultCluster(env, clusterName, discoveryType, lbEndpoints) + defaultCluster := buildDefaultCluster(env, clusterName, discoveryType, lbEndpoints, model.TrafficDirectionOutbound) defaultCluster.TlsContext = nil updateEds(defaultCluster) clusters = append(clusters, defaultCluster) if config != nil { destinationRule := config.Spec.(*networking.DestinationRule) - applyTrafficPolicy(env, defaultCluster, destinationRule.TrafficPolicy, port, nil, "", SniDnatClusterMode) + applyTrafficPolicy(env, defaultCluster, destinationRule.TrafficPolicy, port, nil, "", + SniDnatClusterMode, model.TrafficDirectionOutbound) for _, subset := range destinationRule.Subsets { subsetClusterName := model.BuildDNSSrvSubsetKey(model.TrafficDirectionOutbound, subset.Name, service.Hostname, port.Port) // clusters with discovery type STATIC, STRICT_DNS or LOGICAL_DNS rely on cluster.hosts 
field // ServiceEntry's need to filter hosts based on subset.labels in order to perform weighted routing - if discoveryType != v2.Cluster_EDS && len(subset.Labels) != 0 { + if discoveryType != apiv2.Cluster_EDS && len(subset.Labels) != 0 { lbEndpoints = buildLocalityLbEndpoints(env, networkView, service, port.Port, []model.Labels{subset.Labels}) } - subsetCluster := buildDefaultCluster(env, subsetClusterName, discoveryType, lbEndpoints) + subsetCluster := buildDefaultCluster(env, subsetClusterName, discoveryType, lbEndpoints, model.TrafficDirectionOutbound) subsetCluster.TlsContext = nil updateEds(subsetCluster) - applyTrafficPolicy(env, subsetCluster, destinationRule.TrafficPolicy, port, nil, "", SniDnatClusterMode) - applyTrafficPolicy(env, subsetCluster, subset.TrafficPolicy, port, nil, "", SniDnatClusterMode) + applyTrafficPolicy(env, subsetCluster, destinationRule.TrafficPolicy, port, nil, "", + SniDnatClusterMode, model.TrafficDirectionOutbound) + applyTrafficPolicy(env, subsetCluster, subset.TrafficPolicy, port, nil, "", + SniDnatClusterMode, model.TrafficDirectionOutbound) clusters = append(clusters, subsetCluster) } } @@ -226,11 +270,11 @@ func (configgen *ConfigGeneratorImpl) buildOutboundSniDnatClusters(env *model.En return clusters } -func updateEds(cluster *v2.Cluster) { - if cluster.Type != v2.Cluster_EDS { +func updateEds(cluster *apiv2.Cluster) { + if cluster.Type != apiv2.Cluster_EDS { return } - cluster.EdsClusterConfig = &v2.Cluster_EdsClusterConfig{ + cluster.EdsClusterConfig = &apiv2.Cluster_EdsClusterConfig{ ServiceName: cluster.Name, EdsConfig: &core.ConfigSource{ ConfigSourceSpecifier: &core.ConfigSource_Ads{ @@ -290,8 +334,8 @@ func buildLocalityLbEndpoints(env *model.Environment, proxyNetworkView map[strin return util.LocalityLbWeightNormalize(LocalityLbEndpoints) } -func buildInboundLocalityLbEndpoints(port int) []endpoint.LocalityLbEndpoints { - address := util.BuildAddress("127.0.0.1", uint32(port)) +func 
buildInboundLocalityLbEndpoints(bind string, port int) []endpoint.LocalityLbEndpoints { + address := util.BuildAddress(bind, uint32(port)) lbEndpoint := endpoint.LbEndpoint{ Endpoint: &endpoint.Endpoint{ Address: &address, @@ -305,65 +349,152 @@ func buildInboundLocalityLbEndpoints(port int) []endpoint.LocalityLbEndpoints { } func (configgen *ConfigGeneratorImpl) buildInboundClusters(env *model.Environment, proxy *model.Proxy, - push *model.PushContext, instances []*model.ServiceInstance, managementPorts []*model.Port) []*v2.Cluster { + push *model.PushContext, instances []*model.ServiceInstance, managementPorts []*model.Port) []*apiv2.Cluster { - clusters := make([]*v2.Cluster, 0) - inputParams := &plugin.InputParams{ - Env: env, - Push: push, - Node: proxy, - } + clusters := make([]*apiv2.Cluster, 0) - for _, instance := range instances { - // This cluster name is mainly for stats. - clusterName := model.BuildSubsetKey(model.TrafficDirectionInbound, "", instance.Service.Hostname, instance.Endpoint.ServicePort.Port) - localityLbEndpoints := buildInboundLocalityLbEndpoints(instance.Endpoint.Port) - localCluster := buildDefaultCluster(env, clusterName, v2.Cluster_STATIC, localityLbEndpoints) - setUpstreamProtocol(localCluster, instance.Endpoint.ServicePort) - // call plugins - inputParams.ServiceInstance = instance - for _, p := range configgen.Plugins { - p.OnInboundCluster(inputParams, localCluster) - } - - // When users specify circuit breakers, they need to be set on the receiver end - // (server side) as well as client side, so that the server has enough capacity - // (not the defaults) to handle the increased traffic volume - // TODO: This is not foolproof - if instance is part of multiple services listening on same port, - // choice of inbound cluster is arbitrary. So the connection pool settings may not apply cleanly. 
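The change to `buildInboundLocalityLbEndpoints` threads a caller-supplied bind address through instead of hardcoding `127.0.0.1`. A simplified sketch of the assumed behaviour, with a plain struct standing in for the Envoy address/endpoint types and a hypothetical `unix://` prefix check in place of `util.BuildAddress`:

```go
package main

import (
	"fmt"
	"strings"
)

// address is a simplified stand-in for the Envoy core.Address message, which
// is either a TCP socket address or a Unix domain socket pipe.
type address struct {
	Host string // host/IP for TCP, socket path for UDS
	Port uint32 // 0 for UDS
	UDS  bool
}

// buildAddress sketches the behaviour assumed of util.BuildAddress: a bind
// starting with "unix://" becomes a pipe address, anything else a TCP address.
func buildAddress(bind string, port uint32) address {
	if strings.HasPrefix(bind, "unix://") {
		return address{Host: strings.TrimPrefix(bind, "unix://"), UDS: true}
	}
	return address{Host: bind, Port: port}
}

// buildInboundLocalityLbEndpoints mirrors the patched signature: the bind
// address is now a parameter instead of the hardcoded "127.0.0.1".
func buildInboundLocalityLbEndpoints(bind string, port int) []address {
	return []address{buildAddress(bind, uint32(port))}
}

func main() {
	fmt.Println(buildInboundLocalityLbEndpoints("127.0.0.1", 8080)[0])
	fmt.Println(buildInboundLocalityLbEndpoints("unix:///var/run/app.sock", 0)[0])
}
```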
- config := push.DestinationRule(proxy, instance.Service.Hostname) - if config != nil { - destinationRule := config.Spec.(*networking.DestinationRule) - if destinationRule.TrafficPolicy != nil { - // only connection pool settings make sense on the inbound path. - // upstream TLS settings/outlier detection/load balancer don't apply here. - applyConnectionPool(env, localCluster, destinationRule.TrafficPolicy.ConnectionPool) + // The inbound clusters for a node depend on whether the node has a SidecarScope with inbound listeners + // or not. If the node has a SidecarScope with ingress listeners, we only return clusters corresponding + // to those listeners, i.e. clusters made out of the defaultEndpoint field. + // If the node has no SidecarScope and has interception mode set to NONE, then we should skip the inbound + // clusters, because there would be no corresponding inbound listeners + sidecarScope := proxy.SidecarScope + + if sidecarScope == nil || !sidecarScope.HasCustomIngressListeners { + // No user supplied sidecar scope or the user supplied one has no ingress listeners + + // We should not create inbound listeners in NONE mode based on the service instances + // Doing so will prevent the workloads from starting as they would be listening on the same port + // Users are required to provide the sidecar config to define the inbound listeners + if proxy.GetInterceptionMode() == model.InterceptionNone { + return nil + } + + for _, instance := range instances { + pluginParams := &plugin.InputParams{ + Env: env, + Node: proxy, + ServiceInstance: instance, + Port: instance.Endpoint.ServicePort, + Push: push, + Bind: LocalhostAddress, } + localCluster := configgen.buildInboundClusterForPortOrUDS(pluginParams) + clusters = append(clusters, localCluster) } - clusters = append(clusters, localCluster) - } - // Add a passthrough cluster for traffic to management ports (health check ports) - for _, port := range managementPorts { - clusterName =
model.BuildSubsetKey(model.TrafficDirectionInbound, "", ManagementClusterHostname, port.Port) - localityLbEndpoints := buildInboundLocalityLbEndpoints(port.Port) - mgmtCluster := buildDefaultCluster(env, clusterName, v2.Cluster_STATIC, localityLbEndpoints) - setUpstreamProtocol(mgmtCluster, port) - clusters = append(clusters, mgmtCluster) + // Add a passthrough cluster for traffic to management ports (health check ports) + for _, port := range managementPorts { + clusterName := model.BuildSubsetKey(model.TrafficDirectionInbound, port.Name, + ManagementClusterHostname, port.Port) + localityLbEndpoints := buildInboundLocalityLbEndpoints(LocalhostAddress, port.Port) + mgmtCluster := buildDefaultCluster(env, clusterName, apiv2.Cluster_STATIC, localityLbEndpoints, + model.TrafficDirectionInbound) + setUpstreamProtocol(mgmtCluster, port) + clusters = append(clusters, mgmtCluster) + } + } else { + rule := sidecarScope.Config.Spec.(*networking.Sidecar) + for _, ingressListener := range rule.Ingress { + // LDS would have setup the inbound clusters + // as inbound|portNumber|portName|Hostname + listenPort := &model.Port{ + Port: int(ingressListener.Port.Number), + Protocol: model.ParseProtocol(ingressListener.Port.Protocol), + Name: ingressListener.Port.Name, + } + + // When building an inbound cluster for the ingress listener, we take the defaultEndpoint specified + // by the user and parse it into host:port or a unix domain socket + // The default endpoint can be 127.0.0.1:port or :port or unix domain socket + bind := LocalhostAddress + port := 0 + var err error + if strings.HasPrefix(ingressListener.DefaultEndpoint, model.UnixAddressPrefix) { + // this is a UDS endpoint. assign it as is + bind = ingressListener.DefaultEndpoint + } else { + // parse the ip, port. 
Validation guarantees presence of : + parts := strings.Split(ingressListener.DefaultEndpoint, ":") + if port, err = strconv.Atoi(parts[1]); err != nil { + continue + } + } + + // First create a copy of a service instance + instance := &model.ServiceInstance{ + Endpoint: instances[0].Endpoint, + Service: instances[0].Service, + Labels: instances[0].Labels, + ServiceAccount: instances[0].ServiceAccount, + } + + // Update the values here so that the plugins use the right ports + // uds values + // TODO: all plugins need to be updated to account for the fact that + // the port may be 0 but bind may have a UDS value + // Inboundroute will be different for + instance.Endpoint.Address = bind + instance.Endpoint.ServicePort = listenPort + instance.Endpoint.Port = port + + pluginParams := &plugin.InputParams{ + Env: env, + Node: proxy, + ServiceInstance: instances[0], + Port: listenPort, + Push: push, + Bind: bind, + } + localCluster := configgen.buildInboundClusterForPortOrUDS(pluginParams) + clusters = append(clusters, localCluster) + } } + return clusters } -func convertResolution(resolution model.Resolution) v2.Cluster_DiscoveryType { +func (configgen *ConfigGeneratorImpl) buildInboundClusterForPortOrUDS(pluginParams *plugin.InputParams) *apiv2.Cluster { + instance := pluginParams.ServiceInstance + clusterName := model.BuildSubsetKey(model.TrafficDirectionInbound, instance.Endpoint.ServicePort.Name, + instance.Service.Hostname, instance.Endpoint.ServicePort.Port) + localityLbEndpoints := buildInboundLocalityLbEndpoints(pluginParams.Bind, instance.Endpoint.Port) + localCluster := buildDefaultCluster(pluginParams.Env, clusterName, apiv2.Cluster_STATIC, localityLbEndpoints, + model.TrafficDirectionInbound) + setUpstreamProtocol(localCluster, instance.Endpoint.ServicePort) + // call plugins + for _, p := range configgen.Plugins { + p.OnInboundCluster(pluginParams, localCluster) + } + + // When users specify circuit breakers, they need to be set on the receiver end + // 
(server side) as well as client side, so that the server has enough capacity + // (not the defaults) to handle the increased traffic volume + // TODO: This is not foolproof - if instance is part of multiple services listening on same port, + // choice of inbound cluster is arbitrary. So the connection pool settings may not apply cleanly. + config := pluginParams.Push.DestinationRule(pluginParams.Node, instance.Service.Hostname) + if config != nil { + destinationRule := config.Spec.(*networking.DestinationRule) + if destinationRule.TrafficPolicy != nil { + // only connection pool settings make sense on the inbound path. + // upstream TLS settings/outlier detection/load balancer don't apply here. + applyConnectionPool(pluginParams.Env, localCluster, destinationRule.TrafficPolicy.ConnectionPool, + model.TrafficDirectionInbound) + } + } + return localCluster +} + +func convertResolution(resolution model.Resolution) apiv2.Cluster_DiscoveryType { switch resolution { case model.ClientSideLB: - return v2.Cluster_EDS + return apiv2.Cluster_EDS case model.DNSLB: - return v2.Cluster_STRICT_DNS + return apiv2.Cluster_STRICT_DNS case model.Passthrough: - return v2.Cluster_ORIGINAL_DST + return apiv2.Cluster_ORIGINAL_DST default: - return v2.Cluster_EDS + return apiv2.Cluster_EDS } } @@ -443,14 +574,14 @@ const ( // FIXME: There are too many variables here. 
Create a clusterOpts struct and stick the values in it, just like // listenerOpts -func applyTrafficPolicy(env *model.Environment, cluster *v2.Cluster, policy *networking.TrafficPolicy, - port *model.Port, serviceAccounts []string, defaultSni string, clusterMode ClusterMode) { +func applyTrafficPolicy(env *model.Environment, cluster *apiv2.Cluster, policy *networking.TrafficPolicy, + port *model.Port, serviceAccounts []string, defaultSni string, clusterMode ClusterMode, direction model.TrafficDirection) { if policy == nil { return } connectionPool, outlierDetection, loadBalancer, tls := SelectTrafficPolicyComponents(policy, port) - applyConnectionPool(env, cluster, connectionPool) + applyConnectionPool(env, cluster, connectionPool, direction) applyOutlierDetection(cluster, outlierDetection) applyLoadBalancer(cluster, loadBalancer) if clusterMode != SniDnatClusterMode { @@ -460,13 +591,12 @@ func applyTrafficPolicy(env *model.Environment, cluster *v2.Cluster, policy *net } // FIXME: there isn't a way to distinguish between unset values and zero values -func applyConnectionPool(env *model.Environment, cluster *v2.Cluster, settings *networking.ConnectionPoolSettings) { +func applyConnectionPool(env *model.Environment, cluster *apiv2.Cluster, settings *networking.ConnectionPoolSettings, direction model.TrafficDirection) { if settings == nil { return } - threshold := &v2_cluster.CircuitBreakers_Thresholds{} - + threshold := GetDefaultCircuitBreakerThresholds(direction) if settings.Http != nil { if settings.Http.Http2MaxRequests > 0 { // Envoy only applies MaxRequests in HTTP/2 clusters @@ -481,7 +611,7 @@ func applyConnectionPool(env *model.Environment, cluster *v2.Cluster, settings * cluster.MaxRequestsPerConnection = &types.UInt32Value{Value: uint32(settings.Http.MaxRequestsPerConnection)} } - // FIXME: zero is a valid value if explicitly set, otherwise we want to use the default value of 3 + // FIXME: zero is a valid value if explicitly set, otherwise we want to use 
the default if settings.Http.MaxRetries > 0 { threshold.MaxRetries = &types.UInt32Value{Value: uint32(settings.Http.MaxRetries)} } @@ -499,12 +629,12 @@ func applyConnectionPool(env *model.Environment, cluster *v2.Cluster, settings * applyTCPKeepalive(env, cluster, settings) } - cluster.CircuitBreakers = &v2_cluster.CircuitBreakers{ - Thresholds: []*v2_cluster.CircuitBreakers_Thresholds{threshold}, + cluster.CircuitBreakers = &v2Cluster.CircuitBreakers{ + Thresholds: []*v2Cluster.CircuitBreakers_Thresholds{threshold}, } } -func applyTCPKeepalive(env *model.Environment, cluster *v2.Cluster, settings *networking.ConnectionPoolSettings) { +func applyTCPKeepalive(env *model.Environment, cluster *apiv2.Cluster, settings *networking.ConnectionPoolSettings) { var keepaliveProbes uint32 var keepaliveTime *types.Duration var keepaliveInterval *types.Duration @@ -532,7 +662,7 @@ func applyTCPKeepalive(env *model.Environment, cluster *v2.Cluster, settings *ne // If none of the proto fields are set, then an empty tcp_keepalive is set in Envoy. // That would set SO_KEEPALIVE on the socket with OS default values. 
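The patched `applyConnectionPool` starts from direction-specific defaults (`GetDefaultCircuitBreakerThresholds(direction)`) and only overrides thresholds that are explicitly set, i.e. values greater than zero, since the proto cannot distinguish unset from zero (the FIXME above). A simplified sketch of that merge, with plain structs in place of the Envoy threshold and Istio settings types:

```go
package main

import "fmt"

// thresholds is a stand-in for the Envoy CircuitBreakers_Thresholds message.
type thresholds struct {
	MaxRequests        uint32
	MaxPendingRequests uint32
	MaxRetries         uint32
}

// httpSettings is a stand-in for ConnectionPoolSettings_HTTPSettings.
type httpSettings struct {
	Http1MaxPendingRequests int32
	Http2MaxRequests        int32
	MaxRetries              int32
}

// applyConnectionPool mirrors the merge logic: begin with the defaults for
// the traffic direction, then override only explicitly-set (> 0) values.
func applyConnectionPool(defaults thresholds, s *httpSettings) thresholds {
	t := defaults
	if s == nil {
		return t
	}
	if s.Http2MaxRequests > 0 { // Envoy only applies MaxRequests in HTTP/2 clusters
		t.MaxRequests = uint32(s.Http2MaxRequests)
	}
	if s.Http1MaxPendingRequests > 0 { // MaxPendingRequests applies to HTTP/1.1 clusters
		t.MaxPendingRequests = uint32(s.Http1MaxPendingRequests)
	}
	if s.MaxRetries > 0 { // zero keeps the default, per the FIXME above
		t.MaxRetries = uint32(s.MaxRetries)
	}
	return t
}

func main() {
	defaults := thresholds{MaxRequests: 1024, MaxPendingRequests: 1024, MaxRetries: 3}
	out := applyConnectionPool(defaults, &httpSettings{Http2MaxRequests: 2, MaxRetries: 4})
	fmt.Println(out.MaxRequests, out.MaxPendingRequests, out.MaxRetries) // 2 1024 4
}
```

The default threshold values here are illustrative, not the actual values returned by `GetDefaultCircuitBreakerThresholds`.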
- upstreamConnectionOptions := &v2.UpstreamConnectionOptions{ + upstreamConnectionOptions := &apiv2.UpstreamConnectionOptions{ TcpKeepalive: &core.TcpKeepalive{}, } @@ -553,12 +683,12 @@ func applyTCPKeepalive(env *model.Environment, cluster *v2.Cluster, settings *ne } // FIXME: there isn't a way to distinguish between unset values and zero values -func applyOutlierDetection(cluster *v2.Cluster, outlier *networking.OutlierDetection) { +func applyOutlierDetection(cluster *apiv2.Cluster, outlier *networking.OutlierDetection) { if outlier == nil { return } - out := &v2_cluster.OutlierDetection{} + out := &v2Cluster.OutlierDetection{} if outlier.BaseEjectionTime != nil { out.BaseEjectionTime = outlier.BaseEjectionTime } @@ -579,13 +709,13 @@ func applyOutlierDetection(cluster *v2.Cluster, outlier *networking.OutlierDetec if outlier.MinHealthPercent > 0 { if cluster.CommonLbConfig == nil { - cluster.CommonLbConfig = &v2.Cluster_CommonLbConfig{} + cluster.CommonLbConfig = &apiv2.Cluster_CommonLbConfig{} } cluster.CommonLbConfig.HealthyPanicThreshold = &envoy_type.Percent{Value: float64(outlier.MinHealthPercent)} } } -func applyLoadBalancer(cluster *v2.Cluster, lb *networking.LoadBalancerSettings) { +func applyLoadBalancer(cluster *apiv2.Cluster, lb *networking.LoadBalancerSettings) { if lb == nil { return } @@ -593,30 +723,30 @@ func applyLoadBalancer(cluster *v2.Cluster, lb *networking.LoadBalancerSettings) // TODO: MAGLEV switch lb.GetSimple() { case networking.LoadBalancerSettings_LEAST_CONN: - cluster.LbPolicy = v2.Cluster_LEAST_REQUEST + cluster.LbPolicy = apiv2.Cluster_LEAST_REQUEST case networking.LoadBalancerSettings_RANDOM: - cluster.LbPolicy = v2.Cluster_RANDOM + cluster.LbPolicy = apiv2.Cluster_RANDOM case networking.LoadBalancerSettings_ROUND_ROBIN: - cluster.LbPolicy = v2.Cluster_ROUND_ROBIN + cluster.LbPolicy = apiv2.Cluster_ROUND_ROBIN case networking.LoadBalancerSettings_PASSTHROUGH: - cluster.LbPolicy = v2.Cluster_ORIGINAL_DST_LB - cluster.Type = 
v2.Cluster_ORIGINAL_DST + cluster.LbPolicy = apiv2.Cluster_ORIGINAL_DST_LB + cluster.Type = apiv2.Cluster_ORIGINAL_DST } // Do not use if-else here, since lb.GetSimple returns an enum value (not a pointer). consistentHash := lb.GetConsistentHash() if consistentHash != nil { - cluster.LbPolicy = v2.Cluster_RING_HASH - cluster.LbConfig = &v2.Cluster_RingHashLbConfig_{ - RingHashLbConfig: &v2.Cluster_RingHashLbConfig{ + cluster.LbPolicy = apiv2.Cluster_RING_HASH + cluster.LbConfig = &apiv2.Cluster_RingHashLbConfig_{ + RingHashLbConfig: &apiv2.Cluster_RingHashLbConfig{ MinimumRingSize: &types.UInt64Value{Value: consistentHash.GetMinimumRingSize()}, }, } } } -func applyUpstreamTLSSettings(env *model.Environment, cluster *v2.Cluster, tls *networking.TLSSettings) { +func applyUpstreamTLSSettings(env *model.Environment, cluster *apiv2.Cluster, tls *networking.TLSSettings) { if tls == nil { return } @@ -717,7 +847,7 @@ func applyUpstreamTLSSettings(env *model.Environment, cluster *v2.Cluster, tls * } } -func setUpstreamProtocol(cluster *v2.Cluster, port *model.Port) { +func setUpstreamProtocol(cluster *apiv2.Cluster, port *model.Port) { if port.Protocol.IsHTTP2() { cluster.Http2ProtocolOptions = &core.Http2ProtocolOptions{ // Envoy default value of 100 is too low for data path. @@ -730,53 +860,58 @@ func setUpstreamProtocol(cluster *v2.Cluster, port *model.Port) { // generates a cluster that sends traffic to dummy localport 0 // This cluster is used to catch all traffic to unresolved destinations in virtual service -func buildBlackHoleCluster() *v2.Cluster { - cluster := &v2.Cluster{ +func buildBlackHoleCluster() *apiv2.Cluster { + cluster := &apiv2.Cluster{ Name: util.BlackHoleCluster, - Type: v2.Cluster_STATIC, + Type: apiv2.Cluster_STATIC, ConnectTimeout: 1 * time.Second, - LbPolicy: v2.Cluster_ROUND_ROBIN, + LbPolicy: apiv2.Cluster_ROUND_ROBIN, } return cluster } // generates a cluster that sends traffic to the original destination.
// This cluster is used to catch all traffic to unknown listener ports -func buildDefaultPassthroughCluster() *v2.Cluster { - cluster := &v2.Cluster{ +func buildDefaultPassthroughCluster() *apiv2.Cluster { + cluster := &apiv2.Cluster{ Name: util.PassthroughCluster, - Type: v2.Cluster_ORIGINAL_DST, + Type: apiv2.Cluster_ORIGINAL_DST, ConnectTimeout: 1 * time.Second, - LbPolicy: v2.Cluster_ORIGINAL_DST_LB, + LbPolicy: apiv2.Cluster_ORIGINAL_DST_LB, } return cluster } // TODO: supply LbEndpoints or even better, LocalityLbEndpoints here // change all other callsites accordingly -func buildDefaultCluster(env *model.Environment, name string, discoveryType v2.Cluster_DiscoveryType, - localityLbEndpoints []endpoint.LocalityLbEndpoints) *v2.Cluster { - cluster := &v2.Cluster{ +func buildDefaultCluster(env *model.Environment, name string, discoveryType apiv2.Cluster_DiscoveryType, + localityLbEndpoints []endpoint.LocalityLbEndpoints, direction model.TrafficDirection) *apiv2.Cluster { + cluster := &apiv2.Cluster{ Name: name, Type: discoveryType, - LoadAssignment: &v2.ClusterLoadAssignment{ - ClusterName: name, - Endpoints: localityLbEndpoints, - }, } - if discoveryType == v2.Cluster_STRICT_DNS || discoveryType == v2.Cluster_LOGICAL_DNS { - cluster.DnsLookupFamily = v2.Cluster_V4_ONLY + if discoveryType == apiv2.Cluster_STRICT_DNS || discoveryType == apiv2.Cluster_LOGICAL_DNS { + cluster.DnsLookupFamily = apiv2.Cluster_V4_ONLY + } + + if discoveryType == apiv2.Cluster_STATIC || discoveryType == apiv2.Cluster_STRICT_DNS || + discoveryType == apiv2.Cluster_LOGICAL_DNS { + cluster.LoadAssignment = &apiv2.ClusterLoadAssignment{ + ClusterName: name, + Endpoints: localityLbEndpoints, + } } defaultTrafficPolicy := buildDefaultTrafficPolicy(env, discoveryType) - applyTrafficPolicy(env, cluster, defaultTrafficPolicy, nil, nil, "", DefaultClusterMode) + applyTrafficPolicy(env, cluster, defaultTrafficPolicy, nil, nil, "", + DefaultClusterMode, direction) return cluster } -func 
buildDefaultTrafficPolicy(env *model.Environment, discoveryType v2.Cluster_DiscoveryType) *networking.TrafficPolicy { +func buildDefaultTrafficPolicy(env *model.Environment, discoveryType apiv2.Cluster_DiscoveryType) *networking.TrafficPolicy { lbPolicy := DefaultLbType - if discoveryType == v2.Cluster_ORIGINAL_DST { + if discoveryType == apiv2.Cluster_ORIGINAL_DST { lbPolicy = networking.LoadBalancerSettings_PASSTHROUGH } return &networking.TrafficPolicy{ diff --git a/pilot/pkg/networking/core/v1alpha3/cluster_test.go b/pilot/pkg/networking/core/v1alpha3/cluster_test.go index 3a7a8b8964d7..2c3b5b252d69 100644 --- a/pilot/pkg/networking/core/v1alpha3/cluster_test.go +++ b/pilot/pkg/networking/core/v1alpha3/cluster_test.go @@ -15,12 +15,14 @@ package v1alpha3_test import ( + "fmt" "testing" "time" - v2 "github.com/envoyproxy/go-control-plane/envoy/api/v2" + apiv2 "github.com/envoyproxy/go-control-plane/envoy/api/v2" + "github.com/gogo/protobuf/proto" "github.com/gogo/protobuf/types" - "github.com/onsi/gomega" + . 
"github.com/onsi/gomega" meshconfig "istio.io/api/mesh/v1alpha1" networking "istio.io/api/networking/v1alpha3" @@ -41,294 +43,296 @@ const ( DestinationRuleTCPKeepaliveSeconds = 21 ) -func TestBuildGatewayClustersWithRingHashLb(t *testing.T) { - g := gomega.NewGomegaWithT(t) - - configgen := core.NewConfigGenerator([]plugin.Plugin{}) - proxy := &model.Proxy{ - ClusterID: "some-cluster-id", - Type: model.Router, - IPAddresses: []string{"6.6.6.6"}, - DNSDomain: "default.example.org", - Metadata: make(map[string]string), +var ( + testMesh = meshconfig.MeshConfig{ + ConnectTimeout: &types.Duration{ + Seconds: 10, + Nanos: 1, + }, } +) - env := buildEnvForClustersWithRingHashLb() - - clusters, err := configgen.BuildClusters(env, proxy, env.PushContext) - g.Expect(err).NotTo(gomega.HaveOccurred()) - - g.Expect(len(clusters)).To(gomega.Equal(3)) - - cluster := clusters[0] - g.Expect(cluster.LbPolicy).To(gomega.Equal(v2.Cluster_RING_HASH)) - g.Expect(cluster.GetRingHashLbConfig().GetMinimumRingSize().GetValue()).To(gomega.Equal(uint64(2))) - g.Expect(cluster.Name).To(gomega.Equal("outbound|8080||*.example.org")) - g.Expect(cluster.Type).To(gomega.Equal(v2.Cluster_EDS)) - g.Expect(cluster.ConnectTimeout).To(gomega.Equal(time.Duration(10000000001))) -} - -func buildEnvForClustersWithRingHashLb() *model.Environment { - serviceDiscovery := &fakes.ServiceDiscovery{} +func TestHTTPCircuitBreakerThresholds(t *testing.T) { + g := NewGomegaWithT(t) - serviceDiscovery.ServicesReturns([]*model.Service{ + directionInfos := []struct { + direction model.TrafficDirection + clusterIndex int + }{ { - Hostname: "*.example.org", - Address: "1.1.1.1", - ClusterVIPs: make(map[string]string), - Ports: model.PortList{ - &model.Port{ - Name: "default", - Port: 8080, - Protocol: model.ProtocolHTTP, - }, - }, - }, - }, nil) - - meshConfig := &meshconfig.MeshConfig{ - ConnectTimeout: &types.Duration{ - Seconds: 10, - Nanos: 1, + direction: model.TrafficDirectionOutbound, + clusterIndex: 0, + }, { + 
direction: model.TrafficDirectionInbound, + clusterIndex: 1, }, } + settings := []*networking.ConnectionPoolSettings{ + nil, + { + Http: &networking.ConnectionPoolSettings_HTTPSettings{ + Http1MaxPendingRequests: 1, + Http2MaxRequests: 2, + MaxRequestsPerConnection: 3, + MaxRetries: 4, + }, + }} + + for _, directionInfo := range directionInfos { + for _, s := range settings { + settingsName := "default" + if s != nil { + settingsName = "override" + } + testName := fmt.Sprintf("%s-%s", directionInfo.direction, settingsName) + t.Run(testName, func(t *testing.T) { + clusters, err := buildTestClusters("*.example.org", model.SidecarProxy, testMesh, + &networking.DestinationRule{ + Host: "*.example.org", + TrafficPolicy: &networking.TrafficPolicy{ + ConnectionPool: s, + }, + }) + g.Expect(err).NotTo(HaveOccurred()) + g.Expect(len(clusters)).To(Equal(4)) + cluster := clusters[directionInfo.clusterIndex] + g.Expect(len(cluster.CircuitBreakers.Thresholds)).To(Equal(1)) + thresholds := cluster.CircuitBreakers.Thresholds[0] + + if s == nil { + // Assume the correct defaults for this direction. + g.Expect(thresholds).To(Equal(core.GetDefaultCircuitBreakerThresholds(directionInfo.direction))) + } else { + // Verify that the values were set correctly. 
+ g.Expect(thresholds.MaxPendingRequests).To(Not(BeNil())) + g.Expect(thresholds.MaxPendingRequests.Value).To(Equal(uint32(s.Http.Http1MaxPendingRequests))) + g.Expect(thresholds.MaxRequests).To(Not(BeNil())) + g.Expect(thresholds.MaxRequests.Value).To(Equal(uint32(s.Http.Http2MaxRequests))) + g.Expect(cluster.MaxRequestsPerConnection).To(Not(BeNil())) + g.Expect(cluster.MaxRequestsPerConnection.Value).To(Equal(uint32(s.Http.MaxRequestsPerConnection))) + g.Expect(thresholds.MaxRetries).To(Not(BeNil())) + g.Expect(thresholds.MaxRetries.Value).To(Equal(uint32(s.Http.MaxRetries))) + } + }) + } + } +} - ttl := time.Nanosecond * 100 - configStore := &fakes.IstioConfigStore{} +func buildTestClusters(serviceHostname string, nodeType model.NodeType, mesh meshconfig.MeshConfig, + destRule proto.Message) ([]*apiv2.Cluster, error) { + configgen := core.NewConfigGenerator([]plugin.Plugin{}) - env := &model.Environment{ - ServiceDiscovery: serviceDiscovery, - ServiceAccounts: &fakes.ServiceAccounts{}, - IstioConfigStore: configStore, - Mesh: meshConfig, - MixerSAN: []string{}, + serviceDiscovery := &fakes.ServiceDiscovery{} + + servicePort := &model.Port{ + Name: "default", + Port: 8080, + Protocol: model.ProtocolHTTP, + } + service := &model.Service{ + Hostname: model.Hostname(serviceHostname), + Address: "1.1.1.1", + ClusterVIPs: make(map[string]string), + Ports: model.PortList{servicePort}, + } + instance := &model.ServiceInstance{ + Service: service, + Endpoint: model.NetworkEndpoint{ + Address: "192.168.1.1", + Port: 10001, + ServicePort: servicePort, + }, } + serviceDiscovery.ServicesReturns([]*model.Service{service}, nil) + serviceDiscovery.GetProxyServiceInstancesReturns([]*model.ServiceInstance{instance}, nil) - env.PushContext = model.NewPushContext() - env.PushContext.InitContext(env) + env := newTestEnvironment(serviceDiscovery, mesh) env.PushContext.SetDestinationRules([]model.Config{ {ConfigMeta: model.ConfigMeta{ Type: model.DestinationRule.Type, Version: 
model.DestinationRule.Version, Name: "acme", }, - Spec: &networking.DestinationRule{ - Host: "*.example.org", - TrafficPolicy: &networking.TrafficPolicy{ - LoadBalancer: &networking.LoadBalancerSettings{ - LbPolicy: &networking.LoadBalancerSettings_ConsistentHash{ - ConsistentHash: &networking.LoadBalancerSettings_ConsistentHashLB{ - MinimumRingSize: uint64(2), - HashKey: &networking.LoadBalancerSettings_ConsistentHashLB_HttpCookie{ - HttpCookie: &networking.LoadBalancerSettings_ConsistentHashLB_HTTPCookie{ - Name: "hash-cookie", - Ttl: &ttl, - }, + Spec: destRule, + }}) + + var proxy *model.Proxy + switch nodeType { + case model.SidecarProxy: + proxy = &model.Proxy{ + ClusterID: "some-cluster-id", + Type: model.SidecarProxy, + IPAddresses: []string{"6.6.6.6"}, + DNSDomain: "com", + Metadata: make(map[string]string), + } + case model.Router: + proxy = &model.Proxy{ + ClusterID: "some-cluster-id", + Type: model.Router, + IPAddresses: []string{"6.6.6.6"}, + DNSDomain: "default.example.org", + Metadata: make(map[string]string), + } + default: + panic(fmt.Sprintf("unsupported node type: %v", nodeType)) + } + + return configgen.BuildClusters(env, proxy, env.PushContext) +} + +func TestBuildGatewayClustersWithRingHashLb(t *testing.T) { + g := NewGomegaWithT(t) + + ttl := time.Nanosecond * 100 + clusters, err := buildTestClusters("*.example.org", model.Router, testMesh, + &networking.DestinationRule{ + Host: "*.example.org", + TrafficPolicy: &networking.TrafficPolicy{ + LoadBalancer: &networking.LoadBalancerSettings{ + LbPolicy: &networking.LoadBalancerSettings_ConsistentHash{ + ConsistentHash: &networking.LoadBalancerSettings_ConsistentHashLB{ + MinimumRingSize: uint64(2), + HashKey: &networking.LoadBalancerSettings_ConsistentHashLB_HttpCookie{ + HttpCookie: &networking.LoadBalancerSettings_ConsistentHashLB_HTTPCookie{ + Name: "hash-cookie", + Ttl: &ttl, }, }, }, }, }, }, - }}) + }) + g.Expect(err).NotTo(HaveOccurred()) - return env + g.Expect(len(clusters)).To(Equal(3)) 
+ + cluster := clusters[0] + g.Expect(cluster.LbPolicy).To(Equal(apiv2.Cluster_RING_HASH)) + g.Expect(cluster.GetRingHashLbConfig().GetMinimumRingSize().GetValue()).To(Equal(uint64(2))) + g.Expect(cluster.Name).To(Equal("outbound|8080||*.example.org")) + g.Expect(cluster.Type).To(Equal(apiv2.Cluster_EDS)) + g.Expect(cluster.ConnectTimeout).To(Equal(time.Duration(10000000001))) } -func TestBuildSidecarClustersWithIstioMutualAndSNI(t *testing.T) { - g := gomega.NewGomegaWithT(t) +func newTestEnvironment(serviceDiscovery model.ServiceDiscovery, mesh meshconfig.MeshConfig) *model.Environment { + configStore := &fakes.IstioConfigStore{} - configgen := core.NewConfigGenerator([]plugin.Plugin{}) - proxy := &model.Proxy{ - ClusterID: "some-cluster-id", - Type: model.Sidecar, - IPAddresses: []string{"6.6.6.6"}, - DNSDomain: "com", - Metadata: make(map[string]string), + env := &model.Environment{ + ServiceDiscovery: serviceDiscovery, + ServiceAccounts: &fakes.ServiceAccounts{}, + IstioConfigStore: configStore, + Mesh: &mesh, + MixerSAN: []string{}, } - env := buildEnvForClustersWithIstioMutualWithSNI("foo.com") + env.PushContext = model.NewPushContext() + _ = env.PushContext.InitContext(env) - clusters, err := configgen.BuildClusters(env, proxy, env.PushContext) - g.Expect(err).NotTo(gomega.HaveOccurred()) + return env +} - g.Expect(len(clusters)).To(gomega.Equal(4)) +func TestBuildSidecarClustersWithIstioMutualAndSNI(t *testing.T) { + g := NewGomegaWithT(t) - cluster := clusters[1] - g.Expect(cluster.Name).To(gomega.Equal("outbound|8080|foobar|foo.example.org")) - g.Expect(cluster.TlsContext.GetSni()).To(gomega.Equal("foo.com")) + clusters, err := buildSniTestClusters("foo.com") + g.Expect(err).NotTo(HaveOccurred()) - // Check if SNI values are being automatically populated - env = buildEnvForClustersWithIstioMutualWithSNI("") + g.Expect(len(clusters)).To(Equal(5)) + + cluster := clusters[1] + g.Expect(cluster.Name).To(Equal("outbound|8080|foobar|foo.example.org")) + 
g.Expect(cluster.TlsContext.GetSni()).To(Equal("foo.com")) - clusters, err = configgen.BuildClusters(env, proxy, env.PushContext) - g.Expect(err).NotTo(gomega.HaveOccurred()) + clusters, err = buildSniTestClusters("") + g.Expect(err).NotTo(HaveOccurred()) - g.Expect(len(clusters)).To(gomega.Equal(4)) + g.Expect(len(clusters)).To(Equal(5)) cluster = clusters[1] - g.Expect(cluster.Name).To(gomega.Equal("outbound|8080|foobar|foo.example.org")) - g.Expect(cluster.TlsContext.GetSni()).To(gomega.Equal("outbound_.8080_.foobar_.foo.example.org")) + g.Expect(cluster.Name).To(Equal("outbound|8080|foobar|foo.example.org")) + g.Expect(cluster.TlsContext.GetSni()).To(Equal("outbound_.8080_.foobar_.foo.example.org")) } -func buildEnvForClustersWithIstioMutualWithSNI(sniValue string) *model.Environment { - serviceDiscovery := &fakes.ServiceDiscovery{} - - serviceDiscovery.ServicesReturns([]*model.Service{ - { - Hostname: "foo.example.org", - Address: "1.1.1.1", - ClusterVIPs: make(map[string]string), - Ports: model.PortList{ - &model.Port{ - Name: "default", - Port: 8080, - Protocol: model.ProtocolHTTP, - }, - }, - }, - }, nil) - - meshConfig := &meshconfig.MeshConfig{ - ConnectTimeout: &types.Duration{ - Seconds: 10, - Nanos: 1, - }, - } - - configStore := &fakes.IstioConfigStore{} - - env := &model.Environment{ - ServiceDiscovery: serviceDiscovery, - ServiceAccounts: &fakes.ServiceAccounts{}, - IstioConfigStore: configStore, - Mesh: meshConfig, - MixerSAN: []string{}, - } - - env.PushContext = model.NewPushContext() - env.PushContext.InitContext(env) - env.PushContext.SetDestinationRules([]model.Config{ - {ConfigMeta: model.ConfigMeta{ - Type: model.DestinationRule.Type, - Version: model.DestinationRule.Version, - Name: "acme", - }, - Spec: &networking.DestinationRule{ - Host: "*.example.org", - Subsets: []*networking.Subset{ - { - Name: "foobar", - Labels: map[string]string{"foo": "bar"}, - TrafficPolicy: &networking.TrafficPolicy{ - PortLevelSettings: 
[]*networking.TrafficPolicy_PortTrafficPolicy{ - { - Port: &networking.PortSelector{ - Port: &networking.PortSelector_Number{Number: 8080}, - }, - Tls: &networking.TLSSettings{ - Mode: networking.TLSSettings_ISTIO_MUTUAL, - Sni: sniValue, - }, +func buildSniTestClusters(sniValue string) ([]*apiv2.Cluster, error) { + return buildTestClusters("foo.example.org", model.SidecarProxy, testMesh, + &networking.DestinationRule{ + Host: "*.example.org", + Subsets: []*networking.Subset{ + { + Name: "foobar", + Labels: map[string]string{"foo": "bar"}, + TrafficPolicy: &networking.TrafficPolicy{ + PortLevelSettings: []*networking.TrafficPolicy_PortTrafficPolicy{ + { + Port: &networking.PortSelector{ + Port: &networking.PortSelector_Number{Number: 8080}, + }, + Tls: &networking.TLSSettings{ + Mode: networking.TLSSettings_ISTIO_MUTUAL, + Sni: sniValue, }, }, }, }, }, }, - }}) - - return env + }) } func TestBuildSidecarClustersWithMeshWideTCPKeepalive(t *testing.T) { - g := gomega.NewGomegaWithT(t) - - configgen := core.NewConfigGenerator([]plugin.Plugin{}) - proxy := &model.Proxy{ - ClusterID: "some-cluster-id", - Type: model.Sidecar, - IPAddresses: []string{"6.6.6.6"}, - DNSDomain: "com", - Metadata: make(map[string]string), - } + g := NewGomegaWithT(t) // Do not set tcp_keepalive anywhere - env := buildEnvForClustersWithTCPKeepalive(None) - clusters, err := configgen.BuildClusters(env, proxy, env.PushContext) - g.Expect(err).NotTo(gomega.HaveOccurred()) - g.Expect(len(clusters)).To(gomega.Equal(4)) + clusters, err := buildTestClustersWithTCPKeepalive(None) + g.Expect(err).NotTo(HaveOccurred()) + g.Expect(len(clusters)).To(Equal(5)) cluster := clusters[1] - g.Expect(cluster.Name).To(gomega.Equal("outbound|8080|foobar|foo.example.org")) + g.Expect(cluster.Name).To(Equal("outbound|8080|foobar|foo.example.org")) // UpstreamConnectionOptions should be nil. TcpKeepalive is the only field in it currently. 
- g.Expect(cluster.UpstreamConnectionOptions).To(gomega.BeNil()) + g.Expect(cluster.UpstreamConnectionOptions).To(BeNil()) // Set mesh wide default for tcp_keepalive. - env = buildEnvForClustersWithTCPKeepalive(Mesh) - clusters, err = configgen.BuildClusters(env, proxy, env.PushContext) - g.Expect(err).NotTo(gomega.HaveOccurred()) - g.Expect(len(clusters)).To(gomega.Equal(4)) + clusters, err = buildTestClustersWithTCPKeepalive(Mesh) + g.Expect(err).NotTo(HaveOccurred()) + g.Expect(len(clusters)).To(Equal(5)) cluster = clusters[1] - g.Expect(cluster.Name).To(gomega.Equal("outbound|8080|foobar|foo.example.org")) + g.Expect(cluster.Name).To(Equal("outbound|8080|foobar|foo.example.org")) // KeepaliveTime should be set but rest should be nil. - g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveProbes).To(gomega.BeNil()) - g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveTime.Value).To(gomega.Equal(uint32(MeshWideTCPKeepaliveSeconds))) - g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveInterval).To(gomega.BeNil()) + g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveProbes).To(BeNil()) + g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveTime.Value).To(Equal(uint32(MeshWideTCPKeepaliveSeconds))) + g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveInterval).To(BeNil()) // Set DestinationRule override for tcp_keepalive. 
- env = buildEnvForClustersWithTCPKeepalive(DestinationRule) - clusters, err = configgen.BuildClusters(env, proxy, env.PushContext) - g.Expect(err).NotTo(gomega.HaveOccurred()) - g.Expect(len(clusters)).To(gomega.Equal(4)) + clusters, err = buildTestClustersWithTCPKeepalive(DestinationRule) + g.Expect(err).NotTo(HaveOccurred()) + g.Expect(len(clusters)).To(Equal(5)) cluster = clusters[1] - g.Expect(cluster.Name).To(gomega.Equal("outbound|8080|foobar|foo.example.org")) + g.Expect(cluster.Name).To(Equal("outbound|8080|foobar|foo.example.org")) // KeepaliveTime should be set but rest should be nil. - g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveProbes).To(gomega.BeNil()) - g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveTime.Value).To(gomega.Equal(uint32(DestinationRuleTCPKeepaliveSeconds))) - g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveInterval).To(gomega.BeNil()) + g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveProbes).To(BeNil()) + g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveTime.Value).To(Equal(uint32(DestinationRuleTCPKeepaliveSeconds))) + g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveInterval).To(BeNil()) // Set DestinationRule override for tcp_keepalive with empty value. - env = buildEnvForClustersWithTCPKeepalive(DestinationRuleForOsDefault) - clusters, err = configgen.BuildClusters(env, proxy, env.PushContext) - g.Expect(err).NotTo(gomega.HaveOccurred()) - g.Expect(len(clusters)).To(gomega.Equal(4)) + clusters, err = buildTestClustersWithTCPKeepalive(DestinationRuleForOsDefault) + g.Expect(err).NotTo(HaveOccurred()) + g.Expect(len(clusters)).To(Equal(5)) cluster = clusters[1] - g.Expect(cluster.Name).To(gomega.Equal("outbound|8080|foobar|foo.example.org")) + g.Expect(cluster.Name).To(Equal("outbound|8080|foobar|foo.example.org")) // TcpKeepalive should be present but with nil values. 
- g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive).NotTo(gomega.BeNil()) - g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveProbes).To(gomega.BeNil()) - g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveTime).To(gomega.BeNil()) - g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveInterval).To(gomega.BeNil()) + g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive).NotTo(BeNil()) + g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveProbes).To(BeNil()) + g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveTime).To(BeNil()) + g.Expect(cluster.UpstreamConnectionOptions.TcpKeepalive.KeepaliveInterval).To(BeNil()) } -func buildEnvForClustersWithTCPKeepalive(configType ConfigType) *model.Environment { - serviceDiscovery := &fakes.ServiceDiscovery{} - - serviceDiscovery.ServicesReturns([]*model.Service{ - { - Hostname: "foo.example.org", - Address: "1.1.1.1", - ClusterVIPs: make(map[string]string), - Ports: model.PortList{ - &model.Port{ - Name: "default", - Port: 8080, - Protocol: model.ProtocolHTTP, - }, - }, - }, - }, nil) - - meshConfig := &meshconfig.MeshConfig{ - ConnectTimeout: &types.Duration{ - Seconds: 10, - Nanos: 1, - }, - } - +func buildTestClustersWithTCPKeepalive(configType ConfigType) ([]*apiv2.Cluster, error) { // Set mesh wide defaults. 
+ mesh := testMesh if configType != None { - meshConfig.TcpKeepalive = &networking.ConnectionPoolSettings_TCPSettings_TcpKeepalive{ + mesh.TcpKeepalive = &networking.ConnectionPoolSettings_TCPSettings_TcpKeepalive{ Time: &types.Duration{ Seconds: MeshWideTCPKeepaliveSeconds, Nanos: 0, @@ -352,40 +356,22 @@ func buildEnvForClustersWithTCPKeepalive(configType ConfigType) *model.Environme destinationRuleTCPKeepalive = &networking.ConnectionPoolSettings_TCPSettings_TcpKeepalive{} } - configStore := &fakes.IstioConfigStore{} - - env := &model.Environment{ - ServiceDiscovery: serviceDiscovery, - ServiceAccounts: &fakes.ServiceAccounts{}, - IstioConfigStore: configStore, - Mesh: meshConfig, - MixerSAN: []string{}, - } - - env.PushContext = model.NewPushContext() - env.PushContext.InitContext(env) - env.PushContext.SetDestinationRules([]model.Config{ - {ConfigMeta: model.ConfigMeta{ - Type: model.DestinationRule.Type, - Version: model.DestinationRule.Version, - Name: "acme", - }, - Spec: &networking.DestinationRule{ - Host: "*.example.org", - Subsets: []*networking.Subset{ - { - Name: "foobar", - Labels: map[string]string{"foo": "bar"}, - TrafficPolicy: &networking.TrafficPolicy{ - PortLevelSettings: []*networking.TrafficPolicy_PortTrafficPolicy{ - { - Port: &networking.PortSelector{ - Port: &networking.PortSelector_Number{Number: 8080}, - }, - ConnectionPool: &networking.ConnectionPoolSettings{ - Tcp: &networking.ConnectionPoolSettings_TCPSettings{ - TcpKeepalive: destinationRuleTCPKeepalive, - }, + return buildTestClusters("foo.example.org", model.SidecarProxy, mesh, + &networking.DestinationRule{ + Host: "*.example.org", + Subsets: []*networking.Subset{ + { + Name: "foobar", + Labels: map[string]string{"foo": "bar"}, + TrafficPolicy: &networking.TrafficPolicy{ + PortLevelSettings: []*networking.TrafficPolicy_PortTrafficPolicy{ + { + Port: &networking.PortSelector{ + Port: &networking.PortSelector_Number{Number: 8080}, + }, + ConnectionPool: 
&networking.ConnectionPoolSettings{ + Tcp: &networking.ConnectionPoolSettings_TCPSettings{ + TcpKeepalive: destinationRuleTCPKeepalive, }, }, }, @@ -393,8 +379,5 @@ func buildEnvForClustersWithTCPKeepalive(configType ConfigType) *model.Environme }, }, }, - }, - }) - - return env + }) } diff --git a/pilot/pkg/networking/core/v1alpha3/configgen.go b/pilot/pkg/networking/core/v1alpha3/configgen.go index 1111c6b1b6ca..30066e075afb 100644 --- a/pilot/pkg/networking/core/v1alpha3/configgen.go +++ b/pilot/pkg/networking/core/v1alpha3/configgen.go @@ -26,9 +26,9 @@ import ( type ConfigGeneratorImpl struct { // List of plugins that modify code generated by this config generator Plugins []plugin.Plugin - // List of outbound clusters keyed by configNamespace + // List of outbound clusters keyed by configNamespace. For use by gateways only. // Must be rebuilt for each push epoch - PrecomputedOutboundClusters map[string][]*xdsapi.Cluster + PrecomputedOutboundClustersForGateways map[string][]*xdsapi.Cluster // TODO: add others in future } @@ -42,13 +42,18 @@ func (configgen *ConfigGeneratorImpl) BuildSharedPushState(env *model.Environmen namespaceMap := map[string]struct{}{} clustersByNamespace := map[string][]*xdsapi.Cluster{} + // We have two separate caches: one for the gateways and one for the sidecars. The caches for the sidecars are + // stored in the associated SidecarScope, while those for the gateways are stored here. + // TODO: unify this + + // List of all namespaces in the system services := push.Services(nil) for _, svc := range services { namespaceMap[svc.Attributes.Namespace] = struct{}{} } namespaceMap[""] = struct{}{} - // generate outbound for all namespaces in parallel. + // generate outbound clusters for all namespaces in parallel.
wg := &sync.WaitGroup{} mutex := &sync.Mutex{} wg.Add(len(namespaceMap)) @@ -57,6 +62,7 @@ func (configgen *ConfigGeneratorImpl) BuildSharedPushState(env *model.Environmen defer wg.Done() dummyNode := model.Proxy{ ConfigNamespace: ns, + Type: model.Router, } clusters := configgen.buildOutboundClusters(env, &dummyNode, push) mutex.Lock() @@ -66,7 +72,34 @@ func (configgen *ConfigGeneratorImpl) BuildSharedPushState(env *model.Environmen } wg.Wait() - configgen.PrecomputedOutboundClusters = clustersByNamespace + configgen.PrecomputedOutboundClustersForGateways = clustersByNamespace + return configgen.buildSharedPushStateForSidecars(env, push) +} + +func (configgen *ConfigGeneratorImpl) buildSharedPushStateForSidecars(env *model.Environment, push *model.PushContext) error { + sidecarsByNamespace := push.GetAllSidecarScopes() + + // generate outbound clusters for all namespaces in parallel. + wg := &sync.WaitGroup{} + wg.Add(len(sidecarsByNamespace)) + for ns, sidecarScopes := range sidecarsByNamespace { + go func(ns string, sidecarScopes []*model.SidecarScope) { + defer wg.Done() + for _, sc := range sidecarScopes { + dummyNode := model.Proxy{ + Type: model.SidecarProxy, + ConfigNamespace: ns, + SidecarScope: sc, + } + // If there is no user-supplied sidecar, then the output stored + // here will be the default CDS output for a given namespace based on public/private + // services and destination rules + sc.XDSOutboundClusters = configgen.buildOutboundClusters(env, &dummyNode, push) + } + }(ns, sidecarScopes) + } + wg.Wait() + return nil } diff --git a/pilot/pkg/networking/core/v1alpha3/envoyfilter.go b/pilot/pkg/networking/core/v1alpha3/envoyfilter.go index f3dc7190af86..803584a0bb46 100644 --- a/pilot/pkg/networking/core/v1alpha3/envoyfilter.go +++ b/pilot/pkg/networking/core/v1alpha3/envoyfilter.go @@ -152,7 +152,7 @@ func listenerMatch(in *plugin.InputParams, listenerIP net.IP,
networking.EnvoyFilter_ListenerMatch_SIDECAR_OUTBOUND: - if in.Node.Type != model.Sidecar { + if in.Node.Type != model.SidecarProxy { return false } } diff --git a/pilot/pkg/networking/core/v1alpha3/envoyfilter_test.go b/pilot/pkg/networking/core/v1alpha3/envoyfilter_test.go index a69a577c9884..974ccc7bc37c 100644 --- a/pilot/pkg/networking/core/v1alpha3/envoyfilter_test.go +++ b/pilot/pkg/networking/core/v1alpha3/envoyfilter_test.go @@ -27,7 +27,7 @@ func TestListenerMatch(t *testing.T) { inputParams := &plugin.InputParams{ ListenerProtocol: plugin.ListenerProtocolHTTP, Node: &model.Proxy{ - Type: model.Sidecar, + Type: model.SidecarProxy, }, Port: &model.Port{ Name: "http-foo", diff --git a/pilot/pkg/networking/core/v1alpha3/gateway.go b/pilot/pkg/networking/core/v1alpha3/gateway.go index c46e693915d8..4c0f00bc8491 100644 --- a/pilot/pkg/networking/core/v1alpha3/gateway.go +++ b/pilot/pkg/networking/core/v1alpha3/gateway.go @@ -78,7 +78,7 @@ func (configgen *ConfigGeneratorImpl) buildGatewayListeners(env *model.Environme opts := buildListenerOpts{ env: env, proxy: node, - ip: WildcardAddress, + bind: WildcardAddress, port: int(portNumber), bindToPort: true, } @@ -173,7 +173,9 @@ func (configgen *ConfigGeneratorImpl) buildGatewayListeners(env *model.Environme } func (configgen *ConfigGeneratorImpl) buildGatewayHTTPRouteConfig(env *model.Environment, node *model.Proxy, push *model.PushContext, - proxyInstances []*model.ServiceInstance, services []*model.Service, routeName string) (*xdsapi.RouteConfiguration, error) { + proxyInstances []*model.ServiceInstance, routeName string) (*xdsapi.RouteConfiguration, error) { + + services := push.Services(node) // collect workload labels var workloadLabels model.LabelsCollection diff --git a/pilot/pkg/networking/core/v1alpha3/httproute.go b/pilot/pkg/networking/core/v1alpha3/httproute.go index 9190d5b8dda3..d8df65401e0b 100644 --- a/pilot/pkg/networking/core/v1alpha3/httproute.go +++ 
b/pilot/pkg/networking/core/v1alpha3/httproute.go @@ -38,13 +38,11 @@ func (configgen *ConfigGeneratorImpl) BuildHTTPRoutes(env *model.Environment, no return nil, err } - services := push.Services(node) - switch node.Type { - case model.Sidecar: - return configgen.buildSidecarOutboundHTTPRouteConfig(env, node, push, proxyInstances, services, routeName), nil + case model.SidecarProxy: + return configgen.buildSidecarOutboundHTTPRouteConfig(env, node, push, proxyInstances, routeName), nil case model.Router, model.Ingress: - return configgen.buildGatewayHTTPRouteConfig(env, node, push, proxyInstances, services, routeName) + return configgen.buildGatewayHTTPRouteConfig(env, node, push, proxyInstances, routeName) } return nil, nil } @@ -54,10 +52,15 @@ func (configgen *ConfigGeneratorImpl) BuildHTTPRoutes(env *model.Environment, no func (configgen *ConfigGeneratorImpl) buildSidecarInboundHTTPRouteConfig(env *model.Environment, node *model.Proxy, push *model.PushContext, instance *model.ServiceInstance) *xdsapi.RouteConfiguration { - clusterName := model.BuildSubsetKey(model.TrafficDirectionInbound, "", + // In case of unix domain sockets, the service port will be 0. So use the port name to distinguish the + // inbound listeners that a user specifies in Sidecar. Otherwise, all inbound clusters will be the same. + // We use the port name as the subset in the inbound cluster for differentiation. 
It's fine to use port + // names here because the inbound clusters are not referred to anywhere in the API, unlike the outbound + // clusters, and these are static endpoint clusters used only for sidecar (proxy -> app) + clusterName := model.BuildSubsetKey(model.TrafficDirectionInbound, instance.Endpoint.ServicePort.Name, instance.Service.Hostname, instance.Endpoint.ServicePort.Port) traceOperation := fmt.Sprintf("%s:%d/*", instance.Service.Hostname, instance.Endpoint.ServicePort.Port) - defaultRoute := istio_route.BuildDefaultHTTPRoute(clusterName, traceOperation) + defaultRoute := istio_route.BuildDefaultHTTPInboundRoute(clusterName, traceOperation) inboundVHost := route.VirtualHost{ Name: fmt.Sprintf("%s|http|%d", model.TrafficDirectionInbound, instance.Endpoint.ServicePort.Port), @@ -89,35 +92,63 @@ func (configgen *ConfigGeneratorImpl) buildSidecarInboundHTTPRouteConfig(env *mo // buildSidecarOutboundHTTPRouteConfig builds an outbound HTTP Route for sidecar. // Based on port, will determine all virtual hosts that listen on the port. func (configgen *ConfigGeneratorImpl) buildSidecarOutboundHTTPRouteConfig(env *model.Environment, node *model.Proxy, push *model.PushContext, - proxyInstances []*model.ServiceInstance, services []*model.Service, routeName string) *xdsapi.RouteConfiguration { + proxyInstances []*model.ServiceInstance, routeName string) *xdsapi.RouteConfiguration { listenerPort := 0 - if routeName != RDSHttpProxy { - var err error - listenerPort, err = strconv.Atoi(routeName) - if err != nil { + var err error + listenerPort, err = strconv.Atoi(routeName) + if err != nil { + // we have a port whose name is http_proxy or unix:///foo/bar + // check for both.
+ if routeName != RDSHttpProxy && !strings.HasPrefix(routeName, model.UnixAddressPrefix) { + return nil + } + } + + var virtualServices []model.Config + var services []*model.Service + + // Get the list of services that correspond to this egressListener from the sidecarScope + sidecarScope := node.SidecarScope + // sidecarScope should never be nil + if sidecarScope != nil && sidecarScope.Config != nil { + // this is a user-supplied sidecar scope. Get the services from the egress listener + egressListener := sidecarScope.GetEgressListenerForRDS(listenerPort, routeName) + // We should never be getting a nil egress listener because the code that set up this RDS + // call obviously saw an egress listener + if egressListener == nil { return nil } + + services = egressListener.Services() + // To maintain correctness, we should only use the virtual services for + // this listener and not all virtual services accessible to this proxy. + virtualServices = egressListener.VirtualServices() + + // When generating RDS for ports created via the SidecarScope, we treat + // these ports as HTTP proxy style ports. All services attached to this listener + // must feature in this RDS route irrespective of the service port.
+ if egressListener.IstioListener != nil && egressListener.IstioListener.Port != nil { + listenerPort = 0 + } + } else { + meshGateway := map[string]bool{model.IstioMeshGateway: true} + services = push.Services(node) + virtualServices = push.VirtualServices(node, meshGateway) } nameToServiceMap := make(map[model.Hostname]*model.Service) for _, svc := range services { if listenerPort == 0 { + // Take all ports when listen port is 0 (http_proxy or uds) + // Expect virtualServices to resolve to right port nameToServiceMap[svc.Hostname] = svc } else { if svcPort, exists := svc.Ports.GetByPort(listenerPort); exists { - svc.Mutex.RLock() - clusterVIPs := make(map[string]string, len(svc.ClusterVIPs)) - for k, v := range svc.ClusterVIPs { - clusterVIPs[k] = v - } - svc.Mutex.RUnlock() - nameToServiceMap[svc.Hostname] = &model.Service{ Hostname: svc.Hostname, Address: svc.Address, - ClusterVIPs: clusterVIPs, MeshExternal: svc.MeshExternal, Ports: []*model.Port{svcPort}, } @@ -132,7 +163,7 @@ func (configgen *ConfigGeneratorImpl) buildSidecarOutboundHTTPRouteConfig(env *m } // Get list of virtual services bound to the mesh gateway - virtualHostWrappers := istio_route.BuildVirtualHostsFromConfigAndRegistry(node, push, nameToServiceMap, proxyLabels) + virtualHostWrappers := istio_route.BuildSidecarVirtualHostsFromConfigAndRegistry(node, push, nameToServiceMap, proxyLabels, virtualServices, listenerPort) vHostPortMap := make(map[int][]route.VirtualHost) for _, virtualHostWrapper := range virtualHostWrappers { @@ -162,7 +193,7 @@ func (configgen *ConfigGeneratorImpl) buildSidecarOutboundHTTPRouteConfig(env *m } var virtualHosts []route.VirtualHost - if routeName == RDSHttpProxy { + if listenerPort == 0 { virtualHosts = mergeAllVirtualHosts(vHostPortMap) } else { virtualHosts = vHostPortMap[listenerPort] diff --git a/pilot/pkg/networking/core/v1alpha3/httproute_test.go b/pilot/pkg/networking/core/v1alpha3/httproute_test.go index cdede568c921..6596793b578c 100644 --- 
a/pilot/pkg/networking/core/v1alpha3/httproute_test.go +++ b/pilot/pkg/networking/core/v1alpha3/httproute_test.go @@ -15,11 +15,14 @@ package v1alpha3 import ( + "fmt" "reflect" "sort" "testing" + networking "istio.io/api/networking/v1alpha3" "istio.io/istio/pilot/pkg/model" + "istio.io/istio/pilot/pkg/networking/plugin" ) func TestGenerateVirtualHostDomains(t *testing.T) { @@ -79,3 +82,240 @@ func TestGenerateVirtualHostDomains(t *testing.T) { } } } + +func TestSidecarOutboundHTTPRouteConfig(t *testing.T) { + services := []*model.Service{ + buildHTTPService("bookinfo.com", networking.ConfigScope_PUBLIC, wildcardIP, "default", 9999, 70), + buildHTTPService("private.com", networking.ConfigScope_PRIVATE, wildcardIP, "default", 9999, 80), + buildHTTPService("test.com", networking.ConfigScope_PUBLIC, "8.8.8.8", "not-default", 8080), + buildHTTPService("test-private.com", networking.ConfigScope_PRIVATE, "9.9.9.9", "not-default", 80, 70), + } + + sidecarConfig := &model.Config{ + ConfigMeta: model.ConfigMeta{ + Name: "foo", + Namespace: "not-default", + }, + Spec: &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Port: &networking.Port{ + // A port that is not in any of the services + Number: 9000, + Protocol: "HTTP", + Name: "something", + }, + Bind: "1.1.1.1", + Hosts: []string{"*/bookinfo.com"}, + }, + { + Port: &networking.Port{ + // Unix domain socket listener + Number: 0, + Protocol: "HTTP", + Name: "something", + }, + Bind: "unix://foo/bar/baz", + Hosts: []string{"*/bookinfo.com"}, + }, + { + Port: &networking.Port{ + // A port that is in one of the services + Number: 8080, + Protocol: "HTTP", + Name: "foo", + }, + Hosts: []string{"default/bookinfo.com", "not-default/test.com"}, + }, + { + // Wildcard egress importing from all namespaces + Hosts: []string{"*/*"}, + }, + }, + }, + } + + // With the config above, RDS should return a valid route for the following route names + // port 9000 - [bookinfo.com:9999], [bookinfo.com:70] but no 
bookinfo.com + // unix://foo/bar/baz - [bookinfo.com:9999], [bookinfo.com:70] but no bookinfo.com + // port 8080 - [bookinfo.com:9999], [bookinfo.com:70], [test.com:8080, 8.8.8.8:8080], but no bookinfo.com or test.com + // port 9999 - [bookinfo.com, bookinfo.com:9999] + // port 80 - [test-private.com, test-private.com:80, 9.9.9.9:80, 9.9.9.9] + // port 70 - [test-private.com, test-private.com:70, 9.9.9.9, 9.9.9.9:70], [bookinfo.com, bookinfo.com:70] + + // Without sidecar config [same as wildcard egress listener], expect routes + // 9999 - [bookinfo.com, bookinfo.com:9999], + // 8080 - [test.com, test.com:8080, 8.8.8.8:8080, 8.8.8.8] + // 80 - [test-private.com, test-private.com:80, 9.9.9.9:80, 9.9.9.9] + // 70 - [bookinfo.com, bookinfo.com:70],[test-private.com, test-private.com:70, 9.9.9.9:70, 9.9.9.9] + cases := []struct { + name string + routeName string + sidecarConfig *model.Config + // virtualHost Name and domains + expectedHosts map[string]map[string]bool + }{ + { + name: "sidecar config port that is not in any service", + routeName: "9000", + sidecarConfig: sidecarConfig, + expectedHosts: map[string]map[string]bool{ + "bookinfo.com:9999": {"bookinfo.com:9999": true}, + "bookinfo.com:70": {"bookinfo.com:70": true}, + }, + }, + { + name: "sidecar config with unix domain socket listener", + routeName: "unix://foo/bar/baz", + sidecarConfig: sidecarConfig, + expectedHosts: map[string]map[string]bool{ + "bookinfo.com:9999": {"bookinfo.com:9999": true}, + "bookinfo.com:70": {"bookinfo.com:70": true}, + }, + }, + { + name: "sidecar config port that is in one of the services", + routeName: "8080", + sidecarConfig: sidecarConfig, + expectedHosts: map[string]map[string]bool{ + "bookinfo.com:9999": {"bookinfo.com:9999": true}, + "bookinfo.com:70": {"bookinfo.com:70": true}, + "test.com:8080": {"test.com:8080": true, "8.8.8.8:8080": true}, + }, + }, + { + name: "wildcard egress importing from all namespaces: 9999", + routeName: "9999", + sidecarConfig: sidecarConfig, + 
expectedHosts: map[string]map[string]bool{ + "bookinfo.com:9999": {"bookinfo.com:9999": true, "bookinfo.com": true}, + }, + }, + { + name: "wildcard egress importing from all namespaces: 80", + routeName: "80", + sidecarConfig: sidecarConfig, + expectedHosts: map[string]map[string]bool{ + "test-private.com:80": { + "test-private.com": true, "test-private.com:80": true, "9.9.9.9": true, "9.9.9.9:80": true, + }, + }, + }, + { + name: "wildcard egress importing from all namespaces: 70", + routeName: "70", + sidecarConfig: sidecarConfig, + expectedHosts: map[string]map[string]bool{ + "test-private.com:70": { + "test-private.com": true, "test-private.com:70": true, "9.9.9.9": true, "9.9.9.9:70": true, + }, + "bookinfo.com:70": {"bookinfo.com": true, "bookinfo.com:70": true}, + }, + }, + { + name: "no sidecar config - import public service from other namespaces: 9999", + routeName: "9999", + sidecarConfig: nil, + expectedHosts: map[string]map[string]bool{ + "bookinfo.com:9999": {"bookinfo.com:9999": true, "bookinfo.com": true}, + }, + }, + { + name: "no sidecar config - import public service from other namespaces: 8080", + routeName: "8080", + sidecarConfig: nil, + expectedHosts: map[string]map[string]bool{ + "test.com:8080": { + "test.com:8080": true, "test.com": true, "8.8.8.8": true, "8.8.8.8:8080": true}, + }, + }, + { + name: "no sidecar config - import public services from other namespaces: 80", + routeName: "80", + sidecarConfig: nil, + expectedHosts: map[string]map[string]bool{ + "test-private.com:80": { + "test-private.com": true, "test-private.com:80": true, "9.9.9.9": true, "9.9.9.9:80": true, + }, + }, + }, + { + name: "no sidecar config - import public services from other namespaces: 70", + routeName: "70", + sidecarConfig: nil, + expectedHosts: map[string]map[string]bool{ + "test-private.com:70": { + "test-private.com": true, "test-private.com:70": true, "9.9.9.9": true, "9.9.9.9:70": true, + }, + "bookinfo.com:70": {"bookinfo.com": true, "bookinfo.com:70": 
true}, + }, + }, + } + + for _, c := range cases { + testSidecarRDSVHosts(t, c.name, services, c.sidecarConfig, c.routeName, c.expectedHosts) + } +} + +func testSidecarRDSVHosts(t *testing.T, testName string, services []*model.Service, sidecarConfig *model.Config, + routeName string, expectedHosts map[string]map[string]bool) { + t.Helper() + p := &fakePlugin{} + configgen := NewConfigGenerator([]plugin.Plugin{p}) + + env := buildListenerEnv(services) + + if err := env.PushContext.InitContext(&env); err != nil { + t.Fatalf("testSidecarRDSVhosts(%s): failed to initialize push context", testName) + } + if sidecarConfig == nil { + proxy.SidecarScope = model.DefaultSidecarScopeForNamespace(env.PushContext, "not-default") + } else { + proxy.SidecarScope = model.ConvertToSidecarScope(env.PushContext, sidecarConfig) + } + + route := configgen.buildSidecarOutboundHTTPRouteConfig(&env, &proxy, env.PushContext, proxyInstances, routeName) + if route == nil { + t.Fatalf("testSidecarRDSVhosts(%s): got nil route for %s", testName, routeName) + } + + for _, vhost := range route.VirtualHosts { + if _, found := expectedHosts[vhost.Name]; !found { + t.Fatalf("testSidecarRDSVhosts(%s): unexpected vhost block %s for route %s", testName, + vhost.Name, routeName) + } + for _, domain := range vhost.Domains { + if !expectedHosts[vhost.Name][domain] { + t.Fatalf("testSidecarRDSVhosts(%s): unexpected vhost domain %s in vhost %s, for route %s", + testName, domain, vhost.Name, routeName) + } + } + } +} + +func buildHTTPService(hostname string, scope networking.ConfigScope, ip, namespace string, ports ...int) *model.Service { + service := &model.Service{ + CreationTime: tnow, + Hostname: model.Hostname(hostname), + Address: ip, + ClusterVIPs: make(map[string]string), + Resolution: model.Passthrough, + Attributes: model.ServiceAttributes{ + Namespace: namespace, + ConfigScope: scope, + }, + } + + Ports := make([]*model.Port, 0) + + for _, p := range ports { + Ports = append(Ports, &model.Port{ + 
Name: fmt.Sprintf("http-%d", p), + Port: p, + Protocol: model.ProtocolHTTP, + }) + } + + service.Ports = Ports + return service +} diff --git a/pilot/pkg/networking/core/v1alpha3/listener.go b/pilot/pkg/networking/core/v1alpha3/listener.go index a17c48fbf966..dffed1b440b7 100644 --- a/pilot/pkg/networking/core/v1alpha3/listener.go +++ b/pilot/pkg/networking/core/v1alpha3/listener.go @@ -40,6 +40,7 @@ import ( "istio.io/istio/pilot/pkg/model" "istio.io/istio/pilot/pkg/networking/plugin" "istio.io/istio/pilot/pkg/networking/util" + "istio.io/istio/pkg/features/pilot" "istio.io/istio/pkg/log" "istio.io/istio/pkg/proto" ) @@ -162,7 +163,7 @@ var ListenersALPNProtocols = []string{"h2", "http/1.1"} // BuildListeners produces a list of listeners and referenced clusters for all proxies func (configgen *ConfigGeneratorImpl) BuildListeners(env *model.Environment, node *model.Proxy, push *model.PushContext) ([]*xdsapi.Listener, error) { switch node.Type { - case model.Sidecar: + case model.SidecarProxy: return configgen.buildSidecarListeners(env, node, push) case model.Router, model.Ingress: return configgen.buildGatewayListeners(env, node, push) @@ -181,48 +182,66 @@ func (configgen *ConfigGeneratorImpl) buildSidecarListeners(env *model.Environme return nil, err } - services := push.Services(node) - + noneMode := node.GetInterceptionMode() == model.InterceptionNone listeners := make([]*xdsapi.Listener, 0) if mesh.ProxyListenPort > 0 { inbound := configgen.buildSidecarInboundListeners(env, node, push, proxyInstances) - outbound := configgen.buildSidecarOutboundListeners(env, node, push, proxyInstances, services) + outbound := configgen.buildSidecarOutboundListeners(env, node, push, proxyInstances) listeners = append(listeners, inbound...) listeners = append(listeners, outbound...) 
- // Let ServiceDiscovery decide which IP and Port are used for management if - // there are multiple IPs - mgmtListeners := make([]*xdsapi.Listener, 0) - for _, ip := range node.IPAddresses { - managementPorts := env.ManagementPorts(ip) - management := buildSidecarInboundMgmtListeners(node, env, managementPorts, ip) - mgmtListeners = append(mgmtListeners, management...) + // Do not generate any management port listeners if the user has specified a SidecarScope object + // with ingress listeners. Specifying the ingress listener implies that the user wants + // to only have those specific listeners and nothing else, in the inbound path. + generateManagementListeners := true + + sidecarScope := node.SidecarScope + if sidecarScope != nil && sidecarScope.HasCustomIngressListeners || + noneMode { + generateManagementListeners = false } - // If management listener port and service port are same, bad things happen - // when running in kubernetes, as the probes stop responding. So, append - // non overlapping listeners only. - for i := range mgmtListeners { - m := mgmtListeners[i] - l := util.GetByAddress(listeners, m.Address.String()) - if l != nil { - log.Warnf("Omitting listener for management address %s (%s) due to collision with service listener %s (%s)", - m.Name, m.Address.String(), l.Name, l.Address.String()) - continue + if generateManagementListeners { + // Let ServiceDiscovery decide which IP and Port are used for management if + // there are multiple IPs + mgmtListeners := make([]*xdsapi.Listener, 0) + for _, ip := range node.IPAddresses { + managementPorts := env.ManagementPorts(ip) + management := buildSidecarInboundMgmtListeners(node, env, managementPorts, ip) + mgmtListeners = append(mgmtListeners, management...) + } + + // If management listener port and service port are same, bad things happen + // when running in kubernetes, as the probes stop responding. So, append + // non overlapping listeners only. 
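The comment above explains why colliding management listeners are dropped: if a management (health-probe) port overlaps a service port, kubelet probes stop responding. A minimal sketch of that collision filter, where `Listener` and `getByAddress` are simplified stand-ins for the Envoy xDS listener type and `util.GetByAddress`:

```go
package main

import "fmt"

// Listener is a simplified stand-in for the Envoy xDS listener type.
type Listener struct {
	Name    string
	Address string // "ip:port"
}

// getByAddress mimics util.GetByAddress: find an existing listener bound to the same address.
func getByAddress(listeners []*Listener, addr string) *Listener {
	for _, l := range listeners {
		if l.Address == addr {
			return l
		}
	}
	return nil
}

// appendNonConflicting appends management listeners, skipping any whose
// address collides with an already-built service listener (otherwise
// kubelet probes aimed at that port would stop responding).
func appendNonConflicting(listeners, mgmt []*Listener) []*Listener {
	for _, m := range mgmt {
		if l := getByAddress(listeners, m.Address); l != nil {
			fmt.Printf("omitting management listener %s: collides with %s\n", m.Name, l.Name)
			continue
		}
		listeners = append(listeners, m)
	}
	return listeners
}

func main() {
	svc := []*Listener{{Name: "svc-http", Address: "10.0.0.1:8080"}}
	mgmt := []*Listener{
		{Name: "mgmt-8080", Address: "10.0.0.1:8080"}, // collides, dropped
		{Name: "mgmt-9090", Address: "10.0.0.1:9090"}, // kept
	}
	out := appendNonConflicting(svc, mgmt)
	fmt.Println(len(out)) // 2: svc-http + mgmt-9090
}
```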
+ for i := range mgmtListeners { + m := mgmtListeners[i] + l := util.GetByAddress(listeners, m.Address.String()) + if l != nil { + log.Warnf("Omitting listener for management address %s (%s) due to collision with service listener %s (%s)", + m.Name, m.Address.String(), l.Name, l.Address.String()) + continue + } + listeners = append(listeners, m) } - listeners = append(listeners, m) } - // We need a passthrough filter to fill in the filter stack for orig_dst listener - passthroughTCPProxy := &tcp_proxy.TcpProxy{ - StatPrefix: util.PassthroughCluster, - ClusterSpecifier: &tcp_proxy.TcpProxy_Cluster{Cluster: util.PassthroughCluster}, + tcpProxy := &tcp_proxy.TcpProxy{ + StatPrefix: util.BlackHoleCluster, + ClusterSpecifier: &tcp_proxy.TcpProxy_Cluster{Cluster: util.BlackHoleCluster}, } + if mesh.OutboundTrafficPolicy.Mode == meshconfig.MeshConfig_OutboundTrafficPolicy_ALLOW_ANY { + // We need a passthrough filter to fill in the filter stack for orig_dst listener + tcpProxy = &tcp_proxy.TcpProxy{ + StatPrefix: util.PassthroughCluster, + ClusterSpecifier: &tcp_proxy.TcpProxy_Cluster{Cluster: util.PassthroughCluster}, + } + } var transparent *google_protobuf.BoolValue - if mode := node.Metadata["INTERCEPTION_MODE"]; mode == "TPROXY" { + if node.GetInterceptionMode() == model.InterceptionTproxy { transparent = proto.BoolTrue } @@ -238,7 +257,7 @@ func (configgen *ConfigGeneratorImpl) buildSidecarListeners(env *model.Environme { Name: xdsutil.TCPProxy, ConfigType: &listener.Filter_Config{ - Config: util.MessageToStruct(passthroughTCPProxy), + Config: util.MessageToStruct(tcpProxy), }, }, }, @@ -247,24 +266,22 @@ func (configgen *ConfigGeneratorImpl) buildSidecarListeners(env *model.Environme }) } + httpProxyPort := mesh.ProxyHttpPort + if httpProxyPort == 0 && noneMode { // make sure http proxy is enabled for 'none' interception. 
+ httpProxyPort = int32(pilot.DefaultPortHTTPProxy) + } // enable HTTP PROXY port if necessary; this will add an RDS route for this port - if mesh.ProxyHttpPort > 0 { + if httpProxyPort > 0 { useRemoteAddress := false traceOperation := http_conn.EGRESS listenAddress := LocalhostAddress - if node.Type == model.Router { - useRemoteAddress = true - traceOperation = http_conn.INGRESS - listenAddress = WildcardAddress - } - opts := buildListenerOpts{ env: env, proxy: node, proxyInstances: proxyInstances, - ip: listenAddress, - port: int(mesh.ProxyHttpPort), + bind: listenAddress, + port: int(httpProxyPort), filterChainOpts: []*filterChainOpts{{ httpOpts: &httpListenerOpts{ rds: RDSHttpProxy, @@ -312,136 +329,289 @@ func (configgen *ConfigGeneratorImpl) buildSidecarInboundListeners(env *model.En proxyInstances []*model.ServiceInstance) []*xdsapi.Listener { var listeners []*xdsapi.Listener - listenerMap := make(map[string]*model.ServiceInstance) - // inbound connections/requests are redirected to the endpoint address but appear to be sent - // to the service address. - for _, instance := range proxyInstances { - endpoint := instance.Endpoint - protocol := endpoint.ServicePort.Protocol - - // Local service instances can be accessed through one of three - // addresses: localhost, endpoint IP, and service - // VIP. Localhost bypasses the proxy and doesn't need any TCP - // route config. Endpoint IP is handled below and Service IP is handled - // by outbound routes. - // Traffic sent to our service VIP is redirected by remote - // services' kubeproxy to our specific endpoint IP. 
- listenerOpts := buildListenerOpts{ - env: env, - proxy: node, - proxyInstances: proxyInstances, - ip: endpoint.Address, - port: endpoint.Port, - } + listenerMap := make(map[string]*inboundListenerEntry) - listenerMapKey := fmt.Sprintf("%s:%d", endpoint.Address, endpoint.Port) - if old, exists := listenerMap[listenerMapKey]; exists { - push.Add(model.ProxyStatusConflictInboundListener, node.ID, node, - fmt.Sprintf("Rejected %s, used %s for %s", instance.Service.Hostname, old.Service.Hostname, listenerMapKey)) - // Skip building listener for the same ip port - continue - } - allChains := []plugin.FilterChain{} - var httpOpts *httpListenerOpts - var tcpNetworkFilters []listener.Filter - listenerProtocol := plugin.ModelProtocolToListenerProtocol(protocol) - pluginParams := &plugin.InputParams{ - ListenerProtocol: listenerProtocol, - ListenerCategory: networking.EnvoyFilter_ListenerMatch_SIDECAR_INBOUND, - Env: env, - Node: node, - ProxyInstances: proxyInstances, - ServiceInstance: instance, - Port: endpoint.ServicePort, - Push: push, + // If the user specifies a Sidecar CRD with an inbound listener, only construct that listener + // and not the ones from the proxyInstances + var proxyLabels model.LabelsCollection + for _, w := range proxyInstances { + proxyLabels = append(proxyLabels, w.Labels) + } + + sidecarScope := node.SidecarScope + noneMode := node.GetInterceptionMode() == model.InterceptionNone + + if sidecarScope == nil || !sidecarScope.HasCustomIngressListeners { + // There is no user supplied sidecarScope for this namespace + // Construct inbound listeners in the usual way by looking at the ports of the service instances + // attached to the proxy + // We should not create inbound listeners in NONE mode based on the service instances + // Doing so will prevent the workloads from starting as they would be listening on the same port + // Users are required to provide the sidecar config to define the inbound listeners + if node.GetInterceptionMode() == 
model.InterceptionNone { + return nil } - switch listenerProtocol { - case plugin.ListenerProtocolHTTP: - httpOpts = &httpListenerOpts{ - routeConfig: configgen.buildSidecarInboundHTTPRouteConfig(env, node, push, instance), - rds: "", // no RDS for inbound traffic - useRemoteAddress: false, - direction: http_conn.INGRESS, - connectionManager: &http_conn.HttpConnectionManager{ - // Append and forward client cert to backend. - ForwardClientCertDetails: http_conn.APPEND_FORWARD, - SetCurrentClientCertDetails: &http_conn.HttpConnectionManager_SetCurrentClientCertDetails{ - Subject: &google_protobuf.BoolValue{Value: true}, - Uri: true, - Dns: true, - }, - ServerName: EnvoyServerName, - }, + + // inbound connections/requests are redirected to the endpoint address but appear to be sent + // to the service address. + for _, instance := range proxyInstances { + endpoint := instance.Endpoint + bindToPort := false + bind := endpoint.Address + + // Local service instances can be accessed through one of three + // addresses: localhost, endpoint IP, and service + // VIP. Localhost bypasses the proxy and doesn't need any TCP + // route config. Endpoint IP is handled below and Service IP is handled + // by outbound routes. + // Traffic sent to our service VIP is redirected by remote + // services' kubeproxy to our specific endpoint IP. 
+ listenerOpts := buildListenerOpts{ + env: env, + proxy: node, + proxyInstances: proxyInstances, + proxyLabels: proxyLabels, + bind: bind, + port: endpoint.Port, + bindToPort: bindToPort, + } + + pluginParams := &plugin.InputParams{ + ListenerProtocol: plugin.ModelProtocolToListenerProtocol(endpoint.ServicePort.Protocol), + ListenerCategory: networking.EnvoyFilter_ListenerMatch_SIDECAR_INBOUND, + Env: env, + Node: node, + ProxyInstances: proxyInstances, + ServiceInstance: instance, + Port: endpoint.ServicePort, + Push: push, + Bind: bind, + } + + if l := configgen.buildSidecarInboundListenerForPortOrUDS(listenerOpts, pluginParams, listenerMap); l != nil { + listeners = append(listeners, l) } - // See https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway/examples/helloworld#configure-the-proxy - if endpoint.ServicePort.Protocol.IsHTTP2() { - httpOpts.connectionManager.Http2ProtocolOptions = &core.Http2ProtocolOptions{} - if endpoint.ServicePort.Protocol == model.ProtocolGRPCWeb { - httpOpts.addGRPCWebFilter = true + } + + } else { + rule := sidecarScope.Config.Spec.(*networking.Sidecar) + + for _, ingressListener := range rule.Ingress { + // determine the bindToPort setting for listeners + bindToPort := false + if noneMode { + // don't care what the listener's capture mode setting is. The proxy does not use iptables + bindToPort = true + } else { + // proxy uses iptables redirect or tproxy. If the mode is not set + // for older proxies, it defaults to iptables redirect. If the + // listener's capture mode specifies NONE, then the proxy wants + // this listener alone to be on a physical port. If the + // listener's capture mode is default, then it's the same as + // iptables i.e. bindToPort is false.
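The capture-mode rule spelled out in the comment above reduces to a small decision table. A sketch with simplified stand-ins for `model.TrafficInterceptionMode` and `networking.CaptureMode` (the enum names here are illustrative, not the real constants):

```go
package main

import "fmt"

// Simplified stand-ins for model's interception mode and the listener's capture mode.
type InterceptionMode int
type CaptureMode int

const (
	InterceptionRedirect InterceptionMode = iota // iptables REDIRECT (the default)
	InterceptionTproxy                           // iptables TPROXY
	InterceptionNone                             // no traffic capture at all
)

const (
	CaptureModeDefault CaptureMode = iota // inherit the proxy's interception mode
	CaptureModeIptables
	CaptureModeNone // this listener alone sits on a physical port
)

// bindToPort implements the rule from the comment above: with no proxy-wide
// capture, every listener must bind a real port; with iptables capture, only
// a listener that explicitly opts out (capture mode NONE) binds one.
func bindToPort(proxyMode InterceptionMode, listenerMode CaptureMode) bool {
	if proxyMode == InterceptionNone {
		return true
	}
	return listenerMode == CaptureModeNone
}

func main() {
	fmt.Println(bindToPort(InterceptionNone, CaptureModeDefault))     // true
	fmt.Println(bindToPort(InterceptionRedirect, CaptureModeDefault)) // false
	fmt.Println(bindToPort(InterceptionRedirect, CaptureModeNone))    // true
}
```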
+ if ingressListener.CaptureMode == networking.CaptureMode_NONE { + bindToPort = true + } } - case plugin.ListenerProtocolTCP: - tcpNetworkFilters = buildInboundNetworkFilters(env, node, instance) + listenPort := &model.Port{ + Port: int(ingressListener.Port.Number), + Protocol: model.ParseProtocol(ingressListener.Port.Protocol), + Name: ingressListener.Port.Name, + } - default: - log.Warnf("Unsupported inbound protocol %v for port %#v", protocol, endpoint.ServicePort) - continue - } + bind := ingressListener.Bind + // if bindToPort is true, we set the bind address if empty to 0.0.0.0 - this is an inbound port. + if len(bind) == 0 && bindToPort { + bind = WildcardAddress + } else if len(bind) == 0 { + // auto infer the IP from the proxyInstances + // We assume all endpoints in the proxy instances have the same IP + // as they should all be pointing to the same network endpoint + bind = proxyInstances[0].Endpoint.Address + } - for _, p := range configgen.Plugins { - chains := p.OnInboundFilterChains(pluginParams) - if len(chains) == 0 { - continue + listenerOpts := buildListenerOpts{ + env: env, + proxy: node, + proxyInstances: proxyInstances, + proxyLabels: proxyLabels, + bind: bind, + port: listenPort.Port, + bindToPort: bindToPort, } - if len(allChains) != 0 { - log.Warnf("Found two plugin setups inbound filter chains for listeners, FilterChainMatch may not work as intended!") + + // Construct a dummy service instance for this port so that the rest of the code doesn't freak out + // due to a missing instance. Technically this instance is not a service instance as it corresponds to + // some workload listener. But given that we force all workloads to be part of at least one service, + // let's create a service instance for this workload based on the first service associated with the workload. + // TODO: We are arbitrarily using the first proxyInstance. When a workload has multiple services bound to it, + // what happens?
We could run the loop for every instance but we would have the same listeners. + + // First create a copy of a service instance + instance := &model.ServiceInstance{ + Endpoint: proxyInstances[0].Endpoint, + Service: proxyInstances[0].Service, + Labels: proxyInstances[0].Labels, + ServiceAccount: proxyInstances[0].ServiceAccount, + } + + // Update the values here so that the plugins use the right ports + // uds values + // TODO: all plugins need to be updated to account for the fact that + // the port may be 0 but bind may have a UDS value + // Inboundroute will be different for + instance.Endpoint.Address = bind + instance.Endpoint.ServicePort = listenPort + // TODO: this should be parsed from the defaultEndpoint field in the ingressListener + instance.Endpoint.Port = listenPort.Port + + pluginParams := &plugin.InputParams{ + ListenerProtocol: plugin.ModelProtocolToListenerProtocol(listenPort.Protocol), + ListenerCategory: networking.EnvoyFilter_ListenerMatch_SIDECAR_INBOUND, + Env: env, + Node: node, + ProxyInstances: proxyInstances, + ServiceInstance: instance, + Port: listenPort, + Push: push, + Bind: bind, + } + + if l := configgen.buildSidecarInboundListenerForPortOrUDS(listenerOpts, pluginParams, listenerMap); l != nil { + listeners = append(listeners, l) } - allChains = append(allChains, chains...) } - // Construct the default filter chain. 
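The inbound listener map described here keys entries by `bind:port` and rejects a second listener on the same address, reporting which hostname already owns it. A minimal sketch of that conflict check (the `entry` struct here is a simplified stand-in for the patch's `inboundListenerEntry`):

```go
package main

import "fmt"

// entry is a simplified stand-in for the patch's inboundListenerEntry.
type entry struct {
	bind     string
	hostname string
}

// addInbound registers a listener keyed by "bind:port"; on a key collision it
// reports the conflict (the real code pushes a ProxyStatusConflictInboundListener
// metric) and keeps the first listener, mirroring the "skip same ip:port" behavior.
func addInbound(m map[string]*entry, bind string, port int, hostname string) error {
	key := fmt.Sprintf("%s:%d", bind, port)
	if old, exists := m[key]; exists {
		return fmt.Errorf("conflicting inbound listener %s: existing %s, incoming %s",
			key, old.hostname, hostname)
	}
	m[key] = &entry{bind: bind, hostname: hostname}
	return nil
}

func main() {
	m := map[string]*entry{}
	fmt.Println(addInbound(m, "10.0.0.1", 8080, "a.default.svc")) // nil: first wins
	fmt.Println(addInbound(m, "10.0.0.1", 8080, "b.default.svc")) // conflict error
}
```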
- if len(allChains) == 0 { - log.Infof("Use default filter chain for %v", endpoint) - // add one empty entry to the list so we generate a default listener below - allChains = []plugin.FilterChain{{}} + } + + return listeners +} + +// buildSidecarInboundListenerForPortOrUDS creates a single listener on the server-side (inbound) +// for a given port or unix domain socket +func (configgen *ConfigGeneratorImpl) buildSidecarInboundListenerForPortOrUDS(listenerOpts buildListenerOpts, + pluginParams *plugin.InputParams, listenerMap map[string]*inboundListenerEntry) *xdsapi.Listener { + + // Local service instances can be accessed through one of four addresses: + // unix domain socket, localhost, endpoint IP, and service + // VIP. Localhost bypasses the proxy and doesn't need any TCP + // route config. Endpoint IP is handled below and Service IP is handled + // by outbound routes. Traffic sent to our service VIP is redirected by + // remote services' kubeproxy to our specific endpoint IP. + listenerMapKey := fmt.Sprintf("%s:%d", listenerOpts.bind, pluginParams.Port.Port) + + if old, exists := listenerMap[listenerMapKey]; exists { + // For sidecar specified listeners, the caller is expected to supply a dummy service instance + // with the right port and a hostname constructed from the sidecar config's name+namespace + pluginParams.Push.Add(model.ProxyStatusConflictInboundListener, pluginParams.Node.ID, pluginParams.Node, + fmt.Sprintf("Conflicting inbound listener:%s. 
existing: %s, incoming: %s", listenerMapKey, + old.instanceHostname, pluginParams.ServiceInstance.Service.Hostname)) + + // Skip building listener for the same ip port + return nil + } + + allChains := []plugin.FilterChain{} + var httpOpts *httpListenerOpts + var tcpNetworkFilters []listener.Filter + + switch pluginParams.ListenerProtocol { + case plugin.ListenerProtocolHTTP: + httpOpts = &httpListenerOpts{ + routeConfig: configgen.buildSidecarInboundHTTPRouteConfig(pluginParams.Env, pluginParams.Node, + pluginParams.Push, pluginParams.ServiceInstance), + rds: "", // no RDS for inbound traffic + useRemoteAddress: false, + direction: http_conn.INGRESS, + connectionManager: &http_conn.HttpConnectionManager{ + // Append and forward client cert to backend. + ForwardClientCertDetails: http_conn.APPEND_FORWARD, + SetCurrentClientCertDetails: &http_conn.HttpConnectionManager_SetCurrentClientCertDetails{ + Subject: &google_protobuf.BoolValue{Value: true}, + Uri: true, + Dns: true, + }, + ServerName: EnvoyServerName, + }, } - for _, chain := range allChains { - listenerOpts.filterChainOpts = append(listenerOpts.filterChainOpts, &filterChainOpts{ - httpOpts: httpOpts, - networkFilters: tcpNetworkFilters, - tlsContext: chain.TLSContext, - match: chain.FilterChainMatch, - listenerFilters: chain.ListenerFilters, - }) + // See https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway/examples/helloworld#configure-the-proxy + if pluginParams.ServiceInstance.Endpoint.ServicePort.Protocol.IsHTTP2() { + httpOpts.connectionManager.Http2ProtocolOptions = &core.Http2ProtocolOptions{} + if pluginParams.ServiceInstance.Endpoint.ServicePort.Protocol == model.ProtocolGRPCWeb { + httpOpts.addGRPCWebFilter = true + } } - // call plugins - l := buildListener(listenerOpts) - mutable := &plugin.MutableObjects{ - Listener: l, - FilterChains: make([]plugin.FilterChain, len(l.FilterChains)), + case plugin.ListenerProtocolTCP: + tcpNetworkFilters = buildInboundNetworkFilters(pluginParams.Env, 
pluginParams.Node, pluginParams.ServiceInstance) + + default: + log.Warnf("Unsupported inbound protocol %v for port %#v", pluginParams.ListenerProtocol, + pluginParams.ServiceInstance.Endpoint.ServicePort) + return nil + } + + for _, p := range configgen.Plugins { + chains := p.OnInboundFilterChains(pluginParams) + if len(chains) == 0 { + continue } - for _, p := range configgen.Plugins { - if err := p.OnInboundListener(pluginParams, mutable); err != nil { - log.Warn(err.Error()) - } + if len(allChains) != 0 { + log.Warnf("Found two plugin setups inbound filter chains for listeners, FilterChainMatch may not work as intended!") } - // Filters are serialized one time into an opaque struct once we have the complete list. - if err := buildCompleteFilterChain(pluginParams, mutable, listenerOpts); err != nil { - log.Warna("buildSidecarInboundListeners ", err.Error()) - } else { - listeners = append(listeners, mutable.Listener) - listenerMap[listenerMapKey] = instance + allChains = append(allChains, chains...) + } + // Construct the default filter chain. 
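The plugin loop above aggregates filter chains from every plugin, warns when more than one plugin contributes (since `FilterChainMatch` behavior then becomes ambiguous), and falls back to one empty chain so a default listener is still built. A rough sketch of that aggregation, with `FilterChain`/`Plugin` as simplified stand-ins for the `plugin` package types:

```go
package main

import "fmt"

// FilterChain is a minimal stand-in for plugin.FilterChain.
type FilterChain struct{ name string }

// A Plugin can contribute zero or more filter chains for an inbound listener.
type Plugin interface {
	OnInboundFilterChains() []FilterChain
}

// noop contributes nothing, like most plugins for most listeners.
type noop struct{}

func (noop) OnInboundFilterChains() []FilterChain { return nil }

// mtls contributes two chains (e.g. TLS and plaintext permissive-mode chains).
type mtls struct{}

func (mtls) OnInboundFilterChains() []FilterChain {
	return []FilterChain{{name: "tls"}, {name: "plaintext"}}
}

// collectChains mirrors the aggregation loop above: gather chains from every
// plugin, warn if a second plugin also contributed, and fall back to a single
// empty chain so a default listener is generated.
func collectChains(plugins []Plugin) []FilterChain {
	var all []FilterChain
	for _, p := range plugins {
		chains := p.OnInboundFilterChains()
		if len(chains) == 0 {
			continue
		}
		if len(all) != 0 {
			fmt.Println("warning: multiple plugins supplied inbound filter chains")
		}
		all = append(all, chains...)
	}
	if len(all) == 0 {
		all = []FilterChain{{}} // default filter chain
	}
	return all
}

func main() {
	fmt.Println(len(collectChains([]Plugin{noop{}})))  // 1: the default chain
	fmt.Println(len(collectChains([]Plugin{mtls{}}))) // 2: plugin-supplied chains
}
```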
+ if len(allChains) == 0 { + log.Infof("Use default filter chain for %v", pluginParams.ServiceInstance.Endpoint) + // add one empty entry to the list so we generate a default listener below + allChains = []plugin.FilterChain{{}} + } + for _, chain := range allChains { + listenerOpts.filterChainOpts = append(listenerOpts.filterChainOpts, &filterChainOpts{ + httpOpts: httpOpts, + networkFilters: tcpNetworkFilters, + tlsContext: chain.TLSContext, + match: chain.FilterChainMatch, + listenerFilters: chain.ListenerFilters, + }) + } + + // call plugins + l := buildListener(listenerOpts) + mutable := &plugin.MutableObjects{ + Listener: l, + FilterChains: make([]plugin.FilterChain, len(l.FilterChains)), + } + for _, p := range configgen.Plugins { + if err := p.OnInboundListener(pluginParams, mutable); err != nil { + log.Warn(err.Error()) } } - return listeners + // Filters are serialized one time into an opaque struct once we have the complete list. + if err := buildCompleteFilterChain(pluginParams, mutable, listenerOpts); err != nil { + log.Warna("buildSidecarInboundListeners ", err.Error()) + return nil + } + + listenerMap[listenerMapKey] = &inboundListenerEntry{ + bind: listenerOpts.bind, + instanceHostname: pluginParams.ServiceInstance.Service.Hostname, + } + return mutable.Listener } -type listenerEntry struct { - // TODO: Clean this up +type inboundListenerEntry struct { + bind string + instanceHostname model.Hostname // could be empty if generated via Sidecar CRD +} + +type outboundListenerEntry struct { services []*model.Service servicePort *model.Port + bind string listener *xdsapi.Listener + locked bool } func protocolName(p model.Protocol) string { @@ -484,7 +654,8 @@ func (c outboundListenerConflict) addMetric(push *model.PushContext) { len(c.currentServices))) } -// buildSidecarOutboundListeners generates http and tcp listeners for outbound connections from the service instance +// buildSidecarOutboundListeners generates http and tcp listeners for +// outbound 
connections from the proxy based on the sidecar scope associated with the proxy. // TODO(github.com/istio/pilot/issues/237) // // Sharing tcp_proxy and http_connection_manager filters on the same port for @@ -498,245 +669,225 @@ func (c outboundListenerConflict) addMetric(push *model.PushContext) { // Connections to the ports of non-load balanced services are directed to // the connection's original destination. This avoids costly queries of instance // IPs and ports, but requires that ports of non-load balanced service be unique. -func (configgen *ConfigGeneratorImpl) buildSidecarOutboundListeners(env *model.Environment, node *model.Proxy, push *model.PushContext, - proxyInstances []*model.ServiceInstance, services []*model.Service) []*xdsapi.Listener { +func (configgen *ConfigGeneratorImpl) buildSidecarOutboundListeners(env *model.Environment, node *model.Proxy, + push *model.PushContext, proxyInstances []*model.ServiceInstance) []*xdsapi.Listener { var proxyLabels model.LabelsCollection for _, w := range proxyInstances { proxyLabels = append(proxyLabels, w.Labels) } - meshGateway := map[string]bool{model.IstioMeshGateway: true} - configs := push.VirtualServices(node, meshGateway) + sidecarScope := node.SidecarScope + noneMode := node.GetInterceptionMode() == model.InterceptionNone var tcpListeners, httpListeners []*xdsapi.Listener // For conflict resolution - listenerMap := make(map[string]*listenerEntry) - for _, service := range services { - for _, servicePort := range service.Ports { - listenAddress := WildcardAddress - var destinationIPAddress string - var listenerMapKey string - var currentListenerEntry *listenerEntry - listenerOpts := buildListenerOpts{ - env: env, - proxy: node, - proxyInstances: proxyInstances, - ip: WildcardAddress, - port: servicePort.Port, - } + listenerMap := make(map[string]*outboundListenerEntry) + + if sidecarScope == nil || sidecarScope.Config == nil { + // this namespace has no sidecar scope. 
Construct listeners in the old way + services := push.Services(node) + meshGateway := map[string]bool{model.IstioMeshGateway: true} + virtualServices := push.VirtualServices(node, meshGateway) + + // determine the bindToPort setting for listeners + bindToPort := false + if noneMode { + bindToPort = true + } - pluginParams := &plugin.InputParams{ - ListenerProtocol: plugin.ModelProtocolToListenerProtocol(servicePort.Protocol), - ListenerCategory: networking.EnvoyFilter_ListenerMatch_SIDECAR_OUTBOUND, - Env: env, - Node: node, - ProxyInstances: proxyInstances, - Service: service, - Port: servicePort, - Push: push, - } - switch pluginParams.ListenerProtocol { - case plugin.ListenerProtocolHTTP: - listenerMapKey = fmt.Sprintf("%s:%d", listenAddress, servicePort.Port) - var exists bool - // Check if this HTTP listener conflicts with an existing wildcard TCP listener - // i.e. one of NONE resolution type, since we collapse all HTTP listeners into - // a single 0.0.0.0:port listener and use vhosts to distinguish individual http - // services in that port - if currentListenerEntry, exists = listenerMap[listenerMapKey]; exists { - if !currentListenerEntry.servicePort.Protocol.IsHTTP() { - outboundListenerConflict{ - metric: model.ProxyStatusConflictOutboundListenerTCPOverHTTP, - node: node, - listenerName: listenerMapKey, - currentServices: currentListenerEntry.services, - currentProtocol: currentListenerEntry.servicePort.Protocol, - newHostname: service.Hostname, - newProtocol: servicePort.Protocol, - }.addMetric(push) - } - // Skip building listener for the same http port - currentListenerEntry.services = append(currentListenerEntry.services, service) + bind := "" + if bindToPort { + bind = LocalhostAddress + } + for _, service := range services { + for _, servicePort := range service.Ports { + // if the workload has NONE mode interception, then we generate TCP ports only + // Skip generating HTTP listeners, as we will generate a single HTTP proxy + if bindToPort && 
servicePort.Protocol.IsHTTP() { continue } - listenerOpts.filterChainOpts = []*filterChainOpts{{ - httpOpts: &httpListenerOpts{ - rds: fmt.Sprintf("%d", servicePort.Port), - useRemoteAddress: false, - direction: http_conn.EGRESS, - }, - }} - case plugin.ListenerProtocolTCP: - // Determine the listener address - // we listen on the service VIP if and only - // if the address is an IP address. If its a CIDR, we listen on - // 0.0.0.0, and setup a filter chain match for the CIDR range. - // As a small optimization, CIDRs with /32 prefix will be converted - // into listener address so that there is a dedicated listener for this - // ip:port. This will reduce the impact of a listener reload - - svcListenAddress := service.GetServiceAddressForProxy(node) - // We should never get an empty address. - // This is a safety guard, in case some platform adapter isn't doing things - // properly - if len(svcListenAddress) > 0 { - if !strings.Contains(svcListenAddress, "/") { - listenAddress = svcListenAddress - } else { - // Address is a CIDR. Fall back to 0.0.0.0 and - // filter chain match - destinationIPAddress = svcListenAddress - } + listenerOpts := buildListenerOpts{ + env: env, + proxy: node, + proxyInstances: proxyInstances, + proxyLabels: proxyLabels, + port: servicePort.Port, + bind: bind, + bindToPort: bindToPort, } - listenerMapKey = fmt.Sprintf("%s:%d", listenAddress, servicePort.Port) - var exists bool - // Check if this TCP listener conflicts with an existing HTTP listener on 0.0.0.0:Port - if currentListenerEntry, exists = listenerMap[listenerMapKey]; exists { - // Check for port collisions between TCP/TLS and HTTP. - // If configured correctly, TCP/TLS ports may not collide. - // We'll need to do additional work to find out if there is a collision within TCP/TLS. 
- if !currentListenerEntry.servicePort.Protocol.IsTCP() { - outboundListenerConflict{ - metric: model.ProxyStatusConflictOutboundListenerHTTPOverTCP, - node: node, - listenerName: listenerMapKey, - currentServices: currentListenerEntry.services, - currentProtocol: currentListenerEntry.servicePort.Protocol, - newHostname: service.Hostname, - newProtocol: servicePort.Protocol, - }.addMetric(push) - continue - } - // WE have a collision with another TCP port. - // This can happen only if the service is listening on 0.0.0.0: - // which is the case for headless services, or non-k8s services that do not have a VIP. - // Unfortunately we won't know if this is a real conflict or not - // until we process the VirtualServices, etc. - // The conflict resolution is done later in this code + pluginParams := &plugin.InputParams{ + ListenerProtocol: plugin.ModelProtocolToListenerProtocol(servicePort.Protocol), + ListenerCategory: networking.EnvoyFilter_ListenerMatch_SIDECAR_OUTBOUND, + Env: env, + Node: node, + ProxyInstances: proxyInstances, + Push: push, + Bind: bind, + Port: servicePort, + Service: service, } - listenerOpts.filterChainOpts = buildSidecarOutboundTCPTLSFilterChainOpts(env, node, push, configs, - destinationIPAddress, service, servicePort, proxyLabels, meshGateway) - default: - // UDP or other protocols: no need to log, it's too noisy - continue + configgen.buildSidecarOutboundListenerForPortOrUDS(listenerOpts, pluginParams, listenerMap, virtualServices) + } + } + } else { + // The sidecarConfig if provided could filter the list of + // services/virtual services that we need to process. It could also + // define one or more listeners with specific ports. Once we generate + // listeners for these user specified ports, we will auto generate + // configs for other ports if and only if the sidecarConfig has an + // egressListener on wildcard port. 
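The egress-listener handling described next relies on a validation invariant: at most one wildcard (catch-all) egress listener, and it must come last. A sketch of that check under the simplifying assumption that a zero port models a listener with no port specified:

```go
package main

import (
	"errors"
	"fmt"
)

// EgressListener is a simplified stand-in; Port == 0 models an egress
// listener with no port specified, i.e. the catch-all listener.
type EgressListener struct{ Port int }

// validateEgress enforces the invariant: any catch-all egress listener
// must be the final listener, which also implies there is at most one.
func validateEgress(listeners []EgressListener) error {
	for i, l := range listeners {
		if l.Port == 0 && i != len(listeners)-1 {
			return errors.New("catch-all egress listener must be the last listener")
		}
	}
	return nil
}

func main() {
	fmt.Println(validateEgress([]EgressListener{{Port: 9080}, {Port: 0}})) // nil: valid
	fmt.Println(validateEgress([]EgressListener{{Port: 0}, {Port: 9080}})) // error
}
```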
+ // + // Validation will ensure that we have at most one wildcard egress listener + // occurring at the end + + // Add listeners based on the config in the sidecar.EgressListeners if + // no Sidecar CRD is provided for this config namespace, + // push.SidecarScope will generate a default catch all egress listener. + for _, egressListener := range sidecarScope.EgressListeners { + + services := egressListener.Services() + virtualServices := egressListener.VirtualServices() + + // determine the bindToPort setting for listeners + bindToPort := false + if node.GetInterceptionMode() == model.InterceptionNone { + // don't care what the listener's capture mode setting is. The proxy does not use iptables + bindToPort = true + } else { + // proxy uses iptables redirect or tproxy. If the mode is not set + // for older proxies, it defaults to iptables redirect. If the + // listener's capture mode specifies NONE, then the proxy wants + // this listener alone to be on a physical port. If the + // listener's capture mode is default, then it's the same as + // iptables i.e. bindToPort is false. + if egressListener.IstioListener != nil && + egressListener.IstioListener.CaptureMode == networking.CaptureMode_NONE { + bindToPort = true + } + } + if egressListener.IstioListener != nil && + egressListener.IstioListener.Port != nil { + // We have a non catch all listener on some user specified port + // The user specified port may or may not match a service port. + // If it does not match any service port, then we expect the + // user to provide a virtualService that will route to a proper + // Service. This is the reason why we can't reuse the big + // for loop logic below as it iterates over all services and + // their service ports.
+ + listenPort := &model.Port{ + Port: int(egressListener.IstioListener.Port.Number), + Protocol: model.ParseProtocol(egressListener.IstioListener.Port.Protocol), + Name: egressListener.IstioListener.Port.Name, + } - // call plugins - listenerOpts.ip = listenAddress - l := buildListener(listenerOpts) - mutable := &plugin.MutableObjects{ - Listener: l, - FilterChains: make([]plugin.FilterChain, len(l.FilterChains)), - } + // user can specify a Port without a bind address. If IPtables capture mode + // (i.e bind to port is false) we bind to 0.0.0.0. Else, for NONE mode we bind to 127.0.0.1 + // We cannot auto infer bind IPs here from the imported services as the user could specify + // some random port and import 100s of multi-port services. Our behavior for HTTP is that + // when the user explicitly specifies a port, we establish a HTTP proxy on that port for + // the imported services. For TCP, the user would have to specify a virtualService for the + // imported Service, mapping from the listenPort to some specific service port + bind := egressListener.IstioListener.Bind + // if bindToPort is true, we set the bind address if empty to 127.0.0.1 + if len(bind) == 0 { + if bindToPort { + bind = LocalhostAddress + } else { + bind = WildcardAddress + } + } - for _, p := range configgen.Plugins { - if err := p.OnOutboundListener(pluginParams, mutable); err != nil { - log.Warn(err.Error()) + listenerOpts := buildListenerOpts{ + env: env, + proxy: node, + proxyInstances: proxyInstances, + proxyLabels: proxyLabels, + bind: bind, + port: listenPort.Port, + bindToPort: bindToPort, } - } - // Filters are serialized one time into an opaque struct once we have the complete list. 
- if err := buildCompleteFilterChain(pluginParams, mutable, listenerOpts); err != nil { - log.Warna("buildSidecarOutboundListeners: ", err.Error()) - continue - } + pluginParams := &plugin.InputParams{ + ListenerProtocol: plugin.ModelProtocolToListenerProtocol(listenPort.Protocol), + ListenerCategory: networking.EnvoyFilter_ListenerMatch_SIDECAR_OUTBOUND, + Env: env, + Node: node, + ProxyInstances: proxyInstances, + Push: push, + Bind: bind, + Port: listenPort, + } - // TODO(rshriram) merge multiple identical filter chains with just a single destination CIDR based - // filter chain matche, into a single filter chain and array of destinationcidr matches - - // We checked TCP over HTTP, and HTTP over TCP conflicts above. - // The code below checks for TCP over TCP conflicts and merges listeners - if currentListenerEntry != nil { - // merge the newly built listener with the existing listener - // if and only if the filter chains have distinct conditions - // Extract the current filter chain matches - // For every new filter chain match being added, check if any previous match is same - // if so, skip adding this filter chain with a warning - // This is very unoptimized. - newFilterChains := make([]listener.FilterChain, 0, - len(currentListenerEntry.listener.FilterChains)+len(mutable.Listener.FilterChains)) - newFilterChains = append(newFilterChains, currentListenerEntry.listener.FilterChains...) - for _, incomingFilterChain := range mutable.Listener.FilterChains { - conflictFound := false - - compareWithExisting: - for _, existingFilterChain := range currentListenerEntry.listener.FilterChains { - if existingFilterChain.FilterChainMatch == nil { - // This is a catch all filter chain. 
- // We can only merge with a non-catch all filter chain - // Else mark it as conflict - if incomingFilterChain.FilterChainMatch == nil { - conflictFound = true - outboundListenerConflict{ - metric: model.ProxyStatusConflictOutboundListenerTCPOverTCP, - node: node, - listenerName: listenerMapKey, - currentServices: currentListenerEntry.services, - currentProtocol: currentListenerEntry.servicePort.Protocol, - newHostname: service.Hostname, - newProtocol: servicePort.Protocol, - }.addMetric(push) - break compareWithExisting - } else { - continue - } - } - if incomingFilterChain.FilterChainMatch == nil { - continue + configgen.buildSidecarOutboundListenerForPortOrUDS(listenerOpts, pluginParams, listenerMap, virtualServices) + + } else { + // This is a catch all egress listener with no port. This + // should be the last egress listener in the sidecar + // Scope. Construct a listener for each service and service + // port, if and only if this port was not specified in any of + // the preceding listeners from the sidecarScope. This allows + // users to specify a trimmed set of services for one or more + // listeners and then add a catch all egress listener for all + // other ports. Doing so allows people to restrict the set of + // services exposed on one or more listeners, and avoid hard + // port conflicts like tcp taking over http or http taking over + // tcp, or simply specify that of all the listeners that Istio + // generates, the user would like to have only specific sets of + // services exposed on a particular listener. + // + // To ensure that we do not add anything to listeners we have + // already generated, run through the outboundListenerEntry map and set + // the locked bit to true. 
+ // buildSidecarOutboundListenerForPortOrUDS will not add/merge + // any HTTP/TCP listener if there is already a outboundListenerEntry + // with locked bit set to true + for _, e := range listenerMap { + e.locked = true + } + + bind := "" + if bindToPort { + bind = LocalhostAddress + } + for _, service := range services { + for _, servicePort := range service.Ports { + listenerOpts := buildListenerOpts{ + env: env, + proxy: node, + proxyInstances: proxyInstances, + proxyLabels: proxyLabels, + port: servicePort.Port, + bind: bind, + bindToPort: bindToPort, } - // We have two non-catch all filter chains. Check for duplicates - if reflect.DeepEqual(*existingFilterChain.FilterChainMatch, *incomingFilterChain.FilterChainMatch) { - conflictFound = true - outboundListenerConflict{ - metric: model.ProxyStatusConflictOutboundListenerTCPOverTCP, - node: node, - listenerName: listenerMapKey, - currentServices: currentListenerEntry.services, - currentProtocol: currentListenerEntry.servicePort.Protocol, - newHostname: service.Hostname, - newProtocol: servicePort.Protocol, - }.addMetric(push) - break compareWithExisting + pluginParams := &plugin.InputParams{ + ListenerProtocol: plugin.ModelProtocolToListenerProtocol(servicePort.Protocol), + ListenerCategory: networking.EnvoyFilter_ListenerMatch_SIDECAR_OUTBOUND, + Env: env, + Node: node, + ProxyInstances: proxyInstances, + Push: push, + Bind: bind, + Port: servicePort, + Service: service, } - } - if !conflictFound { - // There is no conflict with any filter chain in the existing listener. 
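The locked-bit gating described above can be sketched standalone. `outEntry`, `lockAll`, and `tryAdd` are illustrative stand-ins for `outboundListenerEntry` and the surrounding logic; the real code merges with unlocked entries rather than replacing them:

```go
package main

import "fmt"

// outEntry loosely mirrors outboundListenerEntry's locked bit.
type outEntry struct {
	port   int
	locked bool
}

// lockAll marks every already-generated listener entry so the
// catch-all pass cannot add to or merge with it.
func lockAll(m map[string]*outEntry) {
	for _, e := range m {
		e.locked = true
	}
}

// tryAdd adds a listener for key only when no locked entry occupies it.
// (Simplification: the real code merges with an existing unlocked entry.)
func tryAdd(m map[string]*outEntry, key string, port int) bool {
	if e, ok := m[key]; ok && e.locked {
		return false // explicit user listener wins; skip
	}
	m[key] = &outEntry{port: port}
	return true
}

func main() {
	m := map[string]*outEntry{"0.0.0.0:9000": {port: 9000}}
	lockAll(m)
	fmt.Println(tryAdd(m, "0.0.0.0:9000", 9000)) // false: port taken by explicit listener
	fmt.Println(tryAdd(m, "0.0.0.0:8080", 8080)) // true: new port
}
```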
- // So append the new filter chains to the existing listener's filter chains - newFilterChains = append(newFilterChains, incomingFilterChain) - lEntry := listenerMap[listenerMapKey] - lEntry.services = append(lEntry.services, service) + configgen.buildSidecarOutboundListenerForPortOrUDS(listenerOpts, pluginParams, listenerMap, virtualServices) } } - currentListenerEntry.listener.FilterChains = newFilterChains - } else { - listenerMap[listenerMapKey] = &listenerEntry{ - services: []*model.Service{service}, - servicePort: servicePort, - listener: mutable.Listener, - } - } - - if log.DebugEnabled() && len(mutable.Listener.FilterChains) > 1 || currentListenerEntry != nil { - var numChains int - if currentListenerEntry != nil { - numChains = len(currentListenerEntry.listener.FilterChains) - } else { - numChains = len(mutable.Listener.FilterChains) - } - log.Debugf("buildSidecarOutboundListeners: multiple filter chain listener %s with %d chains", mutable.Listener.Name, numChains) } } } - + // Now validate all the listeners. Collate the tcp listeners first and then the HTTP listeners + // TODO: This is going to be bad for caching as the order of listeners in tcpListeners or httpListeners is not + // guaranteed. for name, l := range listenerMap { if err := l.listener.Validate(); err != nil { log.Warnf("buildSidecarOutboundListeners: error validating listener %s (type %v): %v", name, l.servicePort.Protocol, err) @@ -753,6 +904,316 @@ func (configgen *ConfigGeneratorImpl) buildSidecarOutboundListeners(env *model.E return append(tcpListeners, httpListeners...) } +// buildSidecarOutboundListenerForPortOrUDS builds a single listener and +// adds it to the listenerMap provided by the caller. Listeners are added +// if one doesn't already exist. HTTP listeners on same port are ignored +// (as vhosts are shipped through RDS). TCP listeners on same port are +// allowed only if they have different CIDR matches. 
+func (configgen *ConfigGeneratorImpl) buildSidecarOutboundListenerForPortOrUDS(listenerOpts buildListenerOpts, + pluginParams *plugin.InputParams, listenerMap map[string]*outboundListenerEntry, virtualServices []model.Config) { + + var destinationIPAddress string + var listenerMapKey string + var currentListenerEntry *outboundListenerEntry + + switch pluginParams.ListenerProtocol { + case plugin.ListenerProtocolHTTP: + // First identify the bind if it's not set. Then construct the key + // used to look up the listener in the conflict map. + if len(listenerOpts.bind) == 0 { // no user-specified bind. Use 0.0.0.0:Port + listenerOpts.bind = WildcardAddress + } + listenerMapKey = fmt.Sprintf("%s:%d", listenerOpts.bind, pluginParams.Port.Port) + + var exists bool + + // Have we already generated a listener for this port based on + // user-specified listener ports? If so, we should not add any more HTTP + // services to the port. The user could have specified a sidecar + // resource with one or more explicit ports and then added a + // catch-all listener, implying: add all other ports as usual. When we are + // iterating through the services for a catch-all egress listener, + // the caller would have set the locked bit for each listener entry + // in the map. + // + // Check if this HTTP listener conflicts with an existing TCP + // listener. We could have listener conflicts occur on unix domain + // sockets, or on IP binds. Specifically, it's common to see + // conflicts on binds for the wildcard address when a service has NONE + // resolution type, since we collapse all HTTP listeners into a + // single 0.0.0.0:port listener and use vhosts to distinguish + // individual HTTP services on that port. + if currentListenerEntry, exists = listenerMap[listenerMapKey]; exists { + // NOTE: This is not a conflict. This is simply filtering the + // services for a given listener explicitly.
+ if currentListenerEntry.locked { + return + } + if pluginParams.Service != nil { + if !currentListenerEntry.servicePort.Protocol.IsHTTP() { + outboundListenerConflict{ + metric: model.ProxyStatusConflictOutboundListenerTCPOverHTTP, + node: pluginParams.Node, + listenerName: listenerMapKey, + currentServices: currentListenerEntry.services, + currentProtocol: currentListenerEntry.servicePort.Protocol, + newHostname: pluginParams.Service.Hostname, + newProtocol: pluginParams.Port.Protocol, + }.addMetric(pluginParams.Push) + } + + // Skip building a listener for the same HTTP port + currentListenerEntry.services = append(currentListenerEntry.services, pluginParams.Service) + } + return + } + + // No conflicts. Add an HTTP filter chain option to the listenerOpts + var rdsName string + if pluginParams.Port.Port == 0 { + rdsName = listenerOpts.bind // use the UDS path as the RDS name + } else { + rdsName = fmt.Sprintf("%d", pluginParams.Port.Port) + } + listenerOpts.filterChainOpts = []*filterChainOpts{{ + httpOpts: &httpListenerOpts{ + useRemoteAddress: false, + direction: http_conn.EGRESS, + rds: rdsName, + }, + }} + + case plugin.ListenerProtocolTCP: + // First identify the bind if it's not set. Then construct the key + // used to look up the listener in the conflict map. + + // Determine the listener address if bind is empty: + // we listen on the service VIP if and only + // if the address is an IP address. If it's a CIDR, we listen on + // 0.0.0.0, and set up a filter chain match for the CIDR range. + // As a small optimization, CIDRs with a /32 prefix will be converted + // into a listener address so that there is a dedicated listener for this + // ip:port. This will reduce the impact of a listener reload. + + if len(listenerOpts.bind) == 0 { + svcListenAddress := pluginParams.Service.GetServiceAddressForProxy(pluginParams.Node) + // We should never get an empty address.
+ // This is a safety guard, in case some platform adapter isn't doing things + // properly + if len(svcListenAddress) > 0 { + if !strings.Contains(svcListenAddress, "/") { + listenerOpts.bind = svcListenAddress + } else { + // Address is a CIDR. Fall back to 0.0.0.0 and + // filter chain match + destinationIPAddress = svcListenAddress + listenerOpts.bind = WildcardAddress + } + } + } + + // could be a unix domain socket or an IP bind + listenerMapKey = fmt.Sprintf("%s:%d", listenerOpts.bind, pluginParams.Port.Port) + + var exists bool + + // Have we already generated a listener for this Port based on user + // specified listener ports? if so, we should not add any more + // services to the port. The user could have specified a sidecar + // resource with one or more explicit ports and then added a catch + // all listener, implying add all other ports as usual. When we are + // iterating through the services for a catchAll egress listener, + // the caller would have set the locked bit for each listener Entry + // in the map. + // + // Check if this TCP listener conflicts with an existing HTTP listener + if currentListenerEntry, exists = listenerMap[listenerMapKey]; exists { + // NOTE: This is not a conflict. This is simply filtering the + // services for a given listener explicitly. + if currentListenerEntry.locked { + return + } + // Check for port collisions between TCP/TLS and HTTP. If + // configured correctly, TCP/TLS ports may not collide. We'll + // need to do additional work to find out if there is a + // collision within TCP/TLS. + if !currentListenerEntry.servicePort.Protocol.IsTCP() { + // NOTE: While pluginParams.Service can be nil, + // this code cannot be reached if Service is nil because a pluginParams.Service can be nil only + // for user defined Egress listeners with ports. And these should occur in the API before + // the wildcard egress listener. the check for the "locked" bit will eliminate the collision. 
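The VIP-versus-CIDR address resolution above can be sketched standalone. `resolveTCPBind` is an illustrative name, and the /32 normalization mentioned in the comment is elided (it happens before this point in the real code):

```go
package main

import (
	"fmt"
	"strings"
)

const wildcard = "0.0.0.0"

// resolveTCPBind mirrors the selection above: a plain service VIP becomes
// the listener bind; a CIDR falls back to the wildcard address plus a
// destination-CIDR filter chain match.
func resolveTCPBind(svcAddr string) (bind, destCIDR string) {
	if svcAddr == "" {
		// Platform adapter gave no address; caller leaves bind empty.
		return "", ""
	}
	if strings.Contains(svcAddr, "/") {
		return wildcard, svcAddr
	}
	return svcAddr, ""
}

func main() {
	bind, cidr := resolveTCPBind("10.1.1.1")
	fmt.Printf("%q %q\n", bind, cidr) // "10.1.1.1" ""
	bind, cidr = resolveTCPBind("10.1.0.0/16")
	fmt.Printf("%q %q\n", bind, cidr) // "0.0.0.0" "10.1.0.0/16"
}
```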
+ // User is also not allowed to add duplicate ports in the egress listener + var newHostname model.Hostname + if pluginParams.Service != nil { + newHostname = pluginParams.Service.Hostname + } else { + // user defined outbound listener via sidecar API + newHostname = "sidecar-config-egress-http-listener" + } + + outboundListenerConflict{ + metric: model.ProxyStatusConflictOutboundListenerHTTPOverTCP, + node: pluginParams.Node, + listenerName: listenerMapKey, + currentServices: currentListenerEntry.services, + currentProtocol: currentListenerEntry.servicePort.Protocol, + newHostname: newHostname, + newProtocol: pluginParams.Port.Protocol, + }.addMetric(pluginParams.Push) + return + } + + // We have a collision with another TCP port. This can happen + // for headless services, or non-k8s services that do not have + // a VIP, or when we have two binds on a unix domain socket or + // on same IP. Unfortunately we won't know if this is a real + // conflict or not until we process the VirtualServices, etc. + // The conflict resolution is done later in this code + } + + meshGateway := map[string]bool{model.IstioMeshGateway: true} + listenerOpts.filterChainOpts = buildSidecarOutboundTCPTLSFilterChainOpts(pluginParams.Env, pluginParams.Node, + pluginParams.Push, virtualServices, + destinationIPAddress, pluginParams.Service, + pluginParams.Port, listenerOpts.proxyLabels, meshGateway) + default: + // UDP or other protocols: no need to log, it's too noisy + return + } + + // Lets build the new listener with the filter chains. 
In the end, we will + // merge the filter chains with any existing listener on the same port/bind point + l := buildListener(listenerOpts) + mutable := &plugin.MutableObjects{ + Listener: l, + FilterChains: make([]plugin.FilterChain, len(l.FilterChains)), + } + + for _, p := range configgen.Plugins { + if err := p.OnOutboundListener(pluginParams, mutable); err != nil { + log.Warn(err.Error()) + } + } + + // Filters are serialized one time into an opaque struct once we have the complete list. + if err := buildCompleteFilterChain(pluginParams, mutable, listenerOpts); err != nil { + log.Warna("buildSidecarOutboundListeners: ", err.Error()) + return + } + + // TODO(rshriram) merge multiple identical filter chains with just a single destination CIDR based + // filter chain match, into a single filter chain and array of destinationcidr matches + + // We checked TCP over HTTP, and HTTP over TCP conflicts above. + // The code below checks for TCP over TCP conflicts and merges listeners + if currentListenerEntry != nil { + // merge the newly built listener with the existing listener + // if and only if the filter chains have distinct conditions + // Extract the current filter chain matches + // For every new filter chain match being added, check if any previous match is same + // if so, skip adding this filter chain with a warning + // This is very unoptimized. + newFilterChains := make([]listener.FilterChain, 0, + len(currentListenerEntry.listener.FilterChains)+len(mutable.Listener.FilterChains)) + newFilterChains = append(newFilterChains, currentListenerEntry.listener.FilterChains...) + + for _, incomingFilterChain := range mutable.Listener.FilterChains { + conflictFound := false + + compareWithExisting: + for _, existingFilterChain := range currentListenerEntry.listener.FilterChains { + if existingFilterChain.FilterChainMatch == nil { + // This is a catch all filter chain. 
+ // We can only merge with a non-catch all filter chain + // Else mark it as conflict + if incomingFilterChain.FilterChainMatch == nil { + // NOTE: While pluginParams.Service can be nil, + // this code cannot be reached if Service is nil because a pluginParams.Service can be nil only + // for user defined Egress listeners with ports. And these should occur in the API before + // the wildcard egress listener. the check for the "locked" bit will eliminate the collision. + // User is also not allowed to add duplicate ports in the egress listener + var newHostname model.Hostname + if pluginParams.Service != nil { + newHostname = pluginParams.Service.Hostname + } else { + // user defined outbound listener via sidecar API + newHostname = "sidecar-config-egress-tcp-listener" + } + + conflictFound = true + outboundListenerConflict{ + metric: model.ProxyStatusConflictOutboundListenerTCPOverTCP, + node: pluginParams.Node, + listenerName: listenerMapKey, + currentServices: currentListenerEntry.services, + currentProtocol: currentListenerEntry.servicePort.Protocol, + newHostname: newHostname, + newProtocol: pluginParams.Port.Protocol, + }.addMetric(pluginParams.Push) + break compareWithExisting + } else { + continue + } + } + if incomingFilterChain.FilterChainMatch == nil { + continue + } + + // We have two non-catch all filter chains. 
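The merge loop in this hunk (append incoming filter chains to the existing listener, skip any whose match duplicates an existing one, and allow at most one catch-all chain) can be sketched standalone. The types below are illustrative stand-ins for Envoy's `FilterChain`/`FilterChainMatch`:

```go
package main

import (
	"fmt"
	"reflect"
)

// match stands in for Envoy's FilterChainMatch; a nil match is a catch-all chain.
type match struct{ destCIDR string }
type chain struct{ m *match }

// mergeChains appends incoming chains to existing ones, dropping any
// incoming chain that conflicts: a duplicate match, or a second catch-all.
func mergeChains(existing, incoming []chain) []chain {
	out := append([]chain{}, existing...)
	for _, in := range incoming {
		conflict := false
		for _, ex := range existing {
			if ex.m == nil || in.m == nil {
				if ex.m == nil && in.m == nil {
					conflict = true // two catch-all chains collide
				}
				continue
			}
			if reflect.DeepEqual(*ex.m, *in.m) {
				conflict = true // identical non-catch-all matches collide
				break
			}
		}
		if !conflict {
			out = append(out, in)
		}
	}
	return out
}

func main() {
	existing := []chain{{m: &match{"10.0.0.0/8"}}, {m: nil}}
	incoming := []chain{{m: &match{"10.0.0.0/8"}}, {m: &match{"192.168.0.0/16"}}, {m: nil}}
	// One duplicate CIDR and one extra catch-all are dropped.
	fmt.Println(len(mergeChains(existing, incoming))) // 3
}
```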
Check for duplicates + if reflect.DeepEqual(*existingFilterChain.FilterChainMatch, *incomingFilterChain.FilterChainMatch) { + var newHostname model.Hostname + if pluginParams.Service != nil { + newHostname = pluginParams.Service.Hostname + } else { + // user defined outbound listener via sidecar API + newHostname = "sidecar-config-egress-tcp-listener" + } + + conflictFound = true + outboundListenerConflict{ + metric: model.ProxyStatusConflictOutboundListenerTCPOverTCP, + node: pluginParams.Node, + listenerName: listenerMapKey, + currentServices: currentListenerEntry.services, + currentProtocol: currentListenerEntry.servicePort.Protocol, + newHostname: newHostname, + newProtocol: pluginParams.Port.Protocol, + }.addMetric(pluginParams.Push) + break compareWithExisting + } + } + + if !conflictFound { + // There is no conflict with any filter chain in the existing listener. + // So append the new filter chains to the existing listener's filter chains + newFilterChains = append(newFilterChains, incomingFilterChain) + if pluginParams.Service != nil { + lEntry := listenerMap[listenerMapKey] + lEntry.services = append(lEntry.services, pluginParams.Service) + } + } + } + currentListenerEntry.listener.FilterChains = newFilterChains + } else { + listenerMap[listenerMapKey] = &outboundListenerEntry{ + services: []*model.Service{pluginParams.Service}, + servicePort: pluginParams.Port, + bind: listenerOpts.bind, + listener: mutable.Listener, + } + } + + if log.DebugEnabled() && len(mutable.Listener.FilterChains) > 1 || currentListenerEntry != nil { + var numChains int + if currentListenerEntry != nil { + numChains = len(currentListenerEntry.listener.FilterChains) + } else { + numChains = len(mutable.Listener.FilterChains) + } + log.Debugf("buildSidecarOutboundListeners: multiple filter chain listener %s with %d chains", mutable.Listener.Name, numChains) + } +} + // buildSidecarInboundMgmtListeners creates inbound TCP only listeners for the management ports on // server 
(inbound). Management port listeners are slightly different from standard Inbound listeners // in that, they do not have mixer filters nor do they have inbound auth. @@ -773,6 +1234,12 @@ func buildSidecarInboundMgmtListeners(node *model.Proxy, env *model.Environment, managementIP = "127.0.0.1" } + // NOTE: We should not generate inbound listeners when the proxy does not have any IPtables traffic capture + // as it would interfere with the workloads listening on the same port + if node.GetInterceptionMode() == model.InterceptionNone { + return nil + } + // assumes that inbound connections/requests are sent to the endpoint address for _, mPort := range managementPorts { switch mPort.Protocol { @@ -790,7 +1257,7 @@ func buildSidecarInboundMgmtListeners(node *model.Proxy, env *model.Environment, }, } listenerOpts := buildListenerOpts{ - ip: managementIP, + bind: managementIP, port: mPort.Port, filterChainOpts: []*filterChainOpts{{ networkFilters: buildInboundNetworkFilters(env, node, instance), @@ -859,7 +1326,8 @@ type buildListenerOpts struct { env *model.Environment proxy *model.Proxy proxyInstances []*model.ServiceInstance - ip string + proxyLabels model.LabelsCollection + bind string port int bindToPort bool filterChainOpts []*filterChainOpts @@ -927,7 +1395,7 @@ func buildHTTPConnectionManager(node *model.Proxy, env *model.Environment, httpO Path: env.Mesh.AccessLogFile, } - if util.Is11Proxy(node) { + if util.IsProxyVersionGE11(node) { buildAccessLog(fl, env) } @@ -1037,8 +1505,10 @@ func buildListener(opts buildListenerOpts) *xdsapi.Listener { } return &xdsapi.Listener{ - Name: fmt.Sprintf("%s_%d", opts.ip, opts.port), - Address: util.BuildAddress(opts.ip, uint32(opts.port)), + // TODO: need to sanitize the opts.bind if its a UDS socket, as it could have colons, that envoy + // doesn't like + Name: fmt.Sprintf("%s_%d", opts.bind, opts.port), + Address: util.BuildAddress(opts.bind, uint32(opts.port)), ListenerFilters: listenerFilters, FilterChains: 
filterChains, DeprecatedV1: deprecatedV1, diff --git a/pilot/pkg/networking/core/v1alpha3/listener_test.go b/pilot/pkg/networking/core/v1alpha3/listener_test.go index 38f14235271c..48b0b35d75d1 100644 --- a/pilot/pkg/networking/core/v1alpha3/listener_test.go +++ b/pilot/pkg/networking/core/v1alpha3/listener_test.go @@ -21,6 +21,7 @@ import ( xdsapi "github.com/envoyproxy/go-control-plane/envoy/api/v2" + networking "istio.io/api/networking/v1alpha3" "istio.io/istio/pilot/pkg/model" "istio.io/istio/pilot/pkg/networking/core/v1alpha3/fakes" "istio.io/istio/pilot/pkg/networking/plugin" @@ -34,10 +35,25 @@ var ( tnow = time.Now() tzero = time.Time{} proxy = model.Proxy{ - Type: model.Sidecar, - IPAddresses: []string{"1.1.1.1"}, - ID: "v0.default", - DNSDomain: "default.example.org", + Type: model.SidecarProxy, + IPAddresses: []string{"1.1.1.1"}, + ID: "v0.default", + DNSDomain: "default.example.org", + Metadata: map[string]string{model.NodeConfigNamespace: "not-default"}, + ConfigNamespace: "not-default", + } + proxyInstances = []*model.ServiceInstance{ + { + Service: &model.Service{ + Hostname: "v0.default.example.org", + Address: "9.9.9.9", + CreationTime: tnow, + Attributes: model.ServiceAttributes{ + Namespace: "not-default", + }, + }, + Labels: nil, + }, } ) @@ -74,7 +90,7 @@ func TestOutboundListenerConflict_TCPWithCurrentTCP(t *testing.T) { buildService("test3.com", "1.2.3.4", model.ProtocolTCP, tnow.Add(2*time.Second)), } p := &fakePlugin{} - listeners := buildOutboundListeners(p, services...) + listeners := buildOutboundListeners(p, nil, services...) 
if len(listeners) != 1 { t.Fatalf("expected %d listeners, found %d", 1, len(listeners)) } @@ -101,6 +117,16 @@ func TestInboundListenerConfig_HTTP(t *testing.T) { // Add a service and verify it's config testInboundListenerConfig(t, buildService("test.com", wildcardIP, model.ProtocolHTTP, tnow)) + testInboundListenerConfigWithSidecar(t, + buildService("test.com", wildcardIP, model.ProtocolHTTP, tnow)) +} + +func TestOutboundListenerConfig_WithSidecar(t *testing.T) { + // Add a service and verify its config + testOutboundListenerConfigWithSidecar(t, + buildService("test1.com", wildcardIP, model.ProtocolHTTP, tnow.Add(1*time.Second)), + buildService("test2.com", wildcardIP, model.ProtocolTCP, tnow), + buildService("test3.com", wildcardIP, model.ProtocolHTTP, tnow.Add(2*time.Second))) } func testOutboundListenerConflict(t *testing.T, services ...*model.Service) { @@ -109,7 +135,7 @@ func testOutboundListenerConflict(t *testing.T, services ...*model.Service) { oldestService := getOldestService(services...) p := &fakePlugin{} - listeners := buildOutboundListeners(p, services...) + listeners := buildOutboundListeners(p, nil, services...) if len(listeners) != 1 { t.Fatalf("expected %d listeners, found %d", 1, len(listeners)) } @@ -134,7 +160,7 @@ func testInboundListenerConfig(t *testing.T, services ...*model.Service) { t.Helper() oldestService := getOldestService(services...) p := &fakePlugin{} - listeners := buildInboundListeners(p, services...) + listeners := buildInboundListeners(p, nil, services...)
if len(listeners) != 1 { t.Fatalf("expected %d listeners, found %d", 1, len(listeners)) } @@ -150,6 +176,76 @@ func testInboundListenerConfig(t *testing.T, services ...*model.Service) { } } +func testInboundListenerConfigWithSidecar(t *testing.T, services ...*model.Service) { + t.Helper() + p := &fakePlugin{} + sidecarConfig := &model.Config{ + ConfigMeta: model.ConfigMeta{ + Name: "foo", + Namespace: "not-default", + }, + Spec: &networking.Sidecar{ + Ingress: []*networking.IstioIngressListener{ + { + Port: &networking.Port{ + Number: 80, + Protocol: "HTTP", + Name: "uds", + }, + Bind: "1.1.1.1", + DefaultEndpoint: "127.0.0.1:80", + }, + }, + }, + } + listeners := buildInboundListeners(p, sidecarConfig, services...) + if len(listeners) != 1 { + t.Fatalf("expected %d listeners, found %d", 1, len(listeners)) + } + + if !isHTTPListener(listeners[0]) { + t.Fatal("expected HTTP listener, found TCP") + } +} + +func testOutboundListenerConfigWithSidecar(t *testing.T, services ...*model.Service) { + t.Helper() + p := &fakePlugin{} + sidecarConfig := &model.Config{ + ConfigMeta: model.ConfigMeta{ + Name: "foo", + Namespace: "not-default", + }, + Spec: &networking.Sidecar{ + Egress: []*networking.IstioEgressListener{ + { + Port: &networking.Port{ + Number: 9000, + Protocol: "HTTP", + Name: "uds", + }, + Bind: "1.1.1.1", + Hosts: []string{"*/*"}, + }, + { + Hosts: []string{"*/*"}, + }, + }, + }, + } + listeners := buildOutboundListeners(p, sidecarConfig, services...) 
+ if len(listeners) != 2 { + t.Fatalf("expected %d listeners, found %d", 2, len(listeners)) + } + + if isHTTPListener(listeners[0]) { + t.Fatal("expected TCP listener on port 8080, found HTTP") + } + if !isHTTPListener(listeners[1]) { + t.Fatal("expected HTTP listener on port 9000, found TCP") + } +} + func verifyOutboundTCPListenerHostname(t *testing.T, l *xdsapi.Listener, hostname model.Hostname) { t.Helper() if len(l.FilterChains) != 1 { @@ -218,7 +314,7 @@ func getOldestService(services ...*model.Service) *model.Service { return oldestService } -func buildOutboundListeners(p plugin.Plugin, services ...*model.Service) []*xdsapi.Listener { +func buildOutboundListeners(p plugin.Plugin, sidecarConfig *model.Config, services ...*model.Service) []*xdsapi.Listener { configgen := NewConfigGenerator([]plugin.Plugin{p}) env := buildListenerEnv(services) @@ -227,16 +323,15 @@ func buildOutboundListeners(p plugin.Plugin, services ...*model.Service) []*xdsa return nil } - instances := make([]*model.ServiceInstance, len(services)) - for i, s := range services { - instances[i] = &model.ServiceInstance{ - Service: s, - } + if sidecarConfig == nil { + proxy.SidecarScope = model.DefaultSidecarScopeForNamespace(env.PushContext, "not-default") + } else { + proxy.SidecarScope = model.ConvertToSidecarScope(env.PushContext, sidecarConfig) } - return configgen.buildSidecarOutboundListeners(&env, &proxy, env.PushContext, instances, services) + return configgen.buildSidecarOutboundListeners(&env, &proxy, env.PushContext, proxyInstances) } -func buildInboundListeners(p plugin.Plugin, services ...*model.Service) []*xdsapi.Listener { +func buildInboundListeners(p plugin.Plugin, sidecarConfig *model.Config, services ...*model.Service) []*xdsapi.Listener { configgen := NewConfigGenerator([]plugin.Plugin{p}) env := buildListenerEnv(services) if err := env.PushContext.InitContext(&env); err != nil { @@ -249,6 +344,11 @@ func buildInboundListeners(p plugin.Plugin, services ...*model.Service) 
[]*xdsap Endpoint: buildEndpoint(s), } } + if sidecarConfig == nil { + proxy.SidecarScope = model.DefaultSidecarScopeForNamespace(env.PushContext, "not-default") + } else { + proxy.SidecarScope = model.ConvertToSidecarScope(env.PushContext, sidecarConfig) + } return configgen.buildSidecarInboundListeners(&env, &proxy, env.PushContext, instances) } @@ -302,6 +402,9 @@ func buildService(hostname string, ip string, protocol model.Protocol, creationT }, }, Resolution: model.Passthrough, + Attributes: model.ServiceAttributes{ + Namespace: "default", + }, } } diff --git a/pilot/pkg/networking/core/v1alpha3/networkfilter.go b/pilot/pkg/networking/core/v1alpha3/networkfilter.go index 82b99d85dac4..4f54771c184d 100644 --- a/pilot/pkg/networking/core/v1alpha3/networkfilter.go +++ b/pilot/pkg/networking/core/v1alpha3/networkfilter.go @@ -37,7 +37,8 @@ var redisOpTimeout = 5 * time.Second // buildInboundNetworkFilters generates a TCP proxy network filter on the inbound path func buildInboundNetworkFilters(env *model.Environment, node *model.Proxy, instance *model.ServiceInstance) []listener.Filter { - clusterName := model.BuildSubsetKey(model.TrafficDirectionInbound, "", instance.Service.Hostname, instance.Endpoint.ServicePort.Port) + clusterName := model.BuildSubsetKey(model.TrafficDirectionInbound, instance.Endpoint.ServicePort.Name, + instance.Service.Hostname, instance.Endpoint.ServicePort.Port) config := &tcp_proxy.TcpProxy{ StatPrefix: clusterName, ClusterSpecifier: &tcp_proxy.TcpProxy_Cluster{Cluster: clusterName}, @@ -53,7 +54,7 @@ func setAccessLogAndBuildTCPFilter(env *model.Environment, node *model.Proxy, co Path: env.Mesh.AccessLogFile, } - if util.Is11Proxy(node) { + if util.IsProxyVersionGE11(node) { buildAccessLog(fl, env) } diff --git a/pilot/pkg/networking/core/v1alpha3/route/retry/retry.go b/pilot/pkg/networking/core/v1alpha3/route/retry/retry.go new file mode 100644 index 000000000000..d65e9469d155 --- /dev/null +++ 
b/pilot/pkg/networking/core/v1alpha3/route/retry/retry.go @@ -0,0 +1,111 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package retry + +import ( + "net/http" + "strconv" + "strings" + + "github.com/envoyproxy/go-control-plane/envoy/api/v2/route" + "github.com/gogo/protobuf/types" + + networking "istio.io/api/networking/v1alpha3" + "istio.io/istio/pilot/pkg/networking/util" +) + +// DefaultPolicy gets a copy of the default retry policy. +func DefaultPolicy() *route.RouteAction_RetryPolicy { + policy := route.RouteAction_RetryPolicy{ + NumRetries: &types.UInt32Value{Value: 10}, + RetryOn: "connect-failure,refused-stream,unavailable,cancelled,resource-exhausted", + RetriableStatusCodes: []uint32{http.StatusServiceUnavailable}, + RetryHostPredicate: []*route.RouteAction_RetryPolicy_RetryHostPredicate{ + { + // to configure retries to prefer hosts that haven’t been attempted already, + // the builtin `envoy.retry_host_predicates.previous_hosts` predicate can be used. + Name: "envoy.retry_host_predicates.previous_hosts", + }, + }, + HostSelectionRetryMaxAttempts: 3, + } + return &policy +} + +// ConvertPolicy converts the given Istio retry policy to an Envoy policy. +// +// If in is nil, DefaultPolicy is returned. +// +// If in.Attempts == 0, returns nil. 
+// +// Otherwise, the returned policy is DefaultPolicy with the following overrides: +// +// - NumRetries: set from in.Attempts +// +// - RetryOn, RetriableStatusCodes: set from in.RetryOn (if specified). RetriableStatusCodes +// is appended when encountering parts that are valid HTTP status codes. +// +// - PerTryTimeout: set from in.PerTryTimeout (if specified) +func ConvertPolicy(in *networking.HTTPRetry) *route.RouteAction_RetryPolicy { + if in == nil { + // No policy was set, use a default. + return DefaultPolicy() + } + + if in.Attempts <= 0 { + // Configuration is explicitly disabling the retry policy. + return nil + } + + // A policy was specified. Start with the default and override with user-provided fields where appropriate. + out := DefaultPolicy() + out.NumRetries = &types.UInt32Value{Value: uint32(in.GetAttempts())} + + if in.RetryOn != "" { + // Allow the incoming configuration to specify both Envoy RetryOn and RetriableStatusCodes. Any integers are + // assumed to be status codes. + out.RetryOn, out.RetriableStatusCodes = parseRetryOn(in.RetryOn) + } + + if in.PerTryTimeout != nil { + d := util.GogoDurationToDuration(in.PerTryTimeout) + out.PerTryTimeout = &d + } + return out +} + +func parseRetryOn(retryOn string) (string, []uint32) { + codes := make([]uint32, 0) + tojoin := make([]string, 0) + + parts := strings.Split(retryOn, ",") + for _, part := range parts { + part = strings.TrimSpace(part) + if part == "" { + continue + } + + // Try converting it to an integer to see if it's a valid HTTP status code. 
+ i, _ := strconv.Atoi(part) + + if http.StatusText(i) != "" { + codes = append(codes, uint32(i)) + } else { + tojoin = append(tojoin, part) + } + } + + return strings.Join(tojoin, ","), codes +} diff --git a/pilot/pkg/networking/core/v1alpha3/route/retry/retry_test.go b/pilot/pkg/networking/core/v1alpha3/route/retry/retry_test.go new file mode 100644 index 000000000000..b85051f91d0b --- /dev/null +++ b/pilot/pkg/networking/core/v1alpha3/route/retry/retry_test.go @@ -0,0 +1,178 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package retry_test + +import ( + "testing" + "time" + + protoTypes "github.com/gogo/protobuf/types" + . "github.com/onsi/gomega" + + networking "istio.io/api/networking/v1alpha3" + "istio.io/istio/pilot/pkg/networking/core/v1alpha3/route/retry" +) + +func TestNilRetryShouldReturnDefault(t *testing.T) { + g := NewGomegaWithT(t) + + // Create a route where no retry policy has been explicitly set. + route := networking.HTTPRoute{} + + policy := retry.ConvertPolicy(route.Retries) + g.Expect(policy).To(Not(BeNil())) + g.Expect(*policy).To(Equal(*retry.DefaultPolicy())) +} + +func TestZeroAttemptsShouldReturnNilPolicy(t *testing.T) { + g := NewGomegaWithT(t) + + // Create a route with a retry policy with zero attempts configured. + route := networking.HTTPRoute{ + Retries: &networking.HTTPRetry{ + // Explicitly not retrying. 
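The condition/status-code split implemented by `parseRetryOn` above can be reproduced as a self-contained sketch (`splitRetryOn` is an illustrative name): comma-separated parts that are valid HTTP status codes move to `RetriableStatusCodes`, everything else (including out-of-range integers like 1000) stays in the Envoy retryOn string.

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"strings"
)

// splitRetryOn partitions a retryOn string into Envoy retry conditions
// and retriable HTTP status codes, trimming whitespace and dropping
// empty parts, mirroring parseRetryOn.
func splitRetryOn(retryOn string) (string, []uint32) {
	codes := make([]uint32, 0)
	keep := make([]string, 0)
	for _, part := range strings.Split(retryOn, ",") {
		part = strings.TrimSpace(part)
		if part == "" {
			continue
		}
		// A part is a status code only if it parses as an integer that
		// net/http recognizes (StatusText returns non-empty).
		if i, err := strconv.Atoi(part); err == nil && http.StatusText(i) != "" {
			codes = append(codes, uint32(i))
		} else {
			keep = append(keep, part)
		}
	}
	return strings.Join(keep, ","), codes
}

func main() {
	on, codes := splitRetryOn("some,fake,5xx,404,conditions,503,1000")
	fmt.Println(on)    // some,fake,5xx,conditions,1000
	fmt.Println(codes) // [404 503]
}
```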
+ Attempts: 0, + }, + } + + policy := retry.ConvertPolicy(route.Retries) + g.Expect(policy).To(BeNil()) +} + +func TestRetryWithAllFieldsSet(t *testing.T) { + g := NewGomegaWithT(t) + + // Create a route with a retry policy with all fields set. + route := networking.HTTPRoute{ + Retries: &networking.HTTPRetry{ + Attempts: 2, + RetryOn: "some,fake,conditions", + PerTryTimeout: &protoTypes.Duration{ + Seconds: 3, + }, + }, + } + + policy := retry.ConvertPolicy(route.Retries) + g.Expect(policy).To(Not(BeNil())) + g.Expect(policy.RetryOn).To(Equal("some,fake,conditions")) + g.Expect(*policy.PerTryTimeout).To(Equal(time.Second * 3)) + g.Expect(policy.NumRetries.Value).To(Equal(uint32(2))) + g.Expect(policy.RetriableStatusCodes).To(Equal(make([]uint32, 0))) + g.Expect(policy.RetryPriority).To(BeNil()) + g.Expect(policy.HostSelectionRetryMaxAttempts).To(Equal(retry.DefaultPolicy().HostSelectionRetryMaxAttempts)) + g.Expect(policy.RetryHostPredicate).To(Equal(retry.DefaultPolicy().RetryHostPredicate)) +} + +func TestRetryOnWithEmptyParts(t *testing.T) { + g := NewGomegaWithT(t) + + // Create a route with a retry policy with empty parts in the retryOn value. + route := networking.HTTPRoute{ + Retries: &networking.HTTPRetry{ + Attempts: 2, + RetryOn: "some,fake,conditions,,,", + }, + } + + policy := retry.ConvertPolicy(route.Retries) + g.Expect(policy).To(Not(BeNil())) + g.Expect(policy.RetryOn).To(Equal("some,fake,conditions")) + g.Expect(policy.RetriableStatusCodes).To(Equal([]uint32{})) +} + +func TestRetryOnWithWhitespace(t *testing.T) { + g := NewGomegaWithT(t) + + // Create a route with a retry policy with whitespace in the retryOn value. + route := networking.HTTPRoute{ + Retries: &networking.HTTPRetry{
+ Attempts: 2, + RetryOn: " some, ,fake , conditions, ,", + }, + } + + policy := retry.ConvertPolicy(route.Retries) + g.Expect(policy).To(Not(BeNil())) + g.Expect(policy.RetryOn).To(Equal("some,fake,conditions")) + g.Expect(policy.RetriableStatusCodes).To(Equal([]uint32{})) +} + +func TestRetryOnContainingStatusCodes(t *testing.T) { + g := NewGomegaWithT(t) + + // Create a route with a retry policy with HTTP status codes in the retryOn value. + route := networking.HTTPRoute{ + Retries: &networking.HTTPRetry{ + Attempts: 2, + RetryOn: "some,fake,5xx,404,conditions,503", + }, + } + + policy := retry.ConvertPolicy(route.Retries) + g.Expect(policy).To(Not(BeNil())) + g.Expect(policy.RetryOn).To(Equal("some,fake,5xx,conditions")) + g.Expect(policy.RetriableStatusCodes).To(Equal([]uint32{404, 503})) +} + +func TestRetryOnWithInvalidStatusCodesShouldAddToRetryOn(t *testing.T) { + g := NewGomegaWithT(t) + + // Create a route with a retry policy with an invalid status code in the retryOn value. + route := networking.HTTPRoute{ + Retries: &networking.HTTPRetry{ + Attempts: 2, + RetryOn: "some,fake,conditions,1000", + }, + } + + policy := retry.ConvertPolicy(route.Retries) + g.Expect(policy).To(Not(BeNil())) + g.Expect(policy.RetryOn).To(Equal("some,fake,conditions,1000")) + g.Expect(policy.RetriableStatusCodes).To(Equal([]uint32{})) +} + +func TestMissingRetryOnShouldReturnDefaults(t *testing.T) { + g := NewGomegaWithT(t) + + // Create a route with a retry policy with no retryOn value set. + route := networking.HTTPRoute{ + Retries: &networking.HTTPRetry{ + Attempts: 2, + }, + } + + policy := retry.ConvertPolicy(route.Retries) + g.Expect(policy).To(Not(BeNil())) + g.Expect(policy.RetryOn).To(Equal(retry.DefaultPolicy().RetryOn)) + g.Expect(policy.RetriableStatusCodes).To(Equal(retry.DefaultPolicy().RetriableStatusCodes)) +} + +func TestMissingPerTryTimeoutShouldReturnNil(t *testing.T) { + g := NewGomegaWithT(t) + + // Create a route with a retry policy with no per-try timeout set.
+ route := networking.HTTPRoute{ + Retries: &networking.HTTPRetry{ + Attempts: 2, + }, + } + + policy := retry.ConvertPolicy(route.Retries) + g.Expect(policy).To(Not(BeNil())) + g.Expect(policy.PerTryTimeout).To(BeNil()) +} diff --git a/pilot/pkg/networking/core/v1alpha3/route/route.go b/pilot/pkg/networking/core/v1alpha3/route/route.go index a05309df492f..f526b214cd86 100644 --- a/pilot/pkg/networking/core/v1alpha3/route/route.go +++ b/pilot/pkg/networking/core/v1alpha3/route/route.go @@ -30,6 +30,7 @@ import ( networking "istio.io/api/networking/v1alpha3" "istio.io/istio/pilot/pkg/model" + "istio.io/istio/pilot/pkg/networking/core/v1alpha3/route/retry" "istio.io/istio/pilot/pkg/networking/util" "istio.io/istio/pkg/log" "istio.io/istio/pkg/proto" @@ -65,21 +66,21 @@ type VirtualHostWrapper struct { Routes []route.Route } -// BuildVirtualHostsFromConfigAndRegistry creates virtual hosts from the given set of virtual services and a list of -// services from the service registry. Services are indexed by FQDN hostnames. -func BuildVirtualHostsFromConfigAndRegistry( +// BuildSidecarVirtualHostsFromConfigAndRegistry creates virtual hosts from +// the given set of virtual services and a list of services from the +// service registry. Services are indexed by FQDN hostnames. 
+func BuildSidecarVirtualHostsFromConfigAndRegistry( node *model.Proxy, push *model.PushContext, serviceRegistry map[model.Hostname]*model.Service, - proxyLabels model.LabelsCollection) []VirtualHostWrapper { + proxyLabels model.LabelsCollection, + virtualServices []model.Config, listenPort int) []VirtualHostWrapper { out := make([]VirtualHostWrapper, 0) - meshGateway := map[string]bool{model.IstioMeshGateway: true} - virtualServices := push.VirtualServices(node, meshGateway) // translate all virtual service configs into virtual hosts for _, virtualService := range virtualServices { - wrappers := buildVirtualHostsForVirtualService(node, push, virtualService, serviceRegistry, proxyLabels, meshGateway) + wrappers := buildSidecarVirtualHostsForVirtualService(node, push, virtualService, serviceRegistry, proxyLabels, listenPort) if len(wrappers) == 0 { // If none of the routes matched by source (i.e. proxyLabels), then discard this entire virtual service continue @@ -108,7 +109,7 @@ func BuildVirtualHostsFromConfigAndRegistry( out = append(out, VirtualHostWrapper{ Port: port.Port, Services: []*model.Service{svc}, - Routes: []route.Route{*BuildDefaultHTTPRoute(cluster, traceOperation)}, + Routes: []route.Route{*BuildDefaultHTTPOutboundRoute(cluster, traceOperation)}, }) } } @@ -144,17 +145,18 @@ func separateVSHostsAndServices(virtualService model.Config, return hosts, servicesInVirtualService } -// buildVirtualHostsForVirtualService creates virtual hosts corresponding to a virtual service. +// buildSidecarVirtualHostsForVirtualService creates virtual hosts corresponding to a virtual service. // Called for each port to determine the list of vhosts on the given port. // It may return an empty list if no VirtualService rule has a matching service. 
-func buildVirtualHostsForVirtualService( +func buildSidecarVirtualHostsForVirtualService( node *model.Proxy, push *model.PushContext, virtualService model.Config, serviceRegistry map[model.Hostname]*model.Service, proxyLabels model.LabelsCollection, - gatewayName map[string]bool) []VirtualHostWrapper { + listenPort int) []VirtualHostWrapper { hosts, servicesInVirtualService := separateVSHostsAndServices(virtualService, serviceRegistry) + // Now group these services by port so that we can infer the destination.port if the user // doesn't specify any port for a multiport service. We need to know the destination port in // order to build the cluster name (outbound|||) @@ -178,9 +180,10 @@ func buildVirtualHostsForVirtualService( // the current code is written. serviceByPort[80] = nil } + meshGateway := map[string]bool{model.IstioMeshGateway: true} out := make([]VirtualHostWrapper, 0, len(serviceByPort)) for port, portServices := range serviceByPort { - routes, err := BuildHTTPRoutesForVirtualService(node, push, virtualService, serviceRegistry, port, proxyLabels, gatewayName) + routes, err := BuildHTTPRoutesForVirtualService(node, push, virtualService, serviceRegistry, listenPort, proxyLabels, meshGateway) if err != nil || len(routes) == 0 { continue } @@ -229,12 +232,13 @@ func GetDestinationCluster(destination *networking.Destination, service *model.S // This is called for each port to compute virtual hosts. // Each VirtualService is tried, with a list of services that listen on the port. // Error indicates the given virtualService can't be used on the port. 
+// This function is used by both the gateway and the sidecar func BuildHTTPRoutesForVirtualService( node *model.Proxy, push *model.PushContext, virtualService model.Config, serviceRegistry map[model.Hostname]*model.Service, - port int, + listenPort int, proxyLabels model.LabelsCollection, gatewayNames map[string]bool) ([]route.Route, error) { @@ -249,13 +253,13 @@ func BuildHTTPRoutesForVirtualService( allroutes: for _, http := range vs.Http { if len(http.Match) == 0 { - if r := translateRoute(push, node, http, nil, port, vsName, serviceRegistry, proxyLabels, gatewayNames); r != nil { + if r := translateRoute(push, node, http, nil, listenPort, vsName, serviceRegistry, proxyLabels, gatewayNames); r != nil { out = append(out, *r) } break allroutes // we have a rule with catch all match prefix: /. Other rules are of no use } else { for _, match := range http.Match { - if r := translateRoute(push, node, http, match, port, vsName, serviceRegistry, proxyLabels, gatewayNames); r != nil { + if r := translateRoute(push, node, http, match, listenPort, vsName, serviceRegistry, proxyLabels, gatewayNames); r != nil { out = append(out, *r) rType, _ := getEnvoyRouteTypeAndVal(r) if rType == envoyCatchAll { @@ -331,7 +335,7 @@ func translateRoute(push *model.PushContext, node *model.Proxy, in *networking.H } else { action := &route.RouteAction{ Cors: translateCORSPolicy(in.CorsPolicy), - RetryPolicy: translateRetryPolicy(in.Retries), + RetryPolicy: retry.ConvertPolicy(in.Retries), } if in.Timeout != nil { @@ -558,32 +562,6 @@ func translateHeaderMatch(name string, in *networking.StringMatch) route.HeaderM return out } -// translateRetryPolicy translates retry policy -func translateRetryPolicy(in *networking.HTTPRetry) *route.RouteAction_RetryPolicy { - if in != nil && in.Attempts > 0 { - d := util.GogoDurationToDuration(in.PerTryTimeout) - // default retry on condition - retryOn := "gateway-error,connect-failure,refused-stream,unavailable,cancelled,resource-exhausted" - if 
in.RetryOn != "" { - retryOn = in.RetryOn - } - return &route.RouteAction_RetryPolicy{ - NumRetries: &types.UInt32Value{Value: uint32(in.GetAttempts())}, - RetryOn: retryOn, - PerTryTimeout: &d, - RetryHostPredicate: []*route.RouteAction_RetryPolicy_RetryHostPredicate{ - { - // to configure retries to prefer hosts that haven’t been attempted already, - // the builtin `envoy.retry_host_predicates.previous_hosts` predicate can be used. - Name: "envoy.retry_host_predicates.previous_hosts", - }, - }, - HostSelectionRetryMaxAttempts: 3, - } - } - return nil -} - // translateCORSPolicy translates CORS policy func translateCORSPolicy(in *networking.CorsPolicy) *route.CorsPolicy { if in == nil { @@ -631,8 +609,8 @@ func getRouteOperation(in *route.Route, vsName string, port int) string { return fmt.Sprintf("%s:%d%s", vsName, port, path) } -// BuildDefaultHTTPRoute builds a default route. -func BuildDefaultHTTPRoute(clusterName string, operation string) *route.Route { +// BuildDefaultHTTPInboundRoute builds a default inbound route. +func BuildDefaultHTTPInboundRoute(clusterName string, operation string) *route.Route { notimeout := 0 * time.Second return &route.Route{ @@ -650,6 +628,16 @@ func BuildDefaultHTTPRoute(clusterName string, operation string) *route.Route { } } +// BuildDefaultHTTPOutboundRoute builds a default outbound route, including a retry policy. +func BuildDefaultHTTPOutboundRoute(clusterName string, operation string) *route.Route { + // Start with the same configuration as for inbound. + out := BuildDefaultHTTPInboundRoute(clusterName, operation) + + // Add a default retry policy for outbound routes. + out.GetRoute().RetryPolicy = retry.DefaultPolicy() + return out +} + // translatePercentToFractionalPercent translates an v1alpha3 Percent instance // to an envoy.type.FractionalPercent instance. 
func translatePercentToFractionalPercent(p *networking.Percent) *xdstype.FractionalPercent { @@ -677,7 +665,7 @@ func translateFault(node *model.Proxy, in *networking.HTTPFaultInjection) *xdsht out := xdshttpfault.HTTPFault{} if in.Delay != nil { out.Delay = &xdsfault.FaultDelay{Type: xdsfault.FaultDelay_FIXED} - if util.Is11Proxy(node) { + if util.IsProxyVersionGE11(node) { if in.Delay.Percentage != nil { out.Delay.Percentage = translatePercentToFractionalPercent(in.Delay.Percentage) } else { @@ -704,7 +692,7 @@ func translateFault(node *model.Proxy, in *networking.HTTPFaultInjection) *xdsht if in.Abort != nil { out.Abort = &xdshttpfault.FaultAbort{} - if util.Is11Proxy(node) { + if util.IsProxyVersionGE11(node) { if in.Abort.Percentage != nil { out.Abort.Percentage = translatePercentToFractionalPercent(in.Abort.Percentage) } else { diff --git a/pilot/pkg/networking/core/v1alpha3/route/route_test.go b/pilot/pkg/networking/core/v1alpha3/route/route_test.go index cbcf04253371..5c3b886b6dc8 100644 --- a/pilot/pkg/networking/core/v1alpha3/route/route_test.go +++ b/pilot/pkg/networking/core/v1alpha3/route/route_test.go @@ -44,7 +44,7 @@ func TestBuildHTTPRoutes(t *testing.T) { } node := &model.Proxy{ - Type: model.Sidecar, + Type: model.SidecarProxy, IPAddresses: []string{"1.1.1.1"}, ID: "someID", DNSDomain: "foo.com", diff --git a/pilot/pkg/networking/core/v1alpha3/tls.go b/pilot/pkg/networking/core/v1alpha3/tls.go index b90a8c1c342f..3a62626183cd 100644 --- a/pilot/pkg/networking/core/v1alpha3/tls.go +++ b/pilot/pkg/networking/core/v1alpha3/tls.go @@ -19,6 +19,7 @@ import ( "istio.io/api/networking/v1alpha3" "istio.io/istio/pilot/pkg/model" + "istio.io/istio/pilot/pkg/networking/util" ) // Match by source labels, the listener port where traffic comes in, the gateway on which the rule is being @@ -62,13 +63,13 @@ func matchTCP(match *v1alpha3.L4MatchAttributes, proxyLabels model.LabelsCollect } // Select the config pertaining to the service being processed. 
-func getConfigsForHost(host model.Hostname, configs []model.Config) []*model.Config { - svcConfigs := make([]*model.Config, 0) +func getConfigsForHost(host model.Hostname, configs []model.Config) []model.Config { + svcConfigs := make([]model.Config, 0) for index := range configs { virtualService := configs[index].Spec.(*v1alpha3.VirtualService) for _, vsHost := range virtualService.Hosts { if model.Hostname(vsHost).Matches(host) { - svcConfigs = append(svcConfigs, &configs[index]) + svcConfigs = append(svcConfigs, configs[index]) break } } @@ -83,7 +84,7 @@ func hashRuntimeTLSMatchPredicates(match *v1alpha3.TLSMatchAttributes) string { func buildSidecarOutboundTLSFilterChainOpts(env *model.Environment, node *model.Proxy, push *model.PushContext, destinationIPAddress string, service *model.Service, listenPort *model.Port, proxyLabels model.LabelsCollection, - gateways map[string]bool, configs []*model.Config) []*filterChainOpts { + gateways map[string]bool, configs []model.Config) []*filterChainOpts { if !listenPort.Protocol.IsTLS() { return nil @@ -125,8 +126,12 @@ func buildSidecarOutboundTLSFilterChainOpts(env *model.Environment, node *model. // Use the service's virtual address first. // But if a virtual service overrides it with its own destination subnet match // give preference to the user provided one + // destinationIPAddress will be empty for unix domain sockets destinationCIDRs := []string{destinationIPAddress} - if len(match.DestinationSubnets) > 0 { + // Only set CIDR match if the listener is bound to an IP. + // If it's bound to a unix domain socket, then ignore the CIDR matches. + // Unix domain socket bound ports have Port value set to 0 + if len(match.DestinationSubnets) > 0 && listenPort.Port > 0 { destinationCIDRs = match.DestinationSubnets } matchHash := hashRuntimeTLSMatchPredicates(match) @@ -146,7 +151,17 @@ func buildSidecarOutboundTLSFilterChainOpts(env *model.Environment, node *model.
// HTTPS or TLS ports without associated virtual service will be treated as opaque TCP traffic. if !hasTLSMatch { - clusterName := model.BuildSubsetKey(model.TrafficDirectionOutbound, "", service.Hostname, listenPort.Port) + var clusterName string + // The service could be nil if we are being called in the context of a sidecar config with + // user-specified port in the egress listener. Since we don't know the destination service + // and this piece of code is establishing the final fallback path, we set the + // tcp proxy cluster to a blackhole cluster + if service != nil { + clusterName = model.BuildSubsetKey(model.TrafficDirectionOutbound, "", service.Hostname, listenPort.Port) + } else { + clusterName = util.BlackHoleCluster + } + out = append(out, &filterChainOpts{ destinationCIDRs: []string{destinationIPAddress}, networkFilters: buildOutboundNetworkFiltersWithSingleDestination(env, node, clusterName, listenPort), @@ -158,7 +173,7 @@ func buildSidecarOutboundTLSFilterChainOpts(env *model.Environment, node *model. func buildSidecarOutboundTCPFilterChainOpts(env *model.Environment, node *model.Proxy, push *model.PushContext, destinationIPAddress string, service *model.Service, listenPort *model.Port, proxyLabels model.LabelsCollection, - gateways map[string]bool, configs []*model.Config) []*filterChainOpts { + gateways map[string]bool, configs []model.Config) []*filterChainOpts { if listenPort.Protocol.IsTLS() { return nil @@ -195,10 +210,11 @@ TcpLoop: // Scan all the match blocks // if we find any match block without a runtime destination subnet match // i.e. match any destination address, then we treat it as the terminal match/catch all match - // and break out of the loop. + // and break out of the loop. We also treat it as a terminal match if the listener is bound + // to a unix domain socket.
// But if we find only runtime destination subnet matches in all match blocks, collect them // (this is similar to virtual hosts in http) and create filter chain match accordingly. - if len(match.DestinationSubnets) == 0 { + if len(match.DestinationSubnets) == 0 || listenPort.Port == 0 { out = append(out, &filterChainOpts{ destinationCIDRs: destinationCIDRs, networkFilters: buildOutboundNetworkFilters(env, node, tcp.Route, push, listenPort, config.ConfigMeta), @@ -221,7 +237,18 @@ TcpLoop: } if !defaultRouteAdded { - clusterName := model.BuildSubsetKey(model.TrafficDirectionOutbound, "", service.Hostname, listenPort.Port) + + var clusterName string + // The service could be nil if we are being called in the context of a sidecar config with + // user-specified port in the egress listener. Since we don't know the destination service + // and this piece of code is establishing the final fallback path, we set the + // tcp proxy cluster to a blackhole cluster + if service != nil { + clusterName = model.BuildSubsetKey(model.TrafficDirectionOutbound, "", service.Hostname, listenPort.Port) + } else { + clusterName = util.BlackHoleCluster + } + out = append(out, &filterChainOpts{ destinationCIDRs: []string{destinationIPAddress}, networkFilters: buildOutboundNetworkFiltersWithSingleDestination(env, node, clusterName, listenPort), @@ -231,12 +258,22 @@ TcpLoop: return out } +// This function can be called for namespaces with the auto-generated sidecar, i.e. once per service and per port. +// OR, it could be called in the context of an egress listener with specific TCP port on a sidecar config. +// In the latter case, there is no service associated with this listen port.
So we have to account for this +// missing service throughout this file func buildSidecarOutboundTCPTLSFilterChainOpts(env *model.Environment, node *model.Proxy, push *model.PushContext, configs []model.Config, destinationIPAddress string, service *model.Service, listenPort *model.Port, proxyLabels model.LabelsCollection, gateways map[string]bool) []*filterChainOpts { out := make([]*filterChainOpts, 0) - svcConfigs := getConfigsForHost(service.Hostname, configs) + var svcConfigs []model.Config + if service != nil { + svcConfigs = getConfigsForHost(service.Hostname, configs) + } else { + svcConfigs = configs + } + out = append(out, buildSidecarOutboundTLSFilterChainOpts(env, node, push, destinationIPAddress, service, listenPort, proxyLabels, gateways, svcConfigs)...) out = append(out, buildSidecarOutboundTCPFilterChainOpts(env, node, push, destinationIPAddress, service, listenPort, diff --git a/pilot/pkg/networking/plugin/authn/authentication.go b/pilot/pkg/networking/plugin/authn/authentication.go index 053d71b7035e..2419ced6376e 100644 --- a/pilot/pkg/networking/plugin/authn/authentication.go +++ b/pilot/pkg/networking/plugin/authn/authentication.go @@ -32,6 +32,7 @@ import ( "istio.io/istio/pilot/pkg/model" "istio.io/istio/pilot/pkg/networking/plugin" "istio.io/istio/pilot/pkg/networking/util" + "istio.io/istio/pkg/features/pilot" "istio.io/istio/pkg/log" protovalue "istio.io/istio/pkg/proto" ) @@ -87,7 +88,7 @@ func GetMutualTLS(policy *authn.Policy) *authn.MutualTls { } // setupFilterChains sets up filter chains based on authentication policy. 
-func setupFilterChains(authnPolicy *authn.Policy, sdsUdsPath string, sdsUseTrustworthyJwt, sdsUseNormalJwt bool) []plugin.FilterChain { +func setupFilterChains(authnPolicy *authn.Policy, sdsUdsPath string, sdsUseTrustworthyJwt, sdsUseNormalJwt bool, meta map[string]string) []plugin.FilterChain { if authnPolicy == nil || len(authnPolicy.Peers) == 0 { return nil } @@ -96,24 +97,33 @@ func setupFilterChains(authnPolicy *authn.Policy, sdsUdsPath string, sdsUseTrust } tls := &auth.DownstreamTlsContext{ CommonTlsContext: &auth.CommonTlsContext{ - // TODO(incfly): should this be {"istio", "http1.1", "h2"}? - // Currently it works: when server is in permissive mode, client sidecar can send tls traffic. + // Note that in the PERMISSIVE mode, we match filter chain on "istio" ALPN, + // which is used to differentiate between service mesh and legacy traffic. + // + // Client sidecar outbound cluster's TLSContext.ALPN must include "istio". + // + // Server sidecar filter chain's FilterChainMatch.ApplicationProtocols must + // include "istio" for the secure traffic, but its TLSContext.ALPN must not + // include "istio", which would interfere with negotiation of the underlying + // protocol, e.g. HTTP/2. 
AlpnProtocols: util.ALPNHttp, }, RequireClientCertificate: protovalue.BoolTrue, } if sdsUdsPath == "" { - tls.CommonTlsContext.ValidationContextType = model.ConstructValidationContext(model.AuthCertsPath+model.RootCertFilename, []string{} /*subjectAltNames*/) + base := meta[pilot.BaseDir] + model.AuthCertsPath + + tls.CommonTlsContext.ValidationContextType = model.ConstructValidationContext(base+model.RootCertFilename, []string{} /*subjectAltNames*/) tls.CommonTlsContext.TlsCertificates = []*auth.TlsCertificate{ { CertificateChain: &core.DataSource{ Specifier: &core.DataSource_Filename{ - Filename: model.AuthCertsPath + model.CertChainFilename, + Filename: base + model.CertChainFilename, }, }, PrivateKey: &core.DataSource{ Specifier: &core.DataSource_Filename{ - Filename: model.AuthCertsPath + model.KeyFilename, + Filename: base + model.KeyFilename, }, }, }, @@ -173,7 +183,7 @@ func setupFilterChains(authnPolicy *authn.Policy, sdsUdsPath string, sdsUseTrust func (Plugin) OnInboundFilterChains(in *plugin.InputParams) []plugin.FilterChain { port := in.ServiceInstance.Endpoint.ServicePort authnPolicy := model.GetConsolidateAuthenticationPolicy(in.Env.IstioConfigStore, in.ServiceInstance.Service, port) - return setupFilterChains(authnPolicy, in.Env.Mesh.SdsUdsPath, in.Env.Mesh.EnableSdsTokenMount, in.Env.Mesh.SdsUseK8SSaJwt) + return setupFilterChains(authnPolicy, in.Env.Mesh.SdsUdsPath, in.Env.Mesh.EnableSdsTokenMount, in.Env.Mesh.SdsUseK8SSaJwt, in.Node.Metadata) } // CollectJwtSpecs returns a list of all JWT specs (pointers) defined the policy. This @@ -260,7 +270,7 @@ func ConvertPolicyToAuthNFilterConfig(policy *authn.Policy, proxyType model.Node switch peer.GetParams().(type) { case *authn.PeerAuthenticationMethod_Mtls: // Only enable mTLS for sidecar, not Ingress/Router for now. 
- if proxyType == model.Sidecar { + if proxyType == model.SidecarProxy { if peer.GetMtls() == nil { peer.Params = &authn.PeerAuthenticationMethod_Mtls{Mtls: &authn.MutualTls{}} } @@ -337,7 +347,7 @@ func (Plugin) OnOutboundListener(in *plugin.InputParams, mutable *plugin.Mutable // Can be used to add additional filters (e.g., mixer filter) or add more stuff to the HTTP connection manager // on the inbound path func (Plugin) OnInboundListener(in *plugin.InputParams, mutable *plugin.MutableObjects) error { - if in.Node.Type != model.Sidecar { + if in.Node.Type != model.SidecarProxy { // Only care about sidecar. return nil } diff --git a/pilot/pkg/networking/plugin/authn/authentication_test.go b/pilot/pkg/networking/plugin/authn/authentication_test.go index 67cda96b05a0..8da03f6d8a77 100644 --- a/pilot/pkg/networking/plugin/authn/authentication_test.go +++ b/pilot/pkg/networking/plugin/authn/authentication_test.go @@ -445,7 +445,7 @@ func TestConvertPolicyToAuthNFilterConfig(t *testing.T) { }, } for _, c := range cases { - if got := ConvertPolicyToAuthNFilterConfig(c.in, model.Sidecar); !reflect.DeepEqual(c.expected, got) { + if got := ConvertPolicyToAuthNFilterConfig(c.in, model.SidecarProxy); !reflect.DeepEqual(c.expected, got) { t.Errorf("Test case %s: expected\n%#v\n, got\n%#v", c.name, c.expected.String(), got.String()) } } @@ -497,7 +497,7 @@ func TestBuildAuthNFilter(t *testing.T) { } for _, c := range cases { - got := BuildAuthNFilter(c.in, model.Sidecar) + got := BuildAuthNFilter(c.in, model.SidecarProxy) if got == nil { if c.expectedFilterConfig != nil { t.Errorf("BuildAuthNFilter(%#v), got: nil, wanted filter with config %s", c.in, c.expectedFilterConfig.String()) @@ -678,7 +678,7 @@ func TestOnInboundFilterChains(t *testing.T) { }, } for _, c := range cases { - if got := setupFilterChains(c.in, c.sdsUdsPath, c.useTrustworthyJwt, c.useNormalJwt); !reflect.DeepEqual(got, c.expected) { + if got := setupFilterChains(c.in, c.sdsUdsPath, c.useTrustworthyJwt, 
c.useNormalJwt, map[string]string{}); !reflect.DeepEqual(got, c.expected) { t.Errorf("[%v] unexpected filter chains, got %v, want %v", c.name, got, c.expected) } } diff --git a/pilot/pkg/networking/plugin/authz/rbac.go b/pilot/pkg/networking/plugin/authz/rbac.go index 744f35c28dba..75ff99c24821 100644 --- a/pilot/pkg/networking/plugin/authz/rbac.go +++ b/pilot/pkg/networking/plugin/authz/rbac.go @@ -27,6 +27,8 @@ import ( "sort" "strings" + "istio.io/istio/pkg/spiffe" + xdsapi "github.com/envoyproxy/go-control-plane/envoy/api/v2" "github.com/envoyproxy/go-control-plane/envoy/api/v2/listener" http_config "github.com/envoyproxy/go-control-plane/envoy/config/filter/http/rbac/v2" @@ -81,7 +83,7 @@ const ( methodHeader = ":method" pathHeader = ":path" - spiffePrefix = "spiffe://" + spiffePrefix = spiffe.Scheme + "://" ) // serviceMetadata is a collection of different kind of information about a service. @@ -312,7 +314,7 @@ func (Plugin) OnInboundFilterChains(in *plugin.InputParams) []plugin.FilterChain // on the inbound path func (Plugin) OnInboundListener(in *plugin.InputParams, mutable *plugin.MutableObjects) error { // Only supports sidecar proxy for now. - if in.Node.Type != model.Sidecar { + if in.Node.Type != model.SidecarProxy { return nil } diff --git a/pilot/pkg/networking/plugin/health/health.go b/pilot/pkg/networking/plugin/health/health.go index 958cd351affe..1d811f9f8777 100644 --- a/pilot/pkg/networking/plugin/health/health.go +++ b/pilot/pkg/networking/plugin/health/health.go @@ -96,7 +96,7 @@ func (Plugin) OnInboundListener(in *plugin.InputParams, mutable *plugin.MutableO return nil } - if in.Node.Type != model.Sidecar { + if in.Node.Type != model.SidecarProxy { // Only care about sidecar. 
return nil } diff --git a/pilot/pkg/networking/plugin/mixer/mixer.go b/pilot/pkg/networking/plugin/mixer/mixer.go index 4afa74c820e2..f1217bacf795 100644 --- a/pilot/pkg/networking/plugin/mixer/mixer.go +++ b/pilot/pkg/networking/plugin/mixer/mixer.go @@ -22,6 +22,7 @@ import ( xdsapi "github.com/envoyproxy/go-control-plane/envoy/api/v2" "github.com/envoyproxy/go-control-plane/envoy/api/v2/core" + e "github.com/envoyproxy/go-control-plane/envoy/api/v2/endpoint" "github.com/envoyproxy/go-control-plane/envoy/api/v2/listener" "github.com/envoyproxy/go-control-plane/envoy/api/v2/route" http_conn "github.com/envoyproxy/go-control-plane/envoy/config/filter/network/http_connection_manager/v2" @@ -132,7 +133,31 @@ func (mixerplugin) OnInboundListener(in *plugin.InputParams, mutable *plugin.Mut // OnOutboundCluster implements the Plugin interface method. func (mixerplugin) OnOutboundCluster(in *plugin.InputParams, cluster *xdsapi.Cluster) { - // do nothing + if !in.Env.Mesh.SidecarToTelemetrySessionAffinity { + // if session affinity is not enabled, do nothing + return + } + withoutPort := strings.Split(in.Env.Mesh.MixerReportServer, ":") + if strings.Contains(cluster.Name, withoutPort[0]) { + // config telemetry service discovery to be strict_dns for session affinity. + // To enable session affinity, DNS needs to provide only one and the same telemetry instance IP + // (e.g. in k8s, telemetry service spec needs to have SessionAffinity: ClientIP) + cluster.Type = xdsapi.Cluster_STRICT_DNS + addr := util.BuildAddress(in.Service.Address, uint32(in.Port.Port)) + cluster.LoadAssignment = &xdsapi.ClusterLoadAssignment{ + ClusterName: cluster.Name, + Endpoints: []e.LocalityLbEndpoints{ + { + LbEndpoints: []e.LbEndpoint{ + { + Endpoint: &e.Endpoint{Address: &addr}, + }, + }, + }, + }, + } + cluster.EdsClusterConfig = nil + } } // OnInboundCluster implements the Plugin interface method. 
diff --git a/pilot/pkg/networking/plugin/mixer/mixer_test.go b/pilot/pkg/networking/plugin/mixer/mixer_test.go deleted file mode 100644 index 0c9ff3090f68..000000000000 --- a/pilot/pkg/networking/plugin/mixer/mixer_test.go +++ /dev/null @@ -1,142 +0,0 @@ -// Copyright 2018 Istio Authors -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package mixer - -import ( - "testing" - - meshconfig "istio.io/api/mesh/v1alpha1" - - "istio.io/istio/pilot/pkg/model" - "istio.io/istio/pilot/pkg/networking/plugin" -) - -func TestDisablePolicyChecks(t *testing.T) { - disablePolicyChecks := &plugin.InputParams{ - ListenerProtocol: plugin.ListenerProtocolHTTP, - Node: &model.Proxy{ - Type: model.Router, - }, - Env: &model.Environment{ - Mesh: &meshconfig.MeshConfig{ - DisablePolicyChecks: true, - }, - }, - } - - enablePolicyChecks := &plugin.InputParams{ - ListenerProtocol: plugin.ListenerProtocolHTTP, - Node: &model.Proxy{ - Type: model.Router, - }, - Env: &model.Environment{ - Mesh: &meshconfig.MeshConfig{ - DisablePolicyChecks: false, - }, - }, - } - - disableClientPolicyChecksParams := &plugin.InputParams{ - ListenerProtocol: plugin.ListenerProtocolHTTP, - Node: &model.Proxy{ - Type: model.Sidecar, - }, - Env: &model.Environment{ - Mesh: &meshconfig.MeshConfig{ - EnableClientSidePolicyCheck: false, - }, - }, - } - - enableClientPolicyChecks := &plugin.InputParams{ - ListenerProtocol: plugin.ListenerProtocolHTTP, - Node: &model.Proxy{ - Type: model.Sidecar, - }, - Env: 
&model.Environment{ - Mesh: &meshconfig.MeshConfig{ - EnableClientSidePolicyCheck: true, - }, - }, - } - - testCases := []struct { - name string - inputParams *plugin.InputParams - result bool - }{ - { - name: "disable policy checks", - inputParams: disablePolicyChecks, - result: true, - }, - { - name: "enable policy checks", - inputParams: enablePolicyChecks, - result: false, - }, - { - name: "disable client policy checks", - inputParams: disableClientPolicyChecksParams, - result: true, - }, - { - name: "enable client policy checks", - inputParams: enableClientPolicyChecks, - result: false, - }, - } - - for _, tc := range testCases { - ret := disableClientPolicyChecks(tc.inputParams.Env.Mesh, tc.inputParams.Node) - - if tc.result != ret { - t.Errorf("%s: expecting %v but got %v", tc.name, tc.result, ret) - } - } -} - -func TestOnInboundListener(t *testing.T) { - mixerCheckServerNotPresent := &plugin.InputParams{ - ListenerProtocol: plugin.ListenerProtocolHTTP, - Env: &model.Environment{ - Mesh: &meshconfig.MeshConfig{ - MixerCheckServer: "", - MixerReportServer: "", - }, - }, - } - testCases := []struct { - name string - inputParams *plugin.InputParams - mutableParams *plugin.MutableObjects - result error - }{ - { - name: "mixer check and report server not available", - inputParams: mixerCheckServerNotPresent, - result: nil, - }, - } - - for _, tc := range testCases { - p := NewPlugin() - ret := p.OnInboundListener(tc.inputParams, tc.mutableParams) - - if tc.result != ret { - t.Errorf("%s: expecting %v but got %v", tc.name, tc.result, ret) - } - } -} diff --git a/pilot/pkg/networking/plugin/plugin.go b/pilot/pkg/networking/plugin/plugin.go index 64f5810e5c20..21ad165f6925 100644 --- a/pilot/pkg/networking/plugin/plugin.go +++ b/pilot/pkg/networking/plugin/plugin.go @@ -82,6 +82,12 @@ type InputParams struct { // For outbound/inbound sidecars this is the service port (not endpoint port) // For inbound listener on gateway, this is the gateway server port Port 
*model.Port + // Bind holds the listener IP or unix domain socket to which this listener is bound + // if bind is using UDS, the port will be 0 with valid protocol and name + Bind string + // SidecarConfig holds the Sidecar CRD associated with this listener + SidecarConfig *model.Config + // The subset associated with the service for which the cluster is being programmed Subset string // Push holds stats and other information about the current push. diff --git a/pilot/pkg/networking/util/util.go b/pilot/pkg/networking/util/util.go index 552a79c6aaf3..4790c089e97c 100644 --- a/pilot/pkg/networking/util/util.go +++ b/pilot/pkg/networking/util/util.go @@ -86,12 +86,22 @@ func ConvertAddressToCidr(addr string) *core.CidrRange { return cidr } -// BuildAddress returns a SocketAddress with the given ip and port. -func BuildAddress(ip string, port uint32) core.Address { +// BuildAddress returns a SocketAddress with the given ip and port or uds. +func BuildAddress(bind string, port uint32) core.Address { + if len(bind) > 0 && strings.HasPrefix(bind, model.UnixAddressPrefix) { + return core.Address{ + Address: &core.Address_Pipe{ + Pipe: &core.Pipe{ + Path: bind, + }, + }, + } + } + return core.Address{ Address: &core.Address_SocketAddress{ SocketAddress: &core.SocketAddress{ - Address: ip, + Address: bind, PortSpecifier: &core.SocketAddress_PortValue{ PortValue: port, }, @@ -212,20 +222,13 @@ func SortVirtualHosts(hosts []route.VirtualHost) { }) } -// isProxyVersion checks whether the given Proxy version matches the supplied prefix. -func isProxyVersion(node *model.Proxy, prefix string) bool { - ver, found := node.GetProxyVersion() - return found && strings.HasPrefix(ver, prefix) -} - -// Is1xProxy checks whether the given Proxy version is 1.x. -func Is1xProxy(node *model.Proxy) bool { - return isProxyVersion(node, "1.") -} - -// Is11Proxy checks whether the given Proxy version is 1.1. 
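The util.go hunk here replaces the prefix-based `isProxyVersion`/`Is11Proxy` helpers with `IsProxyVersionGE11`, which relies on a plain lexicographic string comparison (`ver >= "1.1"`). That gives the intended answer for every version exercised by the tests, but string order and numeric order diverge in general (lexicographically `"10.0" < "2.0"`). A minimal sketch of a component-wise numeric comparison, using a hypothetical `versionGE` helper (not part of this PR), illustrates the difference:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// versionGE reports whether version a >= version b, comparing
// dot-separated components numerically instead of as raw strings.
// (Hypothetical helper; the PR itself uses a string comparison.)
func versionGE(a, b string) bool {
	as := strings.Split(a, ".")
	bs := strings.Split(b, ".")
	for i := 0; i < len(as) || i < len(bs); i++ {
		var ai, bi int
		if i < len(as) {
			ai, _ = strconv.Atoi(as[i])
		}
		if i < len(bs) {
			bi, _ = strconv.Atoi(bs[i])
		}
		if ai != bi {
			return ai > bi
		}
	}
	return true // all components equal
}

func main() {
	fmt.Println(versionGE("1.1.1", "1.1")) // true
	fmt.Println(versionGE("10.0", "2.0"))  // true; a raw string compare would say false
}
```

The PR's `ver >= "1.1"` cutoff happens to hold for any version beginning with a digit 1-9, but would misorder pairs like `"10.0"` vs `"2.0"`; a component-wise compare avoids that class of bug.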
-func Is11Proxy(node *model.Proxy) bool { - return isProxyVersion(node, "1.1") +// IsProxyVersionGE11 checks whether the given Proxy version is greater than or equals 1.1. +func IsProxyVersionGE11(node *model.Proxy) bool { + ver, _ := node.GetProxyVersion() + if ver >= "1.1" { + return true + } + return false } // ResolveHostsInNetworksConfig will go through the Gateways addresses for all diff --git a/pilot/pkg/networking/util/util_test.go b/pilot/pkg/networking/util/util_test.go index c2f89a0a0c21..98563ef201a2 100644 --- a/pilot/pkg/networking/util/util_test.go +++ b/pilot/pkg/networking/util/util_test.go @@ -97,12 +97,11 @@ func TestGetNetworkEndpointAddress(t *testing.T) { } } -func Test_isProxyVersion(t *testing.T) { +func TestIsProxyVersionGE11(t *testing.T) { tests := []struct { - name string - node *model.Proxy - prefix string - want bool + name string + node *model.Proxy + want bool }{ { "the given Proxy version is 1.x", @@ -111,8 +110,7 @@ func Test_isProxyVersion(t *testing.T) { "ISTIO_PROXY_VERSION": "1.0", }, }, - "1.", - true, + false, }, { "the given Proxy version is not 1.x", @@ -121,7 +119,6 @@ func Test_isProxyVersion(t *testing.T) { "ISTIO_PROXY_VERSION": "0.8", }, }, - "1.", false, }, { @@ -131,14 +128,40 @@ func Test_isProxyVersion(t *testing.T) { "ISTIO_PROXY_VERSION": "1.1", }, }, - "1.1", + true, + }, + { + "the given Proxy version is 1.1.1", + &model.Proxy{ + Metadata: map[string]string{ + "ISTIO_PROXY_VERSION": "1.1.1", + }, + }, + true, + }, + { + "the given Proxy version is 2.0", + &model.Proxy{ + Metadata: map[string]string{ + "ISTIO_PROXY_VERSION": "2.0", + }, + }, + true, + }, + { + "the given Proxy version is 10.0", + &model.Proxy{ + Metadata: map[string]string{ + "ISTIO_PROXY_VERSION": "10.0", + }, + }, true, }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { - if got := isProxyVersion(tt.node, tt.prefix); got != tt.want { - t.Errorf("isProxyVersion() = %v, want %v", got, tt.want) + if got := 
IsProxyVersionGE11(tt.node); got != tt.want { + t.Errorf("IsProxyVersionGE11() = %v, want %v", got, tt.want) } }) } diff --git a/pilot/pkg/proxy/envoy/infra_auth.go b/pilot/pkg/proxy/envoy/infra_auth.go index 67ecef8c7f42..ba55caae4f26 100644 --- a/pilot/pkg/proxy/envoy/infra_auth.go +++ b/pilot/pkg/proxy/envoy/infra_auth.go @@ -15,7 +15,7 @@ package envoy import ( - "fmt" + "istio.io/istio/pkg/spiffe" ) const ( @@ -24,20 +24,12 @@ const ( pilotSvcAccName string = "istio-pilot-service-account" ) -func getSAN(domain, ns, svcAccName string) []string { - // spiffe SAN for services is of the form. e.g. for pilot - // "spiffe://cluster.local/ns/istio-system/sa/istio-pilot-service-account" - svcSAN := fmt.Sprintf("spiffe://%v/ns/%v/sa/%v", domain, ns, svcAccName) - svcSANs := []string{svcSAN} - return svcSANs -} - // GetMixerSAN returns the SAN used for mixer mTLS -func GetMixerSAN(domain, ns string) []string { - return getSAN(domain, ns, mixerSvcAccName) +func GetMixerSAN(ns string) string { + return spiffe.MustGenSpiffeURI(ns, mixerSvcAccName) } // GetPilotSAN returns the SAN used for pilot mTLS -func GetPilotSAN(domain, ns string) []string { - return getSAN(domain, ns, pilotSvcAccName) +func GetPilotSAN(ns string) string { + return spiffe.MustGenSpiffeURI(ns, pilotSvcAccName) } diff --git a/pilot/pkg/proxy/envoy/infra_auth_test.go b/pilot/pkg/proxy/envoy/infra_auth_test.go index 6a49b3f8c704..262893185392 100644 --- a/pilot/pkg/proxy/envoy/infra_auth_test.go +++ b/pilot/pkg/proxy/envoy/infra_auth_test.go @@ -17,6 +17,8 @@ package envoy import ( "strings" "testing" + + "istio.io/istio/pkg/spiffe" ) const ( @@ -25,21 +27,17 @@ const ( ) func TestGetMixerSAN(t *testing.T) { - mixerSANs := GetMixerSAN("cluster.local", "istio-system") - if len(mixerSANs) != 1 { - t.Errorf("unexpected length of pilot SAN %d", len(mixerSANs)) - } - if strings.Compare(mixerSANs[0], expMixerSAN) != 0 { + spiffe.SetTrustDomain("cluster.local") + mixerSANs := GetMixerSAN("istio-system") + if 
strings.Compare(mixerSANs, expMixerSAN) != 0 { t.Errorf("GetMixerSAN() => expected %#v but got %#v", expMixerSAN, mixerSANs) } } func TestGetPilotSAN(t *testing.T) { - pilotSANs := GetPilotSAN("cluster.local", "istio-system") - if len(pilotSANs) != 1 { - t.Errorf("unexpected length of pilot SAN %d", len(pilotSANs)) - } - if strings.Compare(pilotSANs[0], expPilotSAN) != 0 { + spiffe.SetTrustDomain("cluster.local") + pilotSANs := GetPilotSAN("istio-system") + if strings.Compare(pilotSANs, expPilotSAN) != 0 { t.Errorf("GetPilotSAN() => expected %#v but got %#v", expPilotSAN, pilotSANs) } } diff --git a/pilot/pkg/proxy/envoy/proxy.go b/pilot/pkg/proxy/envoy/proxy.go index d7e288b55694..7b7be6f76a32 100644 --- a/pilot/pkg/proxy/envoy/proxy.go +++ b/pilot/pkg/proxy/envoy/proxy.go @@ -103,9 +103,6 @@ func (e *envoy) Run(config interface{}, epoch int, abort <-chan error) error { // spin up a new Envoy process args := e.args(fname, epoch) - if len(e.config.CustomConfigFile) == 0 { - args = append(args, "--v2-config-only") - } log.Infof("Envoy command: %v", args) /* #nosec */ diff --git a/pilot/pkg/proxy/envoy/v2/README.md b/pilot/pkg/proxy/envoy/v2/README.md index ad9453d264f7..8fcac1e25706 100644 --- a/pilot/pkg/proxy/envoy/v2/README.md +++ b/pilot/pkg/proxy/envoy/v2/README.md @@ -120,7 +120,7 @@ What we log and how to use it: - sidecar connecting to pilot: "EDS/CSD/LDS: REQ ...". This includes the node, IP and the discovery request proto. Should show up when the sidecar starts up. - sidecar disconnecting from pilot: xDS: close. This happens when a pod is stopped. -- push events - whenever we push a config the the sidecar. +- push events - whenever we push a config to the sidecar. - "XDS: Registry event..." - indicates a registry event, should be followed by PUSH messages for each endpoint. 
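The infra_auth.go hunk has `GetMixerSAN`/`GetPilotSAN` delegate to the spiffe package with a process-wide trust domain instead of formatting the URI inline. The shape the deleted `getSAN` helper produced can be sketched as follows (`genSpiffeURI` is a stand-in for illustration, not the real `spiffe` API, which also manages the trust domain globally):

```go
package main

import "fmt"

// genSpiffeURI mirrors the format the deleted getSAN helper built:
//   spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>
// (Sketch only; spiffe.MustGenSpiffeURI reads the trust domain from
// package state set via spiffe.SetTrustDomain.)
func genSpiffeURI(trustDomain, ns, serviceAccount string) string {
	return fmt.Sprintf("spiffe://%s/ns/%s/sa/%s", trustDomain, ns, serviceAccount)
}

func main() {
	// Matches the example in the removed comment: the pilot SAN.
	fmt.Println(genSpiffeURI("cluster.local", "istio-system", "istio-pilot-service-account"))
}
```

The refactor also changes the return type from a one-element slice to a single string, which is why the tests above drop their length checks.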
- "EDS: no instances": pay close attention to this event, it indicates that Envoy asked for diff --git a/pilot/pkg/proxy/envoy/v2/ads.go b/pilot/pkg/proxy/envoy/v2/ads.go index 19f9c2e26cab..66ccd4510f45 100644 --- a/pilot/pkg/proxy/envoy/v2/ads.go +++ b/pilot/pkg/proxy/envoy/v2/ads.go @@ -211,9 +211,6 @@ type XdsConnection struct { // same info can be sent to all clients, without recomputing. pushChannel chan *XdsEvent - // doneChannel will be closed when the client is closed. - doneChannel chan struct{} - // TODO: migrate other fields as needed from model.Proxy and replace it //HttpConnectionManagers map[string]*http_conn.HttpConnectionManager @@ -320,7 +317,7 @@ func (s *DiscoveryServer) configDump(conn *XdsConnection) (*adminapi.ConfigDump, type XdsEvent struct { // If not empty, it is used to indicate the event is caused by a change in the clusters. // Only EDS for the listed clusters will be sent. - edsUpdatedServices map[string]*EndpointShardsByService + edsUpdatedServices map[string]*EndpointShards push *model.PushContext @@ -332,7 +329,6 @@ type XdsEvent struct { func newXdsConnection(peerAddr string, stream DiscoveryStream) *XdsConnection { return &XdsConnection{ pushChannel: make(chan *XdsEvent), - doneChannel: make(chan struct{}), PeerAddr: peerAddr, Clusters: []string{}, Connect: time.Now(), @@ -358,7 +354,12 @@ func receiveThread(con *XdsConnection, reqChannel chan *xdsapi.DiscoveryRequest, totalXDSInternalErrors.Add(1) return } - reqChannel <- req + select { + case reqChannel <- req: + case <-con.stream.Context().Done(): + adsLog.Errorf("ADS: %q %s terminated with stream closed", con.PeerAddr, con.ConID) + return + } } } @@ -369,7 +370,6 @@ func (s *DiscoveryServer) StreamAggregatedResources(stream ads.AggregatedDiscove if ok { peerAddr = peerInfo.Addr.String() } - var discReq *xdsapi.DiscoveryRequest t0 := time.Now() // rate limit the herd, after restart all endpoints will reconnect to the @@ -390,7 +390,6 @@ func (s *DiscoveryServer) 
StreamAggregatedResources(stream ads.AggregatedDiscove return err } con := newXdsConnection(peerAddr, stream) - defer close(con.doneChannel) // Do not call: defer close(con.pushChannel) ! // the push channel will be garbage collected when the connection is no longer used. @@ -409,7 +408,7 @@ func (s *DiscoveryServer) StreamAggregatedResources(stream ads.AggregatedDiscove for { // Block until either a request is received or a push is triggered. select { - case discReq, ok = <-reqChannel: + case discReq, ok := <-reqChannel: if !ok { // Remote side closed connection. return receiveError @@ -630,10 +629,10 @@ func (s *DiscoveryServer) initConnectionNode(discReq *xdsapi.DiscoveryRequest, c } con.mu.Unlock() - s.globalPushContext().UpdateNodeIsolation(con.modelNode) - - // TODO - // use networkScope to update the list of namespace and service dependencies + // Set the sidecarScope associated with this proxy if its a sidecar. + if con.modelNode.Type == model.SidecarProxy { + s.globalPushContext().SetSidecarScope(con.modelNode) + } return nil } @@ -659,8 +658,11 @@ func (s *DiscoveryServer) pushConnection(con *XdsConnection, pushEv *XdsEvent) e return nil } - // This needs to be called to update list of visible services in the node. - pushEv.push.UpdateNodeIsolation(con.modelNode) + // Precompute the sidecar scope associated with this proxy if its a sidecar type. + // Saves compute cycles in networking code + if con.modelNode.Type == model.SidecarProxy { + pushEv.push.SetSidecarScope(con.modelNode) + } adsLog.Infof("Pushing %v", con.ConID) @@ -735,7 +737,7 @@ func AdsPushAll(s *DiscoveryServer) { // Primary code path is from v1 discoveryService.clearCache(), which is added as a handler // to the model ConfigStorageCache and Controller. 
func (s *DiscoveryServer) AdsPushAll(version string, push *model.PushContext, - full bool, edsUpdates map[string]*EndpointShardsByService) { + full bool, edsUpdates map[string]*EndpointShards) { if !full { s.edsIncremental(version, push, edsUpdates) return @@ -773,7 +775,7 @@ func (s *DiscoveryServer) AdsPushAll(version string, push *model.PushContext, // Send a signal to all connections, with a push event. func (s *DiscoveryServer) startPush(version string, push *model.PushContext, full bool, - edsUpdates map[string]*EndpointShardsByService) { + edsUpdates map[string]*EndpointShards) { // Push config changes, iterating over connected envoys. This cover ADS and EDS(0.7), both share // the same connection table @@ -834,7 +836,7 @@ func (s *DiscoveryServer) startPush(version string, push *model.PushContext, ful }: client.LastPush = time.Now() client.LastPushFailure = timeZero - case <-client.doneChannel: // connection was closed + case <-client.stream.Context().Done(): // grpc stream was closed adsLog.Infof("Client closed connection %v", client.ConID) case <-time.After(PushTimeout): // This may happen to some clients if the other side is in a bad state and can't receive. diff --git a/pilot/pkg/proxy/envoy/v2/cds.go b/pilot/pkg/proxy/envoy/v2/cds.go index ea69a1b44230..f06bbcb3db1f 100644 --- a/pilot/pkg/proxy/envoy/v2/cds.go +++ b/pilot/pkg/proxy/envoy/v2/cds.go @@ -58,7 +58,7 @@ func (s *DiscoveryServer) pushCds(con *XdsConnection, push *model.PushContext, v response := con.clusters(rawClusters) err = con.send(response) if err != nil { - adsLog.Warnf("CDS: Send failure, closing grpc %s: %v", con.modelNode.ID, err) + adsLog.Warnf("CDS: Send failure %s: %v", con.ConID, err) pushes.With(prometheus.Labels{"type": "cds_senderr"}).Add(1) return err } @@ -66,7 +66,7 @@ func (s *DiscoveryServer) pushCds(con *XdsConnection, push *model.PushContext, v // The response can't be easily read due to 'any' marshaling. 
adsLog.Infof("CDS: PUSH %s for %s %q, Clusters: %d, Services %d", version, - con.modelNode.ID, con.PeerAddr, len(rawClusters), len(push.Services(nil))) + con.ConID, con.PeerAddr, len(rawClusters), len(push.Services(nil))) return nil } diff --git a/pilot/pkg/proxy/envoy/v2/discovery.go b/pilot/pkg/proxy/envoy/v2/discovery.go index 8a9f8e51cb94..2434e27c3406 100644 --- a/pilot/pkg/proxy/envoy/v2/discovery.go +++ b/pilot/pkg/proxy/envoy/v2/discovery.go @@ -122,9 +122,9 @@ type DiscoveryServer struct { // shards. mutex sync.RWMutex - // EndpointShardsByService for a service. This is a global (per-server) list, built from + // EndpointShards for a service. This is a global (per-server) list, built from // incremental updates. - EndpointShardsByService map[string]*EndpointShardsByService + EndpointShardsByService map[string]*EndpointShards // WorkloadsById keeps track of information about a workload, based on direct notifications // from registry. This acts as a cache and allows detecting changes. @@ -135,7 +135,7 @@ type DiscoveryServer struct { // updated. This should only be used in the xDS server - will be removed/made private in 1.1, // once the last v1 pieces are cleaned. For 1.0.3+ it is used only for tracking incremental // pushes between the 2 packages. - edsUpdates map[string]*EndpointShardsByService + edsUpdates map[string]*EndpointShards updateChannel chan *updateReq @@ -148,15 +148,17 @@ type updateReq struct { full bool } -// EndpointShardsByService holds the set of endpoint shards of a service. Registries update +// EndpointShards holds the set of endpoint shards of a service. Registries update // individual shards incrementally. The shards are aggregated and split into // clusters when a push for the specific cluster is needed. -type EndpointShardsByService struct { +type EndpointShards struct { + // mutex protecting below map. + mutex sync.RWMutex // Shards is used to track the shards. EDS updates are grouped by shard. 
// Current implementation uses the registry name as key - in multicluster this is the // name of the k8s cluster, derived from the config (secret). - Shards map[string]*EndpointShard + Shards map[string][]*model.IstioEndpoint // ServiceAccounts has the concatenation of all service accounts seen so far in endpoints. // This is updated on push, based on shards. If the previous list is different than @@ -166,14 +168,6 @@ type EndpointShardsByService struct { ServiceAccounts map[string]bool } -// EndpointShard contains all the endpoints for a single shard (subset) of a service. -// Shards are updated atomically by registries. A registry may split a service into -// multiple shards (for example each deployment, or smaller sub-sets). -type EndpointShard struct { - Shard string - Entries []*model.IstioEndpoint -} - // Workload has the minimal info we need to detect if we need to push workloads, and to // cache data to avoid expensive model allocations. type Workload struct { @@ -201,9 +195,9 @@ func NewDiscoveryServer(env *model.Environment, generator core.ConfigGenerator, Env: env, ConfigGenerator: generator, ConfigController: configCache, - EndpointShardsByService: map[string]*EndpointShardsByService{}, + EndpointShardsByService: map[string]*EndpointShards{}, WorkloadsByID: map[string]*Workload{}, - edsUpdates: map[string]*EndpointShardsByService{}, + edsUpdates: map[string]*EndpointShards{}, concurrentPushLimit: make(chan struct{}, 20), // TODO(hzxuzhonghu): support configuration updateChannel: make(chan *updateReq, 10), } @@ -298,7 +292,7 @@ func (s *DiscoveryServer) periodicRefreshMetrics() { // Push is called to push changes on config updates using ADS. This is set in DiscoveryService.Push, // to avoid direct dependencies. 
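The discovery.go hunk collapses the intermediate `EndpointShard` struct into `EndpointShards`: a mutex-guarded map from shard name (the registry / k8s cluster) directly to the endpoint slice. A simplified sketch of that shape, with plain strings standing in for `*model.IstioEndpoint`:

```go
package main

import (
	"fmt"
	"sync"
)

// endpointShards is a pared-down stand-in for the new EndpointShards
// type: shard name -> endpoint list, guarded by an RWMutex because
// registries write shards while pushes read them.
type endpointShards struct {
	mutex  sync.RWMutex
	Shards map[string][]string
}

// allEndpoints merges every shard under a read lock, the same pattern
// updateClusterInc follows when assembling a cluster load assignment.
func (e *endpointShards) allEndpoints() []string {
	e.mutex.RLock()
	defer e.mutex.RUnlock()
	var out []string
	for _, eps := range e.Shards {
		out = append(out, eps...)
	}
	return out
}

func main() {
	es := &endpointShards{Shards: map[string][]string{
		"cluster-1": {"10.0.0.1", "10.0.0.2"},
		"cluster-2": {"10.1.0.1"},
	}}
	fmt.Println(len(es.allEndpoints())) // 3
}
```

Dropping the wrapper struct removes one allocation and one field (`Shard`) that duplicated the map key.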
-func (s *DiscoveryServer) Push(full bool, edsUpdates map[string]*EndpointShardsByService) { +func (s *DiscoveryServer) Push(full bool, edsUpdates map[string]*EndpointShards) { if !full { adsLog.Infof("XDS Incremental Push EDS:%d", len(edsUpdates)) go s.AdsPushAll(versionInfo(), s.globalPushContext(), false, edsUpdates) @@ -373,17 +367,14 @@ func (s *DiscoveryServer) ClearCache() { func (s *DiscoveryServer) doPush(full bool) { // more config update events may happen while doPush is processing. // we don't want to lose updates. - s.updateMutex.Lock() s.mutex.Lock() // Swap the edsUpdates map - tracking requests for incremental updates. // The changes to the map are protected by ds.mutex. edsUpdates := s.edsUpdates // Reset - any new updates will be tracked by the new map - s.edsUpdates = map[string]*EndpointShardsByService{} + s.edsUpdates = map[string]*EndpointShards{} s.mutex.Unlock() - s.updateMutex.Unlock() - s.Push(full, edsUpdates) } diff --git a/pilot/pkg/proxy/envoy/v2/eds.go b/pilot/pkg/proxy/envoy/v2/eds.go index d54b27f5cf01..e8dadfa6f281 100644 --- a/pilot/pkg/proxy/envoy/v2/eds.go +++ b/pilot/pkg/proxy/envoy/v2/eds.go @@ -193,14 +193,24 @@ func networkEndpointToEnvoyEndpoint(e *model.NetworkEndpoint) (*endpoint.LbEndpo } // updateClusterInc computes an envoy cluster assignment from the service shards. +// TODO: this code is incorrect. With config scoping, two sidecars can get +// a cluster of same name but with different set of endpoints. See the +// explanation below for more details func (s *DiscoveryServer) updateClusterInc(push *model.PushContext, clusterName string, edsCluster *EdsCluster) error { var hostname model.Hostname - var port int var subsetName string _, subsetName, hostname, port = model.ParseSubsetKey(clusterName) + + // TODO: BUG. this code is incorrect. 
With destination rule scoping + // (public/private) as well as sidecar scopes allowing import of + // specific destination rules, the destination rule for a given + // namespace should be determined based on the sidecar scope or the + // proxy's config namespace. As such, this code searches through all + // destination rules, public and private and returns a completely + // arbitrary destination rule's subset labels! labels := push.SubsetToLabels(subsetName, hostname) portMap, f := push.ServicePort2Name[string(hostname)] @@ -221,32 +231,35 @@ func (s *DiscoveryServer) updateClusterInc(push *model.PushContext, clusterName cnt := 0 localityEpMap := make(map[string]*endpoint.LocalityLbEndpoints) + se.mutex.RLock() // The shards are updated independently, now need to filter and merge // for this cluster - for _, es := range se.Shards { - for _, el := range es.Entries { - if svcPort.Name != el.ServicePortName { + for _, endpoints := range se.Shards { + for _, ep := range endpoints { + if svcPort.Name != ep.ServicePortName { continue } // Port labels - if !labels.HasSubsetOf(model.Labels(el.Labels)) { + if !labels.HasSubsetOf(model.Labels(ep.Labels)) { continue } cnt++ - locLbEps, found := localityEpMap[el.Locality] + locLbEps, found := localityEpMap[ep.Locality] if !found { locLbEps = &endpoint.LocalityLbEndpoints{ - Locality: util.ConvertLocality(el.Locality), + Locality: util.ConvertLocality(ep.Locality), } - localityEpMap[el.Locality] = locLbEps + localityEpMap[ep.Locality] = locLbEps } - if el.EnvoyEndpoint == nil { - el.EnvoyEndpoint = buildEnvoyLbEndpoint(el.UID, el.Family, el.Address, el.EndpointPort, el.Network) + if ep.EnvoyEndpoint == nil { + ep.EnvoyEndpoint = buildEnvoyLbEndpoint(ep.UID, ep.Family, ep.Address, ep.EndpointPort, ep.Network) } - locLbEps.LbEndpoints = append(locLbEps.LbEndpoints, *el.EnvoyEndpoint) + locLbEps.LbEndpoints = append(locLbEps.LbEndpoints, *ep.EnvoyEndpoint) } } + se.mutex.RUnlock() + locEps := make([]endpoint.LocalityLbEndpoints, 
0, len(localityEpMap)) for _, locLbEps := range localityEpMap { locLbEps.LoadBalancingWeight = &types.UInt32Value{ @@ -254,6 +267,8 @@ func (s *DiscoveryServer) updateClusterInc(push *model.PushContext, clusterName } locEps = append(locEps, *locLbEps) } + // Normalize LoadBalancingWeight in range [1, 128] + locEps = LoadBalancingWeightNormalize(locEps) if cnt == 0 { push.Add(model.ProxyStatusClusterNoInstances, clusterName, nil, "") @@ -268,9 +283,6 @@ func (s *DiscoveryServer) updateClusterInc(push *model.PushContext, clusterName edsCluster.mutex.Lock() defer edsCluster.mutex.Unlock() - // Normalize LoadBalancingWeight in range [1, 128] - locEps = LoadBalancingWeightNormalize(locEps) - edsCluster.LoadAssignment = &xdsapi.ClusterLoadAssignment{ ClusterName: clusterName, Endpoints: locEps, @@ -411,8 +423,6 @@ func (s *DiscoveryServer) updateCluster(push *model.PushContext, clusterName str // SvcUpdate is a callback from service discovery when service info changes. func (s *DiscoveryServer) SvcUpdate(cluster, hostname string, ports map[string]uint32, rports map[uint32]string) { pc := s.globalPushContext() - s.mutex.Lock() - defer s.mutex.Unlock() if cluster == "" { pl := model.PortList{} for k, v := range ports { @@ -421,14 +431,16 @@ func (s *DiscoveryServer) SvcUpdate(cluster, hostname string, ports map[string]u Name: k, }) } + pc.Mutex.Lock() pc.ServicePort2Name[hostname] = pl + pc.Mutex.Unlock() } // TODO: for updates from other clusters, warn if they don't match primary. } // Update clusters for an incremental EDS push, and initiate the push. // Only clusters that changed are updated/pushed. 
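The eds.go hunk above also moves `LoadBalancingWeightNormalize` so it runs before the empty-cluster check, keeping locality weights in the `[1, 128]` range the comment documents. The exact algorithm isn't shown in this diff; the sketch below is one plausible scale-and-clamp on plain integers (hypothetical `normalizeWeights`, not the real function, which operates on `LocalityLbEndpoints`):

```go
package main

import "fmt"

// normalizeWeights scales positive weights so the largest becomes at
// most 128 while no weight drops below 1, matching the documented
// [1, 128] target range. (Assumed behavior; sketch only.)
func normalizeWeights(weights []uint32) []uint32 {
	var max uint32
	for _, w := range weights {
		if w > max {
			max = w
		}
	}
	if max <= 128 {
		return weights // already in range, nothing to do
	}
	out := make([]uint32, len(weights))
	for i, w := range weights {
		scaled := w * 128 / max
		if scaled < 1 {
			scaled = 1 // never silence a locality entirely
		}
		out[i] = scaled
	}
	return out
}

func main() {
	fmt.Println(normalizeWeights([]uint32{1000, 500, 3})) // [128 64 1]
}
```

Clamping the floor to 1 matters: a locality rounded down to weight 0 would otherwise receive no traffic at all.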
-func (s *DiscoveryServer) edsIncremental(version string, push *model.PushContext, edsUpdates map[string]*EndpointShardsByService) { +func (s *DiscoveryServer) edsIncremental(version string, push *model.PushContext, edsUpdates map[string]*EndpointShards) { adsLog.Infof("XDS:EDSInc Pushing %s Services: %v, "+ "ConnectedEndpoints: %d", version, edsUpdates, adsClientCount()) t0 := time.Now() @@ -501,15 +513,15 @@ func (s *DiscoveryServer) WorkloadUpdate(id string, labels map[string]string, an // the hostname-keyed map. And it avoids the conversion from Endpoint to ServiceEntry to envoy // on each step: instead the conversion happens once, when an endpoint is first discovered. func (s *DiscoveryServer) EDSUpdate(shard, serviceName string, - entries []*model.IstioEndpoint) error { - s.edsUpdate(shard, serviceName, entries, false) + istioEndpoints []*model.IstioEndpoint) error { + s.edsUpdate(shard, serviceName, istioEndpoints, false) return nil } // edsUpdate updates edsUpdates by shard, serviceName, IstioEndpoints, // and requests a full/eds push. func (s *DiscoveryServer) edsUpdate(shard, serviceName string, - entries []*model.IstioEndpoint, internal bool) { + istioEndpoints []*model.IstioEndpoint, internal bool) { // edsShardUpdate replaces a subset (shard) of endpoints, as result of an incremental // update. The endpoint updates may be grouped by K8S clusters, other service registries // or by deployment. Multiple updates are debounced, to avoid too frequent pushes. @@ -525,8 +537,8 @@ func (s *DiscoveryServer) edsUpdate(shard, serviceName string, // This endpoint is for a service that was not previously loaded. // Return an error to force a full sync, which will also cause the // EndpointsShardsByService to be initialized with all services. 
- ep = &EndpointShardsByService{ - Shards: map[string]*EndpointShard{}, + ep = &EndpointShards{ + Shards: map[string][]*model.IstioEndpoint{}, ServiceAccounts: map[string]bool{}, } s.EndpointShardsByService[serviceName] = ep @@ -538,13 +550,7 @@ func (s *DiscoveryServer) edsUpdate(shard, serviceName string, // 2. Update data for the specific cluster. Each cluster gets independent // updates containing the full list of endpoints for the service in that cluster. - ce := &EndpointShard{ - Shard: shard, - Entries: []*model.IstioEndpoint{}, - } - - for _, e := range entries { - ce.Entries = append(ce.Entries, e) + for _, e := range istioEndpoints { if e.ServiceAccount != "" { _, f = ep.ServiceAccounts[e.ServiceAccount] if !f && !internal { @@ -555,7 +561,9 @@ func (s *DiscoveryServer) edsUpdate(shard, serviceName string, } } } - ep.Shards[shard] = ce + ep.mutex.Lock() + ep.Shards[shard] = istioEndpoints + ep.mutex.Unlock() s.edsUpdates[serviceName] = ep // for internal update: this called by DiscoveryServer.Push --> updateServiceShards, @@ -608,7 +616,7 @@ func connectionID(node string) string { // pushEds is pushing EDS updates for a single connection. Called the first time // a client connects, for incremental updates and for full periodic updates. 
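The `edsUpdate` hunk above replaces a shard's endpoint list wholesale under the shard mutex and, while iterating the incoming endpoints, records any service account not seen before (the surrounding code uses that signal to escalate beyond an incremental EDS push). A simplified sketch of the two-part update (stand-in types; the real code tracks `*model.IstioEndpoint`):

```go
package main

import (
	"fmt"
	"sync"
)

// shardSet mimics the edsUpdate flow: replace one shard's endpoints
// under a lock and report whether a new service account appeared.
type shardSet struct {
	mu              sync.Mutex
	shards          map[string][]string // shard -> endpoint addresses
	serviceAccounts map[string]bool     // accounts seen so far
}

func newShardSet() *shardSet {
	return &shardSet{shards: map[string][]string{}, serviceAccounts: map[string]bool{}}
}

// update swaps in the full endpoint list for shard and returns true
// if any account in accounts had not been seen before.
func (s *shardSet) update(shard string, endpoints, accounts []string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	newAccount := false
	for _, a := range accounts {
		if !s.serviceAccounts[a] {
			s.serviceAccounts[a] = true
			newAccount = true
		}
	}
	s.shards[shard] = endpoints // wholesale replacement, as in the PR
	return newAccount
}

func main() {
	s := newShardSet()
	fmt.Println(s.update("cluster-1", []string{"10.0.0.1"}, []string{"sa-1"})) // true: sa-1 is new
	fmt.Println(s.update("cluster-1", []string{"10.0.0.2"}, []string{"sa-1"})) // false: sa-1 known
}
```

Replacing the slice in one assignment is what lets the diff delete the per-endpoint append loop that previously rebuilt an `EndpointShard`.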
func (s *DiscoveryServer) pushEds(push *model.PushContext, con *XdsConnection, - full bool, edsUpdatedServices map[string]*EndpointShardsByService) error { + full bool, edsUpdatedServices map[string]*EndpointShards) error { resAny := []types.Any{} emptyClusters := 0 @@ -668,7 +676,7 @@ func (s *DiscoveryServer) pushEds(push *model.PushContext, con *XdsConnection, response := s.endpoints(con.Clusters, resAny) err := con.send(response) if err != nil { - adsLog.Warnf("EDS: Send failure, closing grpc %v", err) + adsLog.Warnf("EDS: Send failure %s: %v", con.ConID, err) pushes.With(prometheus.Labels{"type": "eds_senderr"}).Add(1) return err } diff --git a/pilot/pkg/proxy/envoy/v2/eds_test.go b/pilot/pkg/proxy/envoy/v2/eds_test.go index 648f4eb9b754..ac8bb6960bdc 100644 --- a/pilot/pkg/proxy/envoy/v2/eds_test.go +++ b/pilot/pkg/proxy/envoy/v2/eds_test.go @@ -338,8 +338,8 @@ func multipleRequest(server *bootstrap.Server, inc bool, nclients, // This will be throttled - we want to trigger a single push //server.EnvoyXdsServer.MemRegistry.SetEndpoints(edsIncSvc, // newEndpointWithAccount("127.0.0.2", "hello-sa", "v1")) - updates := map[string]*v2.EndpointShardsByService{ - edsIncSvc: &v2.EndpointShardsByService{}, + updates := map[string]*v2.EndpointShards{ + edsIncSvc: &v2.EndpointShards{}, } server.EnvoyXdsServer.AdsPushAll(strconv.Itoa(j), server.EnvoyXdsServer.Env.PushContext, false, updates) } else { diff --git a/pilot/pkg/proxy/envoy/v2/ep_filters.go b/pilot/pkg/proxy/envoy/v2/ep_filters.go index 22e20135d4cb..f03585e37f71 100644 --- a/pilot/pkg/proxy/envoy/v2/ep_filters.go +++ b/pilot/pkg/proxy/envoy/v2/ep_filters.go @@ -22,7 +22,6 @@ import ( "istio.io/istio/pilot/pkg/model" "istio.io/istio/pilot/pkg/networking/util" - "istio.io/istio/pkg/features/pilot" ) // EndpointsFilterFunc is a function that filters data from the ClusterLoadAssignment and returns updated one @@ -34,7 +33,7 @@ type EndpointsFilterFunc func(endpoints []endpoint.LocalityLbEndpoints, conn *Xd // 
Information for the mesh networks is provided as a MeshNetwork config map. func EndpointsByNetworkFilter(endpoints []endpoint.LocalityLbEndpoints, conn *XdsConnection, env *model.Environment) []endpoint.LocalityLbEndpoints { // If the sidecar does not specify a network, ignore Split Horizon EDS and return all - network, found := conn.modelNode.Metadata[pilot.NodeMetadataNetwork] + network, found := conn.modelNode.Metadata[model.NodeMetadataNetwork] if !found { // Couldn't find the sidecar network, using default/local network = "" diff --git a/pilot/pkg/proxy/envoy/v2/lds.go b/pilot/pkg/proxy/envoy/v2/lds.go index 6d24386fd42a..93e0ab5b8576 100644 --- a/pilot/pkg/proxy/envoy/v2/lds.go +++ b/pilot/pkg/proxy/envoy/v2/lds.go @@ -35,17 +35,9 @@ func (s *DiscoveryServer) pushLds(con *XdsConnection, push *model.PushContext, _ con.LDSListeners = rawListeners } response := ldsDiscoveryResponse(rawListeners, version) - if version != versionInfo() { - // Just report for now - after debugging we can suppress the push. - // Change1 -> push1 - // Change2 (after few seconds ) -> push2 - // push1 may take 10 seconds and be slower - and a sidecar may get - // LDS from push2 first, followed by push1 - which will be out of date. 
- adsLog.Warnf("LDS: overlap %s %s %s", con.ConID, version, versionInfo()) - } err = con.send(response) if err != nil { - adsLog.Warnf("LDS: Send failure, closing grpc %v", err) + adsLog.Warnf("LDS: Send failure %s: %v", con.ConID, err) pushes.With(prometheus.Labels{"type": "lds_senderr"}).Add(1) return err } diff --git a/pilot/pkg/proxy/envoy/v2/lds_test.go b/pilot/pkg/proxy/envoy/v2/lds_test.go index 8a5f4050b47f..c42627ef01a6 100644 --- a/pilot/pkg/proxy/envoy/v2/lds_test.go +++ b/pilot/pkg/proxy/envoy/v2/lds_test.go @@ -16,12 +16,159 @@ package v2_test import ( "io/ioutil" "testing" + "time" "istio.io/istio/pilot/pkg/model" + "istio.io/istio/pkg/adsc" "istio.io/istio/pkg/test/env" "istio.io/istio/tests/util" ) +// TestLDS using isolated namespaces +func TestLDSIsolated(t *testing.T) { + + _, tearDown := initLocalPilotTestEnv(t) + defer tearDown() + + // Sidecar in 'none' mode + t.Run("sidecar_none", func(t *testing.T) { + // TODO: add a Service with EDS resolution in the none ns. + // The ServiceEntry only allows STATIC - both STATIC and EDS should generate TCP listeners on :port + // while DNS and NONE should generate old-style bind ports. + // Right now 'STATIC' and 'EDS' result in ClientSideLB in the internal object, so listener test is valid. 
+ + ldsr, err := adsc.Dial(util.MockPilotGrpcAddr, "", &adsc.Config{ + Meta: map[string]string{ + model.NodeMetadataInterceptionMode: string(model.InterceptionNone), + }, + IP: "10.11.0.1", // matches none.yaml s1tcp.none + Namespace: "none", + }) + if err != nil { + t.Fatal(err) + } + defer ldsr.Close() + + ldsr.Watch() + + _, err = ldsr.Wait("rds", 50000*time.Second) + if err != nil { + t.Fatal("Failed to receive LDS", err) + return + } + + err = ldsr.Save(env.IstioOut + "/none") + if err != nil { + t.Fatal(err) + } + + // s1http - inbound HTTP on 7071 (forwarding to app on 30000 + 7071 - or custom port) + // All outbound on http proxy + if len(ldsr.HTTPListeners) != 3 { + // TODO: we are still debating if for HTTP services we have any use case to create a 127.0.0.1:port outbound + // for the service (the http proxy is already covering this) + t.Error("HTTP listeners, expecting 3 got ", len(ldsr.HTTPListeners), ldsr.HTTPListeners) + } + + // s1tcp:2000 outbound, bind=true (to reach other instances of the service) + // s1:5005 outbound, bind=true + // :443 - https external, bind=false + // 10.11.0.1_7070, bind=true -> inbound|2000|s1 - on port 7070, fwd to 37070 + // virtual + if len(ldsr.TCPListeners) == 0 { + t.Fatal("No response") + } + + for _, s := range []string{"lds_tcp", "lds_http", "rds", "cds", "ecds"} { + want, err := ioutil.ReadFile(env.IstioOut + "/none_" + s + ".json") + if err != nil { + t.Fatal(err) + } + got, err := ioutil.ReadFile("testdata/none_" + s + ".json") + if err != nil { + t.Fatal(err) + } + + if err = util.Compare(got, want); err != nil { + // Just log for now - golden changes every time there is a config generation update. + // It is mostly intended as a reference for what is generated - we need to add explicit checks + // for things we need, like the number of expected listeners. 
+ t.Logf("error in golden file %s %v", s, err) + } + } + + // TODO: check bind==true + // TODO: verify listeners for outbound are on 127.0.0.1 (not yet), port 2000, 2005, 2007 + // TODO: verify virtual listeners for unsupported cases + // TODO: add and verify SNI listener on 127.0.0.1:443 + // TODO: verify inbound service port is on 127.0.0.1, and containerPort on 0.0.0.0 + // TODO: BUG, SE with empty endpoints is rejected - it is actually valid config (service may not have endpoints) + }) + + // Test for the examples in the ServiceEntry doc + t.Run("se_example", func(t *testing.T) { + // TODO: add a Service with EDS resolution in the none ns. + // The ServiceEntry only allows STATIC - both STATIC and EDS should generate TCP listeners on :port + // while DNS and NONE should generate old-style bind ports. + // Right now 'STATIC' and 'EDS' result in ClientSideLB in the internal object, so listener test is valid. + + ldsr, err := adsc.Dial(util.MockPilotGrpcAddr, "", &adsc.Config{ + Meta: map[string]string{}, + IP: "10.12.0.1", // matches none.yaml s1tcp.none + Namespace: "seexamples", + }) + if err != nil { + t.Fatal(err) + } + defer ldsr.Close() + + ldsr.Watch() + + _, err = ldsr.Wait("rds", 50000*time.Second) + if err != nil { + t.Fatal("Failed to receive LDS", err) + return + } + + err = ldsr.Save(env.IstioOut + "/seexample") + if err != nil { + t.Fatal(err) + } + }) + + // Test for the examples in the ServiceEntry doc + t.Run("se_examplegw", func(t *testing.T) { + // TODO: add a Service with EDS resolution in the none ns. + // The ServiceEntry only allows STATIC - both STATIC and EDS should generate TCP listeners on :port + // while DNS and NONE should generate old-style bind ports. + // Right now 'STATIC' and 'EDS' result in ClientSideLB in the internal object, so listener test is valid. 
+ + ldsr, err := adsc.Dial(util.MockPilotGrpcAddr, "", &adsc.Config{ + Meta: map[string]string{}, + IP: "10.13.0.1", + Namespace: "exampleegressgw", + }) + if err != nil { + t.Fatal(err) + } + defer ldsr.Close() + + ldsr.Watch() + + _, err = ldsr.Wait("rds", 50000*time.Second) + if err != nil { + t.Fatal("Failed to receive RDS", err) + return + } + + err = ldsr.Save(env.IstioOut + "/seexample-eg") + if err != nil { + t.Fatal(err) + } + }) + +} + // TestLDS is running LDSv2 tests. func TestLDS(t *testing.T) { _, tearDown := initLocalPilotTestEnv(t) @@ -109,3 +256,10 @@ func TestLDS(t *testing.T) { // TODO: dynamic checks ( see EDS ) } + +// TODO: helper to test the http listener content +// - file access log +// - generate request id +// - cors, fault, router filters +// - tracing +// diff --git a/pilot/pkg/proxy/envoy/v2/mem.go b/pilot/pkg/proxy/envoy/v2/mem.go index ef9a4767a975..71b71dec7364 100644 --- a/pilot/pkg/proxy/envoy/v2/mem.go +++ b/pilot/pkg/proxy/envoy/v2/mem.go @@ -19,6 +19,8 @@ import ( "fmt" "sync" + "istio.io/istio/pkg/spiffe" + "istio.io/istio/pilot/pkg/model" ) @@ -55,7 +57,7 @@ func (c *MemServiceController) Run(<-chan struct{}) {} // MemServiceDiscovery is a mock discovery interface type MemServiceDiscovery struct { services map[model.Hostname]*model.Service - // EndpointShardsByService table. Key is the fqdn of the service, ':', port + // EndpointShards table. 
Key is the fqdn of the service, ':', port instancesByPortNum map[string][]*model.ServiceInstance instancesByPortName map[string][]*model.ServiceInstance @@ -344,8 +346,8 @@ func (sd *MemServiceDiscovery) GetIstioServiceAccounts(hostname model.Hostname, defer sd.mutex.Unlock() if hostname == "world.default.svc.cluster.local" { return []string{ - "spiffe://cluster.local/ns/default/sa/serviceaccount1", - "spiffe://cluster.local/ns/default/sa/serviceaccount2", + spiffe.MustGenSpiffeURI("default", "serviceaccount1"), + spiffe.MustGenSpiffeURI("default", "serviceaccount2"), } } return make([]string, 0) diff --git a/pilot/pkg/proxy/envoy/v2/rds.go b/pilot/pkg/proxy/envoy/v2/rds.go index d5275979cc36..fae01a99951d 100644 --- a/pilot/pkg/proxy/envoy/v2/rds.go +++ b/pilot/pkg/proxy/envoy/v2/rds.go @@ -42,7 +42,7 @@ func (s *DiscoveryServer) pushRoute(con *XdsConnection, push *model.PushContext) response := routeDiscoveryResponse(rawRoutes) err = con.send(response) if err != nil { - adsLog.Warnf("ADS: RDS: Send failure for node %v, closing grpc %v", con.modelNode, err) + adsLog.Warnf("ADS: RDS: Send failure %v: %v", con.modelNode.ID, err) pushes.With(prometheus.Labels{"type": "rds_senderr"}).Add(1) return err } diff --git a/pilot/pkg/proxy/envoy/v2/testdata/none_cds.json b/pilot/pkg/proxy/envoy/v2/testdata/none_cds.json new file mode 100644 index 000000000000..65bbc3fad2d2 --- /dev/null +++ b/pilot/pkg/proxy/envoy/v2/testdata/none_cds.json @@ -0,0 +1,211 @@ +{ + "BlackHoleCluster": { + "name": "BlackHoleCluster", + "connect_timeout": 1000000000, + "LbConfig": null + }, + "PassthroughCluster": { + "name": "PassthroughCluster", + "type": 4, + "connect_timeout": 1000000000, + "lb_policy": 4, + "LbConfig": null + }, + "inbound|2001|httplocal|s1http.none": { + "name": "inbound|2001|httplocal|s1http.none", + "connect_timeout": 1000000000, + "load_assignment": { + "cluster_name": "inbound|2001|httplocal|s1http.none", + "endpoints": [ + { + "lb_endpoints": [ + { + "endpoint": { + 
"address": { + "Address": { + "SocketAddress": { + "address": "127.0.0.1", + "PortSpecifier": { + "PortValue": 7071 + } + } + } + } + } + } + ] + } + ] + }, + "circuit_breakers": { + "thresholds": [ + {} + ] + }, + "LbConfig": null + }, + "outbound|2006||s2dns.external.test.istio.io": { + "name": "outbound|2006||s2dns.external.test.istio.io", + "type": 1, + "connect_timeout": 1000000000, + "load_assignment": { + "cluster_name": "outbound|2006||s2dns.external.test.istio.io", + "endpoints": [ + { + "lb_endpoints": null + }, + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "Address": { + "SocketAddress": { + "address": "s2dns.external.test.istio.io", + "PortSpecifier": { + "PortValue": 2006 + } + } + } + } + }, + "load_balancing_weight": { + "value": 1 + } + } + ] + } + ] + }, + "circuit_breakers": { + "thresholds": [ + {} + ] + }, + "dns_lookup_family": 1, + "LbConfig": null + }, + "outbound|2007||tcpmeshdns.seexamples.svc": { + "name": "outbound|2007||tcpmeshdns.seexamples.svc", + "type": 1, + "connect_timeout": 1000000000, + "load_assignment": { + "cluster_name": "outbound|2007||tcpmeshdns.seexamples.svc", + "endpoints": [ + { + "lb_endpoints": null + }, + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "Address": { + "SocketAddress": { + "address": "tcpmeshdns.seexamples.svc", + "PortSpecifier": { + "PortValue": 2007 + } + } + } + } + }, + "load_balancing_weight": { + "value": 1 + } + } + ] + } + ] + }, + "circuit_breakers": { + "thresholds": [ + {} + ] + }, + "dns_lookup_family": 1, + "LbConfig": null + }, + "outbound|443||api1.facebook.com": { + "name": "outbound|443||api1.facebook.com", + "type": 1, + "connect_timeout": 1000000000, + "load_assignment": { + "cluster_name": "outbound|443||api1.facebook.com", + "endpoints": [ + { + "lb_endpoints": null + }, + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "Address": { + "SocketAddress": { + "address": "api1.facebook.com", + "PortSpecifier": { + "PortValue": 443 + } + } + } + } + 
}, + "load_balancing_weight": { + "value": 1 + } + } + ] + } + ] + }, + "circuit_breakers": { + "thresholds": [ + {} + ] + }, + "dns_lookup_family": 1, + "LbConfig": null + }, + "outbound|443||www1.googleapis.com": { + "name": "outbound|443||www1.googleapis.com", + "type": 1, + "connect_timeout": 1000000000, + "load_assignment": { + "cluster_name": "outbound|443||www1.googleapis.com", + "endpoints": [ + { + "lb_endpoints": null + }, + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "Address": { + "SocketAddress": { + "address": "www1.googleapis.com", + "PortSpecifier": { + "PortValue": 443 + } + } + } + } + }, + "load_balancing_weight": { + "value": 1 + } + } + ] + } + ] + }, + "circuit_breakers": { + "thresholds": [ + {} + ] + }, + "dns_lookup_family": 1, + "LbConfig": null + } + } \ No newline at end of file diff --git a/pilot/pkg/proxy/envoy/v2/testdata/none_ecds.json b/pilot/pkg/proxy/envoy/v2/testdata/none_ecds.json new file mode 100644 index 000000000000..1576ee45c04f --- /dev/null +++ b/pilot/pkg/proxy/envoy/v2/testdata/none_ecds.json @@ -0,0 +1,117 @@ +{ + "outbound|2000||s1tcp.none": { + "name": "outbound|2000||s1tcp.none", + "type": 3, + "eds_cluster_config": { + "eds_config": { + "ConfigSourceSpecifier": { + "Ads": {} + } + }, + "service_name": "outbound|2000||s1tcp.none" + }, + "connect_timeout": 1000000000, + "load_assignment": { + "cluster_name": "outbound|2000||s1tcp.none", + "endpoints": null + }, + "circuit_breakers": { + "thresholds": [ + {} + ] + }, + "LbConfig": null + }, + "outbound|2001||s1http.none": { + "name": "outbound|2001||s1http.none", + "type": 3, + "eds_cluster_config": { + "eds_config": { + "ConfigSourceSpecifier": { + "Ads": {} + } + }, + "service_name": "outbound|2001||s1http.none" + }, + "connect_timeout": 1000000000, + "load_assignment": { + "cluster_name": "outbound|2001||s1http.none", + "endpoints": null + }, + "circuit_breakers": { + "thresholds": [ + {} + ] + }, + "LbConfig": null + }, + 
"outbound|2005||s2.external.test.istio.io": { + "name": "outbound|2005||s2.external.test.istio.io", + "type": 3, + "eds_cluster_config": { + "eds_config": { + "ConfigSourceSpecifier": { + "Ads": {} + } + }, + "service_name": "outbound|2005||s2.external.test.istio.io" + }, + "connect_timeout": 1000000000, + "load_assignment": { + "cluster_name": "outbound|2005||s2.external.test.istio.io", + "endpoints": null + }, + "circuit_breakers": { + "thresholds": [ + {} + ] + }, + "LbConfig": null + }, + "outbound|2008||tcpmeshstatic.seexamples.svc": { + "name": "outbound|2008||tcpmeshstatic.seexamples.svc", + "type": 3, + "eds_cluster_config": { + "eds_config": { + "ConfigSourceSpecifier": { + "Ads": {} + } + }, + "service_name": "outbound|2008||tcpmeshstatic.seexamples.svc" + }, + "connect_timeout": 1000000000, + "load_assignment": { + "cluster_name": "outbound|2008||tcpmeshstatic.seexamples.svc", + "endpoints": null + }, + "circuit_breakers": { + "thresholds": [ + {} + ] + }, + "LbConfig": null + }, + "outbound|2009||tcpmeshstaticint.seexamples.svc": { + "name": "outbound|2009||tcpmeshstaticint.seexamples.svc", + "type": 3, + "eds_cluster_config": { + "eds_config": { + "ConfigSourceSpecifier": { + "Ads": {} + } + }, + "service_name": "outbound|2009||tcpmeshstaticint.seexamples.svc" + }, + "connect_timeout": 1000000000, + "load_assignment": { + "cluster_name": "outbound|2009||tcpmeshstaticint.seexamples.svc", + "endpoints": null + }, + "circuit_breakers": { + "thresholds": [ + {} + ] + }, + "LbConfig": null + } + } \ No newline at end of file diff --git a/pilot/pkg/proxy/envoy/v2/testdata/none_eds.json b/pilot/pkg/proxy/envoy/v2/testdata/none_eds.json new file mode 100644 index 000000000000..7e4710773a1a --- /dev/null +++ b/pilot/pkg/proxy/envoy/v2/testdata/none_eds.json @@ -0,0 +1,146 @@ +{ + "outbound|2000||s1tcp.none": { + "cluster_name": "outbound|2000||s1tcp.none", + "endpoints": [ + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "Address": { + 
"SocketAddress": { + "address": "10.11.0.1", + "PortSpecifier": { + "PortValue": 7070 + } + } + } + } + } + } + ], + "load_balancing_weight": { + "value": 1 + } + } + ] + }, + "outbound|2001||s1http.none": { + "cluster_name": "outbound|2001||s1http.none", + "endpoints": [ + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "Address": { + "SocketAddress": { + "address": "10.11.0.1", + "PortSpecifier": { + "PortValue": 7071 + } + } + } + } + } + } + ], + "load_balancing_weight": { + "value": 1 + } + } + ] + }, + "outbound|2005||s2.external.test.istio.io": { + "cluster_name": "outbound|2005||s2.external.test.istio.io", + "endpoints": [ + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "Address": { + "SocketAddress": { + "address": "10.11.0.2", + "PortSpecifier": { + "PortValue": 7071 + } + } + } + } + } + }, + { + "endpoint": { + "address": { + "Address": { + "SocketAddress": { + "address": "10.11.0.3", + "PortSpecifier": { + "PortValue": 7072 + } + } + } + } + } + } + ], + "load_balancing_weight": { + "value": 2 + } + } + ] + }, + "outbound|2008||tcpmeshstatic.seexamples.svc": { + "cluster_name": "outbound|2008||tcpmeshstatic.seexamples.svc", + "endpoints": [ + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "Address": { + "SocketAddress": { + "address": "10.11.0.8", + "PortSpecifier": { + "PortValue": 7070 + } + } + } + } + } + } + ], + "load_balancing_weight": { + "value": 1 + } + } + ] + }, + "outbound|2009||tcpmeshstaticint.seexamples.svc": { + "cluster_name": "outbound|2009||tcpmeshstaticint.seexamples.svc", + "endpoints": [ + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "Address": { + "SocketAddress": { + "address": "10.11.0.9", + "PortSpecifier": { + "PortValue": 7070 + } + } + } + } + } + } + ], + "load_balancing_weight": { + "value": 1 + } + } + ] + } + } \ No newline at end of file diff --git a/pilot/pkg/proxy/envoy/v2/testdata/none_lds_http.json b/pilot/pkg/proxy/envoy/v2/testdata/none_lds_http.json new file 
mode 100644 index 000000000000..e12825923d31 --- /dev/null +++ b/pilot/pkg/proxy/envoy/v2/testdata/none_lds_http.json @@ -0,0 +1,1231 @@ +{ + "0.0.0.0_7071": { + "name": "0.0.0.0_7071", + "address": { + "Address": { + "SocketAddress": { + "address": "0.0.0.0", + "PortSpecifier": { + "PortValue": 7071 + } + } + } + }, + "filter_chains": [ + { + "filters": [ + { + "name": "envoy.http_connection_manager", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "forward_client_cert_details": { + "Kind": { + "StringValue": "APPEND_FORWARD" + } + }, + "generate_request_id": { + "Kind": { + "BoolValue": true + } + }, + "http_filters": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "default_destination_service": { + "Kind": { + "StringValue": "default" + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "inbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + }, + "destination.ip": { + "Kind": { + "StructValue": { + "fields": { + "bytes_value": { + "Kind": { + "StringValue": "AAAAAAAAAAAAAP//AAAAAA==" + } + } + } + } + } + }, + "destination.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } 
+ }, + "destination.port": { + "Kind": { + "StructValue": { + "fields": { + "int64_value": { + "Kind": { + "StringValue": "7071" + } + } + } + } + } + }, + "destination.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "service_configs": { + "Kind": { + "StructValue": { + "fields": { + "default": { + "Kind": { + "StructValue": {} + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + "Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "mixer" + } + } + } + } + } + }, + { + "Kind": { + "StructValue": { + "fields": { + "name": { + "Kind": { + "StringValue": "envoy.cors" + } + } + } + } + } + }, + { + "Kind": { + "StructValue": { + "fields": { + "name": { + "Kind": { + "StringValue": "envoy.fault" + } + } + } + } + } + }, + { + "Kind": { + "StructValue": { + "fields": { + "name": { + "Kind": { + "StringValue": "envoy.router" + } + } + } + } + } + } + ] + } + } + }, + "route_config": { + "Kind": { + "StructValue": { + "fields": { + "name": { + "Kind": { + "StringValue": "inbound|7071|httplocal|s1http.none" + } + }, + "validate_clusters": { + "Kind": { + "BoolValue": false + } + }, + "virtual_hosts": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "domains": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StringValue": "*" + } + } + ] + } + } + }, + "name": { + "Kind": { + "StringValue": "inbound|http|7071" + } + }, + "routes": { + "Kind": { + "ListValue": { + "values": 
[ + { + "Kind": { + "StructValue": { + "fields": { + "decorator": { + "Kind": { + "StructValue": { + "fields": { + "operation": { + "Kind": { + "StringValue": "s1http.none:7071/*" + } + } + } + } + } + }, + "match": { + "Kind": { + "StructValue": { + "fields": { + "prefix": { + "Kind": { + "StringValue": "/" + } + } + } + } + } + }, + "per_filter_config": { + "Kind": { + "StructValue": { + "fields": { + "mixer": { + "Kind": { + "StructValue": { + "fields": { + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s1http.none" + } + } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s1http.none" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "route": { + "Kind": { + "StructValue": { + "fields": { + "cluster": { + "Kind": { + "StringValue": "inbound|7071|httplocal|s1http.none" + } + }, + "max_grpc_timeout": { + "Kind": { + "StringValue": "0s" + } + }, + "timeout": { + "Kind": { + "StringValue": "0s" + } + } + } + } + } + } + } + } + } + } + ] + } + } + } + } + } + } + } + ] + } + } + } + } + } + } + }, + "server_name": { + "Kind": { + "StringValue": "istio-envoy" + } + }, + "set_current_client_cert_details": { + "Kind": { + "StructValue": { + "fields": { + "dns": { + "Kind": { + "BoolValue": true + } + }, + "subject": { + "Kind": { + "BoolValue": true + } + }, + "uri": { + "Kind": { + "BoolValue": true + } + } + } + } + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "0.0.0.0_7071" + } + }, + "stream_idle_timeout": { + "Kind": { + "StringValue": "0s" + } + }, + 
"tracing": { + "Kind": { + "StructValue": { + "fields": { + "client_sampling": { + "Kind": { + "StructValue": { + "fields": { + "value": { + "Kind": { + "NumberValue": 100 + } + } + } + } + } + }, + "overall_sampling": { + "Kind": { + "StructValue": { + "fields": { + "value": { + "Kind": { + "NumberValue": 100 + } + } + } + } + } + }, + "random_sampling": { + "Kind": { + "StructValue": { + "fields": { + "value": { + "Kind": { + "NumberValue": 100 + } + } + } + } + } + } + } + } + } + }, + "upgrade_configs": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "upgrade_type": { + "Kind": { + "StringValue": "websocket" + } + } + } + } + } + } + ] + } + } + }, + "use_remote_address": { + "Kind": { + "BoolValue": false + } + } + } + } + } + } + ] + } + ], + "listener_filters": null + }, + "127.0.0.1_15002": { + "name": "127.0.0.1_15002", + "address": { + "Address": { + "SocketAddress": { + "address": "127.0.0.1", + "PortSpecifier": { + "PortValue": 15002 + } + } + } + }, + "filter_chains": [ + { + "filters": [ + { + "name": "envoy.http_connection_manager", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "generate_request_id": { + "Kind": { + "BoolValue": true + } + }, + "http_filters": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "name": { + "Kind": { + "StringValue": "envoy.cors" + } + } + } + } + } + }, + { + "Kind": { + "StructValue": { + "fields": { + "name": { + "Kind": { + "StringValue": "envoy.fault" + } + } + } + } + } + }, + { + "Kind": { + "StructValue": { + "fields": { + "name": { + "Kind": { + "StringValue": 
"envoy.router" + } + } + } + } + } + } + ] + } + } + }, + "http_protocol_options": { + "Kind": { + "StructValue": { + "fields": { + "allow_absolute_url": { + "Kind": { + "BoolValue": true + } + } + } + } + } + }, + "rds": { + "Kind": { + "StructValue": { + "fields": { + "config_source": { + "Kind": { + "StructValue": { + "fields": { + "ads": { + "Kind": { + "StructValue": {} + } + } + } + } + } + }, + "route_config_name": { + "Kind": { + "StringValue": "http_proxy" + } + } + } + } + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "127.0.0.1_15002" + } + }, + "stream_idle_timeout": { + "Kind": { + "StringValue": "0s" + } + }, + "tracing": { + "Kind": { + "StructValue": { + "fields": { + "client_sampling": { + "Kind": { + "StructValue": { + "fields": { + "value": { + "Kind": { + "NumberValue": 100 + } + } + } + } + } + }, + "operation_name": { + "Kind": { + "StringValue": "EGRESS" + } + }, + "overall_sampling": { + "Kind": { + "StructValue": { + "fields": { + "value": { + "Kind": { + "NumberValue": 100 + } + } + } + } + } + }, + "random_sampling": { + "Kind": { + "StructValue": { + "fields": { + "value": { + "Kind": { + "NumberValue": 100 + } + } + } + } + } + } + } + } + } + }, + "upgrade_configs": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "upgrade_type": { + "Kind": { + "StringValue": "websocket" + } + } + } + } + } + } + ] + } + } + }, + "use_remote_address": { + "Kind": { + "BoolValue": false + } + } + } + } + } + } + ] + } + ], + "listener_filters": null + }, + "127.0.0.1_2001": { + "name": "127.0.0.1_2001", + "address": { + "Address": { + "SocketAddress": { + "address": "127.0.0.1", + "PortSpecifier": { + "PortValue": 2001 + } + } + } + }, + "filter_chains": [ + { + "filters": [ + { + "name": "envoy.http_connection_manager", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": 
{ + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "generate_request_id": { + "Kind": { + "BoolValue": true + } + }, + "http_filters": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "default_destination_service": { + "Kind": { + "StringValue": "default" + } + }, + "forward_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "source.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "outbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + }, + "source.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "service_configs": { + "Kind": { + "StructValue": { + "fields": { + "default": { + "Kind": { + "StructValue": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + } + } + } + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + 
"Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + "Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "mixer" + } + } + } + } + } + }, + { + "Kind": { + "StructValue": { + "fields": { + "name": { + "Kind": { + "StringValue": "envoy.cors" + } + } + } + } + } + }, + { + "Kind": { + "StructValue": { + "fields": { + "name": { + "Kind": { + "StringValue": "envoy.fault" + } + } + } + } + } + }, + { + "Kind": { + "StructValue": { + "fields": { + "name": { + "Kind": { + "StringValue": "envoy.router" + } + } + } + } + } + } + ] + } + } + }, + "rds": { + "Kind": { + "StructValue": { + "fields": { + "config_source": { + "Kind": { + "StructValue": { + "fields": { + "ads": { + "Kind": { + "StructValue": {} + } + } + } + } + } + }, + "route_config_name": { + "Kind": { + "StringValue": "2001" + } + } + } + } + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "127.0.0.1_2001" + } + }, + "stream_idle_timeout": { + "Kind": { + "StringValue": "0s" + } + }, + "tracing": { + "Kind": { + "StructValue": { + "fields": { + "client_sampling": { + "Kind": { + "StructValue": { + "fields": { + "value": { + "Kind": { + "NumberValue": 100 + } + } + } + } + } + }, + "operation_name": { + "Kind": { + "StringValue": "EGRESS" + } + }, + "overall_sampling": { + "Kind": { + "StructValue": { + "fields": { + "value": { + "Kind": { + "NumberValue": 100 + } + } + } + } + } + }, + "random_sampling": { + "Kind": { + "StructValue": { + "fields": { + "value": { + "Kind": { + "NumberValue": 100 + } + } + } + } + } + } + } + } + } + }, + "upgrade_configs": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "upgrade_type": { + "Kind": { + "StringValue": 
"websocket" + } + } + } + } + } + } + ] + } + } + }, + "use_remote_address": { + "Kind": { + "BoolValue": false + } + } + } + } + } + } + ] + } + ], + "listener_filters": null + } + } \ No newline at end of file diff --git a/pilot/pkg/proxy/envoy/v2/testdata/none_lds_tcp.json b/pilot/pkg/proxy/envoy/v2/testdata/none_lds_tcp.json new file mode 100644 index 000000000000..7997422930f7 --- /dev/null +++ b/pilot/pkg/proxy/envoy/v2/testdata/none_lds_tcp.json @@ -0,0 +1,2050 @@ +{ + "0.0.0.0_7070": { + "name": "0.0.0.0_7070", + "address": { + "Address": { + "SocketAddress": { + "address": "0.0.0.0", + "PortSpecifier": { + "PortValue": 7070 + } + } + } + }, + "filter_chains": [ + { + "filters": [ + { + "name": "mixer", + "ConfigType": { + "Config": { + "fields": { + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "inbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + }, + "destination.ip": { + "Kind": { + "StructValue": { + "fields": { + "bytes_value": { + "Kind": { + "StringValue": "AAAAAAAAAAAAAP//AAAAAA==" + } + } + } + } + } + }, + "destination.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "destination.port": { + "Kind": { + "StructValue": { + "fields": { + "int64_value": { + "Kind": { + "StringValue": "7070" + } + } + } + } + } + }, + "destination.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + "Kind": { + 
"StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + "Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + { + "name": "envoy.tcp_proxy", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "cluster": { + "Kind": { + "StringValue": "inbound|7070|tcplocal|s1http.none" + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "inbound|7070|tcplocal|s1http.none" + } + } + } + } + } + } + ] + } + ], + "listener_filters": null + }, + "127.0.0.1_2000": { + "name": "127.0.0.1_2000", + "address": { + "Address": { + "SocketAddress": { + "address": "127.0.0.1", + "PortSpecifier": { + "PortValue": 2000 + } + } + } + }, + "filter_chains": [ + { + "filters": [ + { + "name": "mixer", + "ConfigType": { + "Config": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "outbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + }, + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s1tcp.none" + } 
+ } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s1tcp.none" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + "Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + { + "name": "envoy.tcp_proxy", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "cluster": { + "Kind": { + "StringValue": "outbound|2000||s1tcp.none" + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "outbound|2000||s1tcp.none" + } + } + } + } + } + } + ] + } + ], + "listener_filters": null + }, + "127.0.0.1_2005": { + "name": "127.0.0.1_2005", + "address": { + "Address": { + "SocketAddress": { + "address": "127.0.0.1", + "PortSpecifier": { + "PortValue": 2005 + } + } + } + }, + 
"filter_chains": [ + { + "filters": [ + { + "name": "mixer", + "ConfigType": { + "Config": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "outbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + }, + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s2.external.test.istio.io" + } + } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s2.external.test.istio.io" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + "Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + { + "name": "envoy.tcp_proxy", + 
"ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "cluster": { + "Kind": { + "StringValue": "outbound|2005||s2.external.test.istio.io" + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "outbound|2005||s2.external.test.istio.io" + } + } + } + } + } + } + ] + } + ], + "listener_filters": null + }, + "127.0.0.1_2006": { + "name": "127.0.0.1_2006", + "address": { + "Address": { + "SocketAddress": { + "address": "127.0.0.1", + "PortSpecifier": { + "PortValue": 2006 + } + } + } + }, + "filter_chains": [ + { + "filters": [ + { + "name": "mixer", + "ConfigType": { + "Config": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "outbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + }, + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s2dns.external.test.istio.io" + } + } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s2dns.external.test.istio.io" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } 
+ } + }, + "source.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + "Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + { + "name": "envoy.tcp_proxy", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "cluster": { + "Kind": { + "StringValue": "outbound|2006||s2dns.external.test.istio.io" + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "outbound|2006||s2dns.external.test.istio.io" + } + } + } + } + } + } + ] + } + ], + "listener_filters": null + }, + "127.0.0.1_2007": { + "name": "127.0.0.1_2007", + "address": { + "Address": { + "SocketAddress": { + "address": "127.0.0.1", + "PortSpecifier": { + "PortValue": 2007 + } + } + } + }, + "filter_chains": [ + { + "filters": [ + { + "name": "mixer", + "ConfigType": { + "Config": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + 
"context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "outbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + }, + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "tcpmeshdns.seexamples.svc" + } + } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "tcpmeshdns.seexamples.svc" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + "Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + { + "name": "envoy.tcp_proxy", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { 
+ "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "cluster": { + "Kind": { + "StringValue": "outbound|2007||tcpmeshdns.seexamples.svc" + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "outbound|2007||tcpmeshdns.seexamples.svc" + } + } + } + } + } + } + ] + } + ], + "listener_filters": null + }, + "127.0.0.1_2008": { + "name": "127.0.0.1_2008", + "address": { + "Address": { + "SocketAddress": { + "address": "127.0.0.1", + "PortSpecifier": { + "PortValue": 2008 + } + } + } + }, + "filter_chains": [ + { + "filters": [ + { + "name": "mixer", + "ConfigType": { + "Config": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "outbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + }, + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "tcpmeshstatic.seexamples.svc" + } + } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "tcpmeshstatic.seexamples.svc" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + 
} + } + } + } + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + "Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + { + "name": "envoy.tcp_proxy", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "cluster": { + "Kind": { + "StringValue": "outbound|2008||tcpmeshstatic.seexamples.svc" + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "outbound|2008||tcpmeshstatic.seexamples.svc" + } + } + } + } + } + } + ] + } + ], + "listener_filters": null + }, + "127.0.0.1_2009": { + "name": "127.0.0.1_2009", + "address": { + "Address": { + "SocketAddress": { + "address": "127.0.0.1", + "PortSpecifier": { + "PortValue": 2009 + } + } + } + }, + "filter_chains": [ + { + "filters": [ + { + "name": "mixer", + "ConfigType": { + "Config": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "outbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } 
+ } + } + }, + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "tcpmeshstaticint.seexamples.svc" + } + } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "tcpmeshstaticint.seexamples.svc" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + "Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + { + "name": "envoy.tcp_proxy", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "cluster": { + "Kind": { + "StringValue": "outbound|2009||tcpmeshstaticint.seexamples.svc" + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "outbound|2009||tcpmeshstaticint.seexamples.svc" + } + } + } + } + } + } 
+ ] + } + ], + "listener_filters": null + }, + "127.0.0.1_443": { + "name": "127.0.0.1_443", + "address": { + "Address": { + "SocketAddress": { + "address": "127.0.0.1", + "PortSpecifier": { + "PortValue": 443 + } + } + } + }, + "filter_chains": [ + { + "filter_chain_match": { + "server_names": [ + "www1.googleapis.com" + ] + }, + "filters": [ + { + "name": "mixer", + "ConfigType": { + "Config": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "outbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + }, + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "www1.googleapis.com" + } + } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "www1.googleapis.com" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + 
"Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + { + "name": "envoy.tcp_proxy", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "cluster": { + "Kind": { + "StringValue": "outbound|443||www1.googleapis.com" + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "outbound|443||www1.googleapis.com" + } + } + } + } + } + } + ] + }, + { + "filter_chain_match": { + "server_names": [ + "api1.facebook.com" + ] + }, + "filters": [ + { + "name": "mixer", + "ConfigType": { + "Config": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "context.reporter.kind": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "outbound" + } + } + } + } + } + }, + "context.reporter.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + }, + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "www1.googleapis.com" + } + } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "www1.googleapis.com" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + 
"StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + }, + "source.uid": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "kubernetes://test-1.none" + } + } + } + } + } + } + } + } + } + } + } + } + } + }, + "transport": { + "Kind": { + "StructValue": { + "fields": { + "check_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + }, + "network_fail_policy": { + "Kind": { + "StructValue": { + "fields": { + "policy": { + "Kind": { + "StringValue": "FAIL_CLOSE" + } + } + } + } + } + }, + "report_cluster": { + "Kind": { + "StringValue": "outbound|9091||istio-mixer.istio-system" + } + } + } + } + } + } + } + } + } + }, + { + "name": "envoy.tcp_proxy", + "ConfigType": { + "Config": { + "fields": { + "access_log": { + "Kind": { + "ListValue": { + "values": [ + { + "Kind": { + "StructValue": { + "fields": { + "config": { + "Kind": { + "StructValue": { + "fields": { + "path": { + "Kind": { + "StringValue": "/dev/stdout" + } + } + } + } + } + }, + "name": { + "Kind": { + "StringValue": "envoy.file_access_log" + } + } + } + } + } + } + ] + } + } + }, + "cluster": { + "Kind": { + "StringValue": "outbound|443||api1.facebook.com" + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "outbound|443||api1.facebook.com" + } + } + } + } + } + } + ] + } + ], + "listener_filters": [ + { + "name": "envoy.listener.tls_inspector", + "ConfigType": null + } + ] + }, + "virtual": { + "name": "virtual", + "address": { + "Address": { + "SocketAddress": { + "address": "0.0.0.0", + "PortSpecifier": { + "PortValue": 15001 + } + } + } + }, + "filter_chains": [ + { + "filters": [ + { + "name": "envoy.tcp_proxy", + "ConfigType": { + "Config": { + "fields": { + "cluster": { + "Kind": { + "StringValue": 
"PassthroughCluster" + } + }, + "stat_prefix": { + "Kind": { + "StringValue": "PassthroughCluster" + } + } + } + } + } + } + ] + } + ], + "use_original_dst": { + "value": true + }, + "listener_filters": null + } + } \ No newline at end of file diff --git a/pilot/pkg/proxy/envoy/v2/testdata/none_rds.json b/pilot/pkg/proxy/envoy/v2/testdata/none_rds.json new file mode 100644 index 000000000000..d87ac2d0e6f0 --- /dev/null +++ b/pilot/pkg/proxy/envoy/v2/testdata/none_rds.json @@ -0,0 +1,208 @@ +{ + "2001": { + "name": "2001", + "virtual_hosts": [ + { + "name": "s1http.none:2001", + "domains": [ + "s1http.none", + "s1http.none:2001" + ], + "routes": [ + { + "match": { + "PathSpecifier": { + "Prefix": "/" + } + }, + "Action": { + "Route": { + "ClusterSpecifier": { + "Cluster": "outbound|2001||s1http.none" + }, + "HostRewriteSpecifier": null, + "timeout": 0, + "max_grpc_timeout": 0 + } + }, + "decorator": { + "operation": "s1http.none:2001/*" + }, + "per_filter_config": { + "mixer": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s1http.none" + } + } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s1http.none" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + ] + } + ], + "validate_clusters": {} + }, + "7071": { + "name": "7071", + "virtual_hosts": null, + "validate_clusters": {} + }, + "http_proxy": { + "name": "http_proxy", + "virtual_hosts": [ + { + "name": "s1http.none:2001", + "domains": [ + 
"s1http.none:2001" + ], + "routes": [ + { + "match": { + "PathSpecifier": { + "Prefix": "/" + } + }, + "Action": { + "Route": { + "ClusterSpecifier": { + "Cluster": "outbound|2001||s1http.none" + }, + "HostRewriteSpecifier": null, + "timeout": 0, + "max_grpc_timeout": 0 + } + }, + "decorator": { + "operation": "s1http.none:2001/*" + }, + "per_filter_config": { + "mixer": { + "fields": { + "disable_check_calls": { + "Kind": { + "BoolValue": true + } + }, + "mixer_attributes": { + "Kind": { + "StructValue": { + "fields": { + "attributes": { + "Kind": { + "StructValue": { + "fields": { + "destination.service.host": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s1http.none" + } + } + } + } + } + }, + "destination.service.name": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "s1http.none" + } + } + } + } + } + }, + "destination.service.namespace": { + "Kind": { + "StructValue": { + "fields": { + "string_value": { + "Kind": { + "StringValue": "none" + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + } + ] + } + ], + "validate_clusters": {} + } + } \ No newline at end of file diff --git a/pilot/pkg/proxy/envoy/v2/xds_test.go b/pilot/pkg/proxy/envoy/v2/xds_test.go index d0ff1eeed3f5..3054e9a5144d 100644 --- a/pilot/pkg/proxy/envoy/v2/xds_test.go +++ b/pilot/pkg/proxy/envoy/v2/xds_test.go @@ -99,10 +99,14 @@ func startEnvoy(t *testing.T) { t.Fatal("Can't read bootstrap template", err) } testEnv.EnvoyTemplate = string(tmplB) + testEnv.Dir = env.IstioSrc nodeID := sidecarID(app3Ip, "app3") testEnv.EnvoyParams = []string{"--service-cluster", "serviceCluster", "--service-node", nodeID} testEnv.EnvoyConfigOpt = map[string]interface{}{ - "NodeID": nodeID, + "NodeID": nodeID, + "BaseDir": env.IstioSrc + "/tests/testdata/local", + // Same value used in the real template + "meta_json_str": fmt.Sprintf(`"BASE": "%s"`, env.IstioSrc+"/tests/testdata/local"), } // Mixer 
will push stats every 1 sec @@ -134,7 +138,9 @@ func initLocalPilotTestEnv(t *testing.T) (*bootstrap.Server, util.TearDownFunc) initMutex.Lock() defer initMutex.Unlock() - server, tearDown := util.EnsureTestServer() + server, tearDown := util.EnsureTestServer(func(args *bootstrap.PilotArgs) { + args.Plugins = bootstrap.DefaultPlugins + }) testEnv = testenv.NewTestSetup(testenv.XDSTest, t) testEnv.Ports().PilotGrpcPort = uint16(util.MockPilotGrpcPort) testEnv.Ports().PilotHTTPPort = uint16(util.MockPilotHTTPPort) diff --git a/pilot/pkg/serviceregistry/aggregate/controller_test.go b/pilot/pkg/serviceregistry/aggregate/controller_test.go index 2dce0a2faa1b..0e7b6df2ce9b 100644 --- a/pilot/pkg/serviceregistry/aggregate/controller_test.go +++ b/pilot/pkg/serviceregistry/aggregate/controller_test.go @@ -425,7 +425,7 @@ func TestGetIstioServiceAccounts(t *testing.T) { for i := 0; i < len(accounts); i++ { if accounts[i] != expected[i] { - t.Fatal("Returned account result does not match expected one") + t.Fatal("Returned account result does not match expected one", accounts[i], expected[i]) } } } diff --git a/pilot/pkg/serviceregistry/consul/controller.go b/pilot/pkg/serviceregistry/consul/controller.go index 3146ee63282c..c8f7443c1b9a 100644 --- a/pilot/pkg/serviceregistry/consul/controller.go +++ b/pilot/pkg/serviceregistry/consul/controller.go @@ -17,6 +17,8 @@ package consul import ( "time" + "istio.io/istio/pkg/spiffe" + "github.com/hashicorp/consul/api" "istio.io/istio/pilot/pkg/model" @@ -209,6 +211,6 @@ func (c *Controller) GetIstioServiceAccounts(hostname model.Hostname, ports []in // Follow - https://goo.gl/Dt11Ct return []string{ - "spiffe://cluster.local/ns/default/sa/default", + spiffe.MustGenSpiffeURI("default", "default"), } } diff --git a/pilot/pkg/serviceregistry/kube/controller.go b/pilot/pkg/serviceregistry/kube/controller.go index 7edc3eb7071e..286896078e2e 100644 --- a/pilot/pkg/serviceregistry/kube/controller.go +++ 
b/pilot/pkg/serviceregistry/kube/controller.go @@ -82,6 +82,9 @@ type ControllerOptions struct { // XDSUpdater will push changes to the xDS server. XDSUpdater model.XDSUpdater + // TrustDomain used in SPIFFE identity + TrustDomain string + stop chan struct{} } @@ -450,7 +453,7 @@ func (c *Controller) InstancesByPort(hostname model.Hostname, reqSvcPort int, az, sa, uid := "", "", "" if pod != nil { az = c.GetPodAZ(pod) - sa = kubeToIstioServiceAccount(pod.Spec.ServiceAccountName, pod.GetNamespace(), c.domainSuffix) + sa = kubeToIstioServiceAccount(pod.Spec.ServiceAccountName, pod.GetNamespace()) uid = fmt.Sprintf("kubernetes://%s.%s", pod.Name, pod.Namespace) } @@ -586,7 +589,7 @@ func getEndpoints(ip string, c *Controller, port v1.EndpointPort, svcPort *model az, sa := "", "" if pod != nil { az = c.GetPodAZ(pod) - sa = kubeToIstioServiceAccount(pod.Spec.ServiceAccountName, pod.GetNamespace(), c.domainSuffix) + sa = kubeToIstioServiceAccount(pod.Spec.ServiceAccountName, pod.GetNamespace()) } return &model.ServiceInstance{ Endpoint: model.NetworkEndpoint{ @@ -774,7 +777,7 @@ func (c *Controller) updateEDS(ep *v1.Endpoints) { ServicePortName: port.Name, Labels: labels, UID: uid, - ServiceAccount: kubeToIstioServiceAccount(pod.Spec.ServiceAccountName, pod.GetNamespace(), c.domainSuffix), + ServiceAccount: kubeToIstioServiceAccount(pod.Spec.ServiceAccountName, pod.GetNamespace()), Network: c.endpointNetwork(ea.IP), }) } diff --git a/pilot/pkg/serviceregistry/kube/controller_test.go b/pilot/pkg/serviceregistry/kube/controller_test.go index c45220ce538a..c7eb3820a1d7 100644 --- a/pilot/pkg/serviceregistry/kube/controller_test.go +++ b/pilot/pkg/serviceregistry/kube/controller_test.go @@ -24,6 +24,7 @@ import ( "time" v1 "k8s.io/api/core/v1" + meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/util/intstr" "k8s.io/client-go/kubernetes" @@ -32,6 +33,7 @@ import ( meshconfig "istio.io/api/mesh/v1alpha1" "istio.io/istio/pilot/pkg/model" 
"istio.io/istio/pkg/log" + "istio.io/istio/pkg/spiffe" "istio.io/istio/pkg/test" "istio.io/istio/pkg/test/env" ) @@ -136,7 +138,6 @@ func (fx *FakeXdsUpdater) Clear() { case <-fx.Events: default: wait = false - break } } } @@ -468,6 +469,10 @@ func TestGetProxyServiceInstances(t *testing.T) { } func TestController_GetIstioServiceAccounts(t *testing.T) { + oldTrustDomain := spiffe.GetTrustDomain() + spiffe.SetTrustDomain(domainSuffix) + defer spiffe.SetTrustDomain(oldTrustDomain) + controller, fx := newFakeController(t) defer controller.Stop() @@ -833,17 +838,6 @@ func TestController_ExternalNameService(t *testing.T) { Resolution: model.DNSLB, }, } - var expectedInstanceList []*model.ServiceInstance - for i, svc := range expectedSvcList { - expectedInstanceList = append(expectedInstanceList, &model.ServiceInstance{ - Endpoint: model.NetworkEndpoint{ - Address: k8sSvcs[i].Spec.ExternalName, - Port: svc.Ports[0].Port, - ServicePort: svc.Ports[0], - }, - Service: svc, - }) - } svcList, _ := controller.Services() if len(svcList) != len(expectedSvcList) { diff --git a/pilot/pkg/serviceregistry/kube/conversion.go b/pilot/pkg/serviceregistry/kube/conversion.go index caba940b03f2..8d47f8b9004f 100644 --- a/pilot/pkg/serviceregistry/kube/conversion.go +++ b/pilot/pkg/serviceregistry/kube/conversion.go @@ -21,13 +21,14 @@ import ( "strings" multierror "github.com/hashicorp/go-multierror" - v1 "k8s.io/api/core/v1" - meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/apimachinery/pkg/util/intstr" networking "istio.io/api/networking/v1alpha3" "istio.io/istio/pilot/pkg/model" - "istio.io/istio/pkg/features/pilot" + "istio.io/istio/pkg/spiffe" + + v1 "k8s.io/api/core/v1" + meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/intstr" ) const ( @@ -43,8 +44,10 @@ const ( // are allowed to run this service. 
CanonicalServiceAccountsAnnotation = "alpha.istio.io/canonical-serviceaccounts" - // istioURIPrefix is the URI prefix in the Istio service account scheme - istioURIPrefix = "spiffe" + // ServiceConfigScopeAnnotation configs the scope the service visible to. + // "PUBLIC" which is the default, indicates it is reachable within the mesh + // "PRIVATE" indicates it is reachable within its namespace + ServiceConfigScopeAnnotation = "networking.istio.io/configScope" managementPortPrefix = "mgmt-" ) @@ -99,10 +102,10 @@ func convertService(svc v1.Service, domainSuffix string) *model.Service { } if svc.Annotations[KubeServiceAccountsOnVMAnnotation] != "" { for _, ksa := range strings.Split(svc.Annotations[KubeServiceAccountsOnVMAnnotation], ",") { - serviceaccounts = append(serviceaccounts, kubeToIstioServiceAccount(ksa, svc.Namespace, domainSuffix)) + serviceaccounts = append(serviceaccounts, kubeToIstioServiceAccount(ksa, svc.Namespace)) } } - if svc.Labels[pilot.ServiceConfigScopeAnnotation] == networking.ConfigScope_name[int32(networking.ConfigScope_PRIVATE)] { + if svc.Labels[ServiceConfigScopeAnnotation] == networking.ConfigScope_name[int32(networking.ConfigScope_PRIVATE)] { configScope = networking.ConfigScope_PRIVATE } } @@ -150,8 +153,8 @@ func serviceHostname(name, namespace, domainSuffix string) model.Hostname { } // kubeToIstioServiceAccount converts a K8s service account to an Istio service account -func kubeToIstioServiceAccount(saname string, ns string, domain string) string { - return fmt.Sprintf("%v://%v/ns/%v/sa/%v", istioURIPrefix, domain, ns, saname) +func kubeToIstioServiceAccount(saname string, ns string) string { + return spiffe.MustGenSpiffeURI(ns, saname) } // KeyFunc is the internal API key function that returns "namespace"/"name" or diff --git a/pilot/pkg/serviceregistry/kube/conversion_test.go b/pilot/pkg/serviceregistry/kube/conversion_test.go index cdf61dbdb500..e54e9415795a 100644 --- a/pilot/pkg/serviceregistry/kube/conversion_test.go +++ 
b/pilot/pkg/serviceregistry/kube/conversion_test.go @@ -24,6 +24,7 @@ import ( "k8s.io/apimachinery/pkg/util/intstr" "istio.io/istio/pilot/pkg/model" + "istio.io/istio/pkg/spiffe" ) var ( @@ -71,6 +72,10 @@ func TestServiceConversion(t *testing.T) { saC := "spiffe://accounts.google.com/serviceaccountC@cloudservices.gserviceaccount.com" saD := "spiffe://accounts.google.com/serviceaccountD@developer.gserviceaccount.com" + oldTrustDomain := spiffe.GetTrustDomain() + spiffe.SetTrustDomain(domainSuffix) + defer spiffe.SetTrustDomain(oldTrustDomain) + ip := "10.0.0.1" tnow := time.Now() diff --git a/pilot/pkg/serviceregistry/memory/discovery.go b/pilot/pkg/serviceregistry/memory/discovery.go index 16e6df8ba289..7cd80800f78b 100644 --- a/pilot/pkg/serviceregistry/memory/discovery.go +++ b/pilot/pkg/serviceregistry/memory/discovery.go @@ -19,6 +19,8 @@ import ( "net" "time" + "istio.io/istio/pkg/spiffe" + "istio.io/istio/pilot/pkg/model" ) @@ -261,8 +263,8 @@ func (sd *ServiceDiscovery) WorkloadHealthCheckInfo(addr string) model.ProbeList func (sd *ServiceDiscovery) GetIstioServiceAccounts(hostname model.Hostname, ports []int) []string { if hostname == "world.default.svc.cluster.local" { return []string{ - "spiffe://cluster.local/ns/default/sa/serviceaccount1", - "spiffe://cluster.local/ns/default/sa/serviceaccount2", + spiffe.MustGenSpiffeURI("default", "serviceaccount1"), + spiffe.MustGenSpiffeURI("default", "serviceaccount2"), } } return make([]string, 0) diff --git a/pilot/pkg/serviceregistry/memory/discovery_mock.go b/pilot/pkg/serviceregistry/memory/discovery_mock.go index 0553e8f55d26..5adb846a342e 100644 --- a/pilot/pkg/serviceregistry/memory/discovery_mock.go +++ b/pilot/pkg/serviceregistry/memory/discovery_mock.go @@ -43,7 +43,7 @@ var ( // HelloProxyV0 is a mock proxy v0 of HelloService HelloProxyV0 = model.Proxy{ - Type: model.Sidecar, + Type: model.SidecarProxy, IPAddresses: []string{HelloInstanceV0}, ID: "v0.default", DNSDomain: "default.svc.cluster.local", 
@@ -51,7 +51,7 @@ var ( // HelloProxyV1 is a mock proxy v1 of HelloService HelloProxyV1 = model.Proxy{ - Type: model.Sidecar, + Type: model.SidecarProxy, IPAddresses: []string{HelloInstanceV1}, ID: "v1.default", DNSDomain: "default.svc.cluster.local", diff --git a/pilot/pkg/serviceregistry/platform.go b/pilot/pkg/serviceregistry/platform.go index 16e157c612e4..7fae6ee5f7af 100644 --- a/pilot/pkg/serviceregistry/platform.go +++ b/pilot/pkg/serviceregistry/platform.go @@ -24,4 +24,6 @@ const ( KubernetesRegistry ServiceRegistry = "Kubernetes" // ConsulRegistry is a service registry backed by Consul ConsulRegistry ServiceRegistry = "Consul" + // MCPRegistry is a service registry backed by MCP ServiceEntries + MCPRegistry ServiceRegistry = "MCP" ) diff --git a/pkg/adsc/adsc.go b/pkg/adsc/adsc.go index 56849fc228a7..9b6d90864021 100644 --- a/pkg/adsc/adsc.go +++ b/pkg/adsc/adsc.go @@ -50,7 +50,10 @@ type Config struct { // NodeType defaults to sidecar. "ingress" and "router" are also supported. NodeType string - IP string + + // IP is currently the primary key used to locate inbound configs. It is sent by client, + // must match a known endpoint IP. Tests can use a ServiceEntry to register fake IPs. + IP string } // ADSC implements a basic client for ADS, for use in stress tests and tools @@ -65,18 +68,36 @@ type ADSC struct { // NodeID is the node identity sent to Pilot. nodeID string + done chan error + certDir string url string watchTime time.Time + // InitialLoad tracks the time to receive the initial configuration. InitialLoad time.Duration - TCPListeners map[string]*xdsapi.Listener + // HTTPListeners contains received listeners with a http_connection_manager filter. 
HTTPListeners map[string]*xdsapi.Listener - Clusters map[string]*xdsapi.Cluster - Routes map[string]*xdsapi.RouteConfiguration - EDS map[string]*xdsapi.ClusterLoadAssignment + + // TCPListeners contains all listeners of type TCP (not-HTTP) + TCPListeners map[string]*xdsapi.Listener + + // All received clusters of type EDS, keyed by name + EDSClusters map[string]*xdsapi.Cluster + + // All received clusters of no-EDS type, keyed by name + Clusters map[string]*xdsapi.Cluster + + // All received routes, keyed by route name + Routes map[string]*xdsapi.RouteConfiguration + + // All received endpoints, keyed by cluster name + EDS map[string]*xdsapi.ClusterLoadAssignment + + // DumpCfg will print all received config + DumpCfg bool // Metadata has the node metadata to send to pilot. // If nil, the defaults will be used. @@ -112,6 +133,7 @@ var ( // Dial connects to a ADS server, with optional MTLS authentication if a cert dir is specified. func Dial(url string, certDir string, opts *Config) (*ADSC, error) { adsc := &ADSC{ + done: make(chan error), Updates: make(chan string, 100), VersionInfo: map[string]string{}, certDir: certDir, @@ -129,6 +151,7 @@ func Dial(url string, certDir string, opts *Config) (*ADSC, error) { if opts.Workload == "" { opts.Workload = "test-1" } + adsc.Metadata = opts.Meta adsc.nodeID = fmt.Sprintf("sidecar~%s~%s.%s~%s.svc.cluster.local", opts.IP, opts.Workload, opts.Namespace, opts.Namespace) @@ -322,7 +345,11 @@ func (a *ADSC) handleLDS(ll []*xdsapi.Listener) { // Getting from config is too painful.. 
port := l.Address.GetSocketAddress().GetPortValue() - routes = append(routes, fmt.Sprintf("%d", port)) + if port == 15002 { + routes = append(routes, "http_proxy") + } else { + routes = append(routes, fmt.Sprintf("%d", port)) + } //log.Printf("HTTP: %s -> %d", l.Name, port) } else if f0.Name == "envoy.mongo_proxy" { // ignore for now @@ -335,6 +362,10 @@ func (a *ADSC) handleLDS(ll []*xdsapi.Listener) { } log.Println("LDS: http=", len(lh), "tcp=", len(lt), "size=", ldsSize) + if a.DumpCfg { + b, _ := json.MarshalIndent(ll, " ", " ") + log.Println(string(b)) + } a.mutex.Lock() defer a.mutex.Unlock() if len(routes) > 0 { @@ -349,19 +380,102 @@ func (a *ADSC) handleLDS(ll []*xdsapi.Listener) { } } +// compact representations, for simplified debugging/testing + +// TCPListener extracts the core elements from envoy Listener. +type TCPListener struct { + // Address is the address, as expected by go Dial and Listen, including port + Address string + + // LogFile is the access log address for the listener + LogFile string + + // Target is the destination cluster. + Target string +} + +type Target struct { + + // Address is a go address, extracted from the mangled cluster name. + Address string + + // Endpoints are the resolved endpoints from EDS or cluster static. 
+ Endpoints map[string]Endpoint +} + +type Endpoint struct { + // Weight extracted from EDS + Weight int +} + +// Save will save the json configs to files, using the base directory +func (a *ADSC) Save(base string) error { + strResponse, err := json.MarshalIndent(a.TCPListeners, " ", " ") + if err != nil { + return err + } + err = ioutil.WriteFile(base+"_lds_tcp.json", strResponse, 0644) + if err != nil { + return err + } + strResponse, err = json.MarshalIndent(a.HTTPListeners, " ", " ") + if err != nil { + return err + } + err = ioutil.WriteFile(base+"_lds_http.json", strResponse, 0644) + if err != nil { + return err + } + strResponse, err = json.MarshalIndent(a.Routes, " ", " ") + if err != nil { + return err + } + err = ioutil.WriteFile(base+"_rds.json", strResponse, 0644) + if err != nil { + return err + } + strResponse, err = json.MarshalIndent(a.EDSClusters, " ", " ") + if err != nil { + return err + } + err = ioutil.WriteFile(base+"_ecds.json", strResponse, 0644) + if err != nil { + return err + } + strResponse, err = json.MarshalIndent(a.Clusters, " ", " ") + if err != nil { + return err + } + err = ioutil.WriteFile(base+"_cds.json", strResponse, 0644) + if err != nil { + return err + } + strResponse, err = json.MarshalIndent(a.EDS, " ", " ") + if err != nil { + return err + } + err = ioutil.WriteFile(base+"_eds.json", strResponse, 0644) + if err != nil { + return err + } + + return err +} + func (a *ADSC) handleCDS(ll []*xdsapi.Cluster) { cn := []string{} cdsSize := 0 + edscds := map[string]*xdsapi.Cluster{} cds := map[string]*xdsapi.Cluster{} for _, c := range ll { cdsSize += c.Size() if c.Type != xdsapi.Cluster_EDS { - // TODO: save them + cds[c.Name] = c continue } cn = append(cn, c.Name) - cds[c.Name] = c + edscds[c.Name] = c } log.Println("CDS: ", len(cn), "size=", cdsSize) @@ -369,9 +483,14 @@ func (a *ADSC) handleCDS(ll []*xdsapi.Cluster) { if len(cn) > 0 { a.sendRsc(endpointType, cn) } + if a.DumpCfg { + b, _ := json.MarshalIndent(ll, " ", " ") + 
log.Println(string(b)) + } a.mutex.Lock() defer a.mutex.Unlock() + a.EDSClusters = edscds a.Clusters = cds select { @@ -402,16 +521,27 @@ func (a *ADSC) node() *core.Node { return n } +func (a *ADSC) Send(req *xdsapi.DiscoveryRequest) error { + req.Node = a.node() + req.ResponseNonce = time.Now().String() + return a.stream.Send(req) +} + func (a *ADSC) handleEDS(eds []*xdsapi.ClusterLoadAssignment) { la := map[string]*xdsapi.ClusterLoadAssignment{} edsSize := 0 + ep := 0 for _, cla := range eds { edsSize += cla.Size() la[cla.ClusterName] = cla + ep += len(cla.Endpoints) } - log.Println("EDS: ", len(eds), "size=", edsSize) - + log.Println("EDS: ", len(eds), "size=", edsSize, "ep=", ep) + if a.DumpCfg { + b, _ := json.MarshalIndent(eds, " ", " ") + log.Println(string(b)) + } if a.InitialLoad == 0 { // first load - Envoy loads listeners after endpoints a.stream.Send(&xdsapi.DiscoveryRequest{ @@ -446,7 +576,7 @@ func (a *ADSC) handleRDS(configurations []*xdsapi.RouteConfiguration) { for _, rt := range h.Routes { rcount++ // Example: match:
[Extraction-damaged hunks omitted: the remainder of the `handleRDS` hunk in pkg/adsc/adsc.go, plus diffs to generated HTML dashboard assets (`_assetsTemplatesCollectionListHtml`, `_assetsTemplatesMemHtml`) that rename the displayed field from `{{ $context.Collection }}` to `{{ $context.TypeURL }}` in the item and collection views, and a generated-docs fragment for the `payloadTemplate` field ("A golang text/template template that will be executed to construct the payload for this log entry. It will be given the full set of variables for the log to use to construct its result."). The surrounding HTML table markup did not survive extraction.]
- + {{ template "last-refresh" .}} diff --git a/pkg/features/pilot/pilot.go b/pkg/features/pilot/pilot.go index 8ed8dc0d671a..04ad3d145331 100644 --- a/pkg/features/pilot/pilot.go +++ b/pkg/features/pilot/pilot.go @@ -73,21 +73,16 @@ var ( // 'admin' namespaces. Using services from any other namespaces will require the new NetworkScope // config. In most cases 'istio-system' should be included. Comma separated (ns1,ns2,istio-system) NetworkScopes = os.Getenv("DEFAULT_NAMESPACE_DEPENDENCIES") -) - -const ( - // NodeMetadataNetwork defines the network the node belongs to. It is an optional metadata, - // set at injection time. When set, the Endpoints returned to a note and not on same network - // will be replaced with the gateway defined in the settings. - NodeMetadataNetwork = "NETWORK" + // BaseDir is the base directory for locating configs. + // File based certificates are located under $BaseDir/etc/certs/. If not set, the original 1.0 locations will + // be used, "/" + BaseDir = "BASE" +) - // AZLabel indicates the region/zone of an instance. It is used if the native - // registry doesn't provide one. - AZLabel = "istio-az" +var ( + // TODO: define all other default ports here, add docs - // ServiceConfigScopeAnnotation configs the scope the service visible to. - // "PUBLIC" which is the default, indicates it is reachable within the mesh - // "PRIVATE" indicates it is reachable within its namespace - ServiceConfigScopeAnnotation = "networking.istio.io/configScope" + // DefaultPortHTTPProxy is used as for HTTP PROXY mode. Can be overridden by ProxyHttpPort in mesh config. 
+ DefaultPortHTTPProxy = 15002 ) diff --git a/pkg/keepalive/options.go b/pkg/keepalive/options.go index 013cf07e0562..d3e24893bf39 100644 --- a/pkg/keepalive/options.go +++ b/pkg/keepalive/options.go @@ -15,25 +15,44 @@ package keepalive import ( + "math" "time" "github.com/spf13/cobra" ) +const ( + // Infinity is the maximum possible duration for keepalive values + Infinity = time.Duration(math.MaxInt64) +) + // Options defines the set of options used for grpc keepalive. +// The Time and Timeout options are used for both client and server connections, +// whereas MaxServerConnectionAge* options are applicable on the server side only +// (as implied by the options' name...) type Options struct { // After a duration of this time if the server/client doesn't see any activity it pings the peer to see if the transport is still alive. Time time.Duration // After having pinged for keepalive check, the server waits for a duration of Timeout and if no activity is seen even after that // the connection is closed. Timeout time.Duration + // MaxServerConnectionAge is a duration for the maximum amount of time a + // connection may exist before it will be closed by the server sending a GoAway. + // A random jitter is added to spread out connection storms. + // See https://github.com/grpc/grpc-go/blob/bd0b3b2aa2a9c87b323ee812359b0e9cda680dad/keepalive/keepalive.go#L49 + MaxServerConnectionAge time.Duration // default value is infinity + // MaxServerConnectionAgeGrace is an additive period after MaxServerConnectionAge + // after which the connection will be forcibly closed by the server. + MaxServerConnectionAgeGrace time.Duration // default value is infinity } // DefaultOption returns the default keepalive options. 
func DefaultOption() *Options { return &Options{ - Time: 30 * time.Second, - Timeout: 10 * time.Second, + Time: 30 * time.Second, + Timeout: 10 * time.Second, + MaxServerConnectionAge: Infinity, + MaxServerConnectionAgeGrace: Infinity, } } @@ -47,4 +66,9 @@ func (o *Options) AttachCobraFlags(cmd *cobra.Command) { cmd.PersistentFlags().DurationVar(&o.Timeout, "keepaliveTimeout", o.Timeout, "After having pinged for keepalive check, the client/server waits for a duration of keepaliveTimeout "+ "and if no activity is seen even after that the connection is closed.") + cmd.PersistentFlags().DurationVar(&o.MaxServerConnectionAge, "keepaliveMaxServerConnectionAge", + o.MaxServerConnectionAge, "Maximum duration a connection will be kept open on the server before a graceful close.") + cmd.PersistentFlags().DurationVar(&o.MaxServerConnectionAgeGrace, "keepaliveMaxServerConnectionAgeGrace", + o.MaxServerConnectionAgeGrace, "Grace duration allowed before a server connection is forcibly closed "+ + "after MaxServerConnectionAge expires.") } diff --git a/pkg/keepalive/options_test.go b/pkg/keepalive/options_test.go new file mode 100644 index 000000000000..dea8245c7d54 --- /dev/null +++ b/pkg/keepalive/options_test.go @@ -0,0 +1,48 @@ +package keepalive_test + +import ( + "bytes" + "fmt" + "testing" + "time" + + "github.com/spf13/cobra" + + "istio.io/istio/pkg/keepalive" +) + +// Test default maximum connection age is set to infinite, preserving previous +// unbounded lifetime behavior. +func TestAgeDefaultsToInfinite(t *testing.T) { + ko := keepalive.DefaultOption() + + if ko.MaxServerConnectionAge != keepalive.Infinity { + t.Errorf("%s maximum connection age %v", t.Name(), ko.MaxServerConnectionAge) + } else if ko.MaxServerConnectionAgeGrace != keepalive.Infinity { + t.Errorf("%s maximum connection age grace %v", t.Name(), ko.MaxServerConnectionAgeGrace) + } +} + +// Confirm maximum connection age parameters can be set from the command line. 
+func TestSetConnectionAgeCommandlineOptions(t *testing.T) { + ko := keepalive.DefaultOption() + cmd := &cobra.Command{} + ko.AttachCobraFlags(cmd) + + buf := new(bytes.Buffer) + cmd.SetOutput(buf) + sec := 1 * time.Second + cmd.SetArgs([]string{ + fmt.Sprintf("--keepaliveMaxServerConnectionAge=%v", sec), + fmt.Sprintf("--keepaliveMaxServerConnectionAgeGrace=%v", sec), + }) + + if err := cmd.Execute(); err != nil { + t.Errorf("%s %s", t.Name(), err.Error()) + } + if ko.MaxServerConnectionAge != sec { + t.Errorf("%s maximum connection age %v", t.Name(), ko.MaxServerConnectionAge) + } else if ko.MaxServerConnectionAgeGrace != sec { + t.Errorf("%s maximum connection age grace %v", t.Name(), ko.MaxServerConnectionAgeGrace) + } +} diff --git a/pkg/mcp/client/client.go b/pkg/mcp/client/client.go index ebedc7a73d0e..213c786b51d3 100644 --- a/pkg/mcp/client/client.go +++ b/pkg/mcp/client/client.go @@ -21,13 +21,14 @@ import ( "sync" "time" - "github.com/gogo/protobuf/proto" "github.com/gogo/protobuf/types" "github.com/gogo/status" "google.golang.org/grpc/codes" mcp "istio.io/api/mcp/v1alpha1" "istio.io/istio/pkg/log" + "istio.io/istio/pkg/mcp/monitoring" + "istio.io/istio/pkg/mcp/sink" ) var ( @@ -37,60 +38,6 @@ var ( scope = log.RegisterScope("mcp", "mcp debugging", 0) ) -// Object contains a decoded versioned object with metadata received from the server. -type Object struct { - TypeURL string - Metadata *mcp.Metadata - Resource proto.Message -} - -// Change is a collection of configuration objects of the same protobuf type. -type Change struct { - TypeURL string - Objects []*Object - - // TODO(ayj) add incremental add/remove enum when the mcp protocol supports it. -} - -// Updater provides configuration changes in batches of the same protobuf message type. -type Updater interface { - // Apply is invoked when the client receives new configuration updates - // from the server. 
The caller should return an error if any of the provided - // configuration resources are invalid or cannot be applied. The client will - // propagate errors back to the server accordingly. - Apply(*Change) error -} - -// InMemoryUpdater is an implementation of Updater that keeps a simple in-memory state. -type InMemoryUpdater struct { - items map[string][]*Object - itemsMutex sync.Mutex -} - -var _ Updater = &InMemoryUpdater{} - -// NewInMemoryUpdater returns a new instance of InMemoryUpdater -func NewInMemoryUpdater() *InMemoryUpdater { - return &InMemoryUpdater{ - items: make(map[string][]*Object), - } -} - -// Apply the change to the InMemoryUpdater. -func (u *InMemoryUpdater) Apply(c *Change) error { - u.itemsMutex.Lock() - defer u.itemsMutex.Unlock() - u.items[c.TypeURL] = c.Objects - return nil -} - -// Get current state for the given Type URL. -func (u *InMemoryUpdater) Get(typeURL string) []*Object { - u.itemsMutex.Lock() - defer u.itemsMutex.Unlock() - return u.items[typeURL] -} - type perTypeState struct { sync.Mutex lastVersion string @@ -120,94 +67,40 @@ func (s *perTypeState) version() string { // // - Decoding the received configuration updates and providing them to the user via a batched set of changes. type Client struct { - client mcp.AggregatedMeshConfigServiceClient - stream mcp.AggregatedMeshConfigService_StreamAggregatedResourcesClient - state map[string]*perTypeState - clientInfo *mcp.Client - updater Updater + client mcp.AggregatedMeshConfigServiceClient + stream mcp.AggregatedMeshConfigService_StreamAggregatedResourcesClient + state map[string]*perTypeState + nodeInfo *mcp.SinkNode + updater sink.Updater - journal recentRequestsJournal + journal *sink.RecentRequestsJournal metadata map[string]string - reporter MetricReporter -} - -// RecentRequestInfo is metadata about a request that the client has sent. 
-type RecentRequestInfo struct { - Time time.Time - Request *mcp.MeshConfigRequest -} - -// Acked indicates whether the message was an ack or not. -func (r RecentRequestInfo) Acked() bool { - return r.Request.ErrorDetail != nil -} - -// recentRequestsJournal captures debug metadata about the latest requests that was sent by this client. -type recentRequestsJournal struct { - itemsMutex sync.Mutex - items []RecentRequestInfo -} - -// MetricReporter is used to report metrics for an MCP client. -type MetricReporter interface { - RecordSendError(err error, code codes.Code) - RecordRecvError(err error, code codes.Code) - RecordRequestAck(typeURL string) - RecordRequestNack(typeURL string, err error) - RecordStreamCreateSuccess() -} - -func (r *recentRequestsJournal) record(req *mcp.MeshConfigRequest) { // nolint:interfacer - r.itemsMutex.Lock() - defer r.itemsMutex.Unlock() - - item := RecentRequestInfo{ - Time: time.Now(), - Request: proto.Clone(req).(*mcp.MeshConfigRequest), - } - - r.items = append(r.items, item) - for len(r.items) > 20 { - r.items = r.items[1:] - } -} - -func (r *recentRequestsJournal) snapshot() []RecentRequestInfo { - r.itemsMutex.Lock() - defer r.itemsMutex.Unlock() - - result := make([]RecentRequestInfo, len(r.items)) - copy(result, r.items) - - return result + reporter monitoring.Reporter } // New creates a new instance of the MCP client for the specified message types. 
-func New(mcpClient mcp.AggregatedMeshConfigServiceClient, supportedTypeURLs []string, updater Updater, id string, metadata map[string]string, reporter MetricReporter) *Client { // nolint: lll - clientInfo := &mcp.Client{ - Id: id, - Metadata: &types.Struct{ - Fields: map[string]*types.Value{}, - }, +func New(mcpClient mcp.AggregatedMeshConfigServiceClient, options *sink.Options) *Client { // nolint: lll + nodeInfo := &mcp.SinkNode{ + Id: options.ID, + Annotations: map[string]string{}, } - for k, v := range metadata { - clientInfo.Metadata.Fields[k] = &types.Value{ - Kind: &types.Value_StringValue{StringValue: v}, - } + for k, v := range options.Metadata { + nodeInfo.Annotations[k] = v } state := make(map[string]*perTypeState) - for _, typeURL := range supportedTypeURLs { - state[typeURL] = &perTypeState{} + for _, collection := range options.CollectionOptions { + state[collection.Name] = &perTypeState{} } return &Client{ - client: mcpClient, - state: state, - clientInfo: clientInfo, - updater: updater, - metadata: metadata, - reporter: reporter, + client: mcpClient, + state: state, + nodeInfo: nodeInfo, + updater: options.Updater, + metadata: options.Metadata, + reporter: options.Reporter, + journal: sink.NewRequestJournal(), } } @@ -215,12 +108,14 @@ func New(mcpClient mcp.AggregatedMeshConfigServiceClient, supportedTypeURLs []st var handleResponseDoneProbe = func() {} func (c *Client) sendNACKRequest(response *mcp.MeshConfigResponse, version string, err error) *mcp.MeshConfigRequest { + errorDetails, _ := status.FromError(err) + scope.Errorf("MCP: sending NACK for version=%v nonce=%v: error=%q", version, response.Nonce, err) - c.reporter.RecordRequestNack(response.TypeUrl, err) - errorDetails, _ := status.FromError(err) + c.reporter.RecordRequestNack(response.TypeUrl, 0, errorDetails.Code()) + req := &mcp.MeshConfigRequest{ - Client: c.clientInfo, + SinkNode: c.nodeInfo, TypeUrl: response.TypeUrl, VersionInfo: version, ResponseNonce: response.Nonce, @@ -234,33 
+129,28 @@ func (c *Client) handleResponse(response *mcp.MeshConfigResponse) *mcp.MeshConfi defer handleResponseDoneProbe() } - state, ok := c.state[response.TypeUrl] + collection := response.TypeUrl + + state, ok := c.state[collection] if !ok { - errDetails := status.Errorf(codes.Unimplemented, "unsupported type_url: %v", response.TypeUrl) + errDetails := status.Errorf(codes.Unimplemented, "unsupported collection: %v", collection) return c.sendNACKRequest(response, "", errDetails) } - change := &Change{ - TypeURL: response.TypeUrl, - Objects: make([]*Object, 0, len(response.Envelopes)), + change := &sink.Change{ + Collection: collection, + Objects: make([]*sink.Object, 0, len(response.Resources)), } - for _, envelope := range response.Envelopes { + for _, resource := range response.Resources { var dynamicAny types.DynamicAny - if err := types.UnmarshalAny(envelope.Resource, &dynamicAny); err != nil { + if err := types.UnmarshalAny(resource.Body, &dynamicAny); err != nil { return c.sendNACKRequest(response, state.version(), err) } - if response.TypeUrl != envelope.Resource.TypeUrl { - errDetails := status.Errorf(codes.InvalidArgument, - "response type_url(%v) does not match resource type_url(%v)", - response.TypeUrl, envelope.Resource.TypeUrl) - return c.sendNACKRequest(response, state.version(), errDetails) - } - - object := &Object{ - TypeURL: response.TypeUrl, - Metadata: envelope.Metadata, - Resource: dynamicAny.Message, + object := &sink.Object{ + TypeURL: resource.Body.TypeUrl, + Metadata: resource.Metadata, + Body: dynamicAny.Message, } change.Objects = append(change.Objects, object) } @@ -270,11 +160,11 @@ func (c *Client) handleResponse(response *mcp.MeshConfigResponse) *mcp.MeshConfi return c.sendNACKRequest(response, state.version(), errDetails) } - // ACK - c.reporter.RecordRequestAck(response.TypeUrl) + c.reporter.RecordRequestAck(collection, 0) + req := &mcp.MeshConfigRequest{ - Client: c.clientInfo, - TypeUrl: response.TypeUrl, + SinkNode: 
c.nodeInfo, + TypeUrl: collection, VersionInfo: response.VersionInfo, ResponseNonce: response.Nonce, } @@ -291,10 +181,10 @@ func (c *Client) Run(ctx context.Context) { // for rules to ensure stream resources are not leaked. initRequests := make([]*mcp.MeshConfigRequest, 0, len(c.state)) - for typeURL := range c.state { + for collection := range c.state { initRequests = append(initRequests, &mcp.MeshConfigRequest{ - Client: c.clientInfo, - TypeUrl: typeURL, + SinkNode: c.nodeInfo, + TypeUrl: collection, }) } @@ -349,7 +239,7 @@ func (c *Client) Run(ctx context.Context) { req = c.handleResponse(response) } - c.journal.record(req) + c.journal.RecordMeshConfigRequest(req) if err := c.stream.Send(req); err != nil { c.reporter.RecordSendError(err, status.Code(err)) @@ -365,8 +255,9 @@ func (c *Client) Run(ctx context.Context) { break } } else { - if req.ErrorDetail == nil && req.TypeUrl != "" { - if state, ok := c.state[req.TypeUrl]; ok { + collection := req.TypeUrl + if req.ErrorDetail == nil && collection != "" { + if state, ok := c.state[collection]; ok { state.setVersion(version) } } @@ -376,8 +267,8 @@ func (c *Client) Run(ctx context.Context) { } // SnapshotRequestInfo returns a snapshot of the last known set of request results. -func (c *Client) SnapshotRequestInfo() []RecentRequestInfo { - return c.journal.snapshot() +func (c *Client) SnapshotRequestInfo() []sink.RecentRequestInfo { + return c.journal.Snapshot() } // Metadata that is originally supplied when creating this client. @@ -392,11 +283,11 @@ func (c *Client) Metadata() map[string]string { // ID is the node id for this client. func (c *Client) ID() string { - return c.clientInfo.Id + return c.nodeInfo.Id } -// SupportedTypeURLs returns the TypeURLs that this client requests. -func (c *Client) SupportedTypeURLs() []string { +// Collections returns the collections that this client requests. 
+func (c *Client) Collections() []string { result := make([]string, 0, len(c.state)) for k := range c.state { diff --git a/pkg/mcp/client/client_test.go b/pkg/mcp/client/client_test.go index e86418129e02..ca4d95d7c597 100644 --- a/pkg/mcp/client/client_test.go +++ b/pkg/mcp/client/client_test.go @@ -26,18 +26,20 @@ import ( "time" "github.com/gogo/protobuf/proto" - "github.com/gogo/protobuf/types" "github.com/gogo/status" + "github.com/google/go-cmp/cmp" "google.golang.org/grpc" "google.golang.org/grpc/codes" mcp "istio.io/api/mcp/v1alpha1" - mcptestmon "istio.io/istio/pkg/mcp/testing/monitoring" + "istio.io/istio/pkg/mcp/internal/test" + "istio.io/istio/pkg/mcp/sink" + "istio.io/istio/pkg/mcp/testing/monitoring" ) type testStream struct { sync.Mutex - change map[string]*Change + change map[string]*sink.Change requestC chan *mcp.MeshConfigRequest // received from client responseC chan *mcp.MeshConfigResponse // to-be-sent to client @@ -55,7 +57,7 @@ func newTestStream() *testStream { requestC: make(chan *mcp.MeshConfigRequest, 10), responseC: make(chan *mcp.MeshConfigResponse, 10), responseClosedC: make(chan struct{}, 10), - change: make(map[string]*Change), + change: make(map[string]*sink.Change), } } @@ -116,13 +118,13 @@ func (ts *testStream) Recv() (*mcp.MeshConfigResponse, error) { } } -func (ts *testStream) Apply(change *Change) error { +func (ts *testStream) Apply(change *sink.Change) error { if ts.updateError { return errors.New("update error") } ts.Lock() defer ts.Unlock() - ts.change[change.TypeURL] = change + ts.change[change.Collection] = change return nil } @@ -139,121 +141,12 @@ func checkRequest(got *mcp.MeshConfigRequest, want *mcp.MeshConfigRequest) error return nil } -var _ Updater = &testStream{} +var _ sink.Updater = &testStream{} -// fake protobuf types - -type fakeTypeBase struct{ Info string } - -func (f *fakeTypeBase) Reset() {} -func (f *fakeTypeBase) String() string { return f.Info } -func (f *fakeTypeBase) ProtoMessage() {} -func (f 
*fakeTypeBase) Marshal() ([]byte, error) { return []byte(f.Info), nil } -func (f *fakeTypeBase) Unmarshal(in []byte) error { - f.Info = string(in) - return nil -} - -type fakeType0 struct{ fakeTypeBase } -type fakeType1 struct{ fakeTypeBase } -type fakeType2 struct{ fakeTypeBase } - -type unmarshalErrorType struct{ fakeTypeBase } - -func (f *unmarshalErrorType) Unmarshal(in []byte) error { return errors.New("unmarshal_error") } - -const ( - typePrefix = "type.googleapis.com/" - fakeType0MessageName = "istio.io.galley.pkg.mcp.server.fakeType0" - fakeType1MessageName = "istio.io.galley.pkg.mcp.server.fakeType1" - fakeType2MessageName = "istio.io.galley.pkg.mcp.server.fakeType2" - unmarshalErrorMessageName = "istio.io.galley.pkg.mcp.server.unmarshalErrorType" - - fakeType0TypeURL = typePrefix + fakeType0MessageName - fakeType1TypeURL = typePrefix + fakeType1MessageName - fakeType2TypeURL = typePrefix + fakeType2MessageName -) - -var ( - key = "node-id" - metadata = map[string]string{"foo": "bar"} - client *mcp.Client - - supportedTypeUrls = []string{ - fakeType0TypeURL, - fakeType1TypeURL, - fakeType2TypeURL, - } - - fake0_0 = &fakeType0{fakeTypeBase{"f0_0"}} - fake0_1 = &fakeType0{fakeTypeBase{"f0_1"}} - fake0_2 = &fakeType0{fakeTypeBase{"f0_2"}} - fake1 = &fakeType1{fakeTypeBase{"f1"}} - fake2 = &fakeType2{fakeTypeBase{"f2"}} - badUnmarshal = &unmarshalErrorType{fakeTypeBase{"unmarshal_error"}} - - // initialized in init() - fakeResource0_0 *mcp.Envelope - fakeResource0_1 *mcp.Envelope - fakeResource0_2 *mcp.Envelope - fakeResource1 *mcp.Envelope - fakeResource2 *mcp.Envelope - badUnmarshalEnvelope *mcp.Envelope -) - -func mustMarshalAny(pb proto.Message) *types.Any { - a, err := types.MarshalAny(pb) - if err != nil { - panic(err.Error()) - } - return a -} - -func init() { - proto.RegisterType((*fakeType0)(nil), fakeType0MessageName) - proto.RegisterType((*fakeType1)(nil), fakeType1MessageName) - proto.RegisterType((*fakeType2)(nil), fakeType2MessageName) - 
proto.RegisterType((*fakeType2)(nil), fakeType2MessageName) - proto.RegisterType((*unmarshalErrorType)(nil), unmarshalErrorMessageName) - - fakeResource0_0 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f0_0", Version: "type0/v0"}, - Resource: mustMarshalAny(fake0_0), - } - fakeResource0_1 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f0_1", Version: "type0/v1"}, - Resource: mustMarshalAny(fake0_1), - } - fakeResource0_2 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f0_2", Version: "type0/v2"}, - Resource: mustMarshalAny(fake0_2), - } - fakeResource1 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f1", Version: "type1/v0"}, - Resource: mustMarshalAny(fake1), - } - fakeResource2 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f2", Version: "type2/v0"}, - Resource: mustMarshalAny(fake2), - } - badUnmarshalEnvelope = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "unmarshal_error"}, - Resource: mustMarshalAny(badUnmarshal), - } - - client = &mcp.Client{ - Id: key, - Metadata: &types.Struct{Fields: map[string]*types.Value{}}, - } - for k, v := range metadata { - client.Metadata.Fields[k] = &types.Value{Kind: &types.Value_StringValue{StringValue: v}} - } -} - -func makeRequest(typeURL, version, nonce string, errorCode codes.Code) *mcp.MeshConfigRequest { +func makeRequest(collection, version, nonce string, errorCode codes.Code) *mcp.MeshConfigRequest { req := &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: typeURL, + SinkNode: test.Node, + TypeUrl: collection, VersionInfo: version, ResponseNonce: nonce, } @@ -263,14 +156,14 @@ func makeRequest(typeURL, version, nonce string, errorCode codes.Code) *mcp.Mesh return req } -func makeResponse(typeURL, version, nonce string, envelopes ...*mcp.Envelope) *mcp.MeshConfigResponse { +func makeResponse(collection, version, nonce string, resources ...*mcp.Resource) *mcp.MeshConfigResponse { r := &mcp.MeshConfigResponse{ - TypeUrl: typeURL, + TypeUrl: collection, VersionInfo: version, Nonce: nonce, } - for _, 
envelope := range envelopes { - r.Envelopes = append(r.Envelopes, *envelope) + for _, resource := range resources { + r.Resources = append(r.Resources, *resource) } return r } @@ -278,7 +171,14 @@ func makeResponse(typeURL, version, nonce string, envelopes ...*mcp.Envelope) *m func TestSingleTypeCases(t *testing.T) { ts := newTestStream() - c := New(ts, supportedTypeUrls, ts, key, metadata, mcptestmon.NewInMemoryClientStatsContext()) + options := &sink.Options{ + CollectionOptions: sink.CollectionOptionsFromSlice(test.SupportedCollections), + Updater: ts, + ID: test.NodeID, + Metadata: test.NodeMetadata, + Reporter: monitoring.NewInMemoryStatsContext(), + } + c := New(ts, options) ctx, cancelClient := context.WithCancel(context.Background()) var wg sync.WaitGroup @@ -294,24 +194,24 @@ func TestSingleTypeCases(t *testing.T) { }() // Check metadata fields first - if !reflect.DeepEqual(c.Metadata(), metadata) { - t.Fatalf("metadata mismatch: got:\n%v\nwanted:\n%v\n", c.metadata, metadata) + if !reflect.DeepEqual(c.Metadata(), test.NodeMetadata) { + t.Fatalf("metadata mismatch: got:\n%v\nwanted:\n%v\n", c.metadata, test.NodeMetadata) } - if c.ID() != key { - t.Fatalf("id mismatch: got\n%v\nwanted:\n%v\n", c.ID(), key) + if c.ID() != test.NodeID { + t.Fatalf("id mismatch: got\n%v\nwanted:\n%v\n", c.ID(), test.NodeID) } - if !reflect.DeepEqual(c.SupportedTypeURLs(), supportedTypeUrls) { - t.Fatalf("type url mismatch: got:\n%v\nwanted:\n%v\n", c.SupportedTypeURLs(), supportedTypeUrls) + if !reflect.DeepEqual(c.Collections(), test.SupportedCollections) { + t.Fatalf("type url mismatch: got:\n%v\nwanted:\n%v\n", c.Collections(), test.SupportedCollections) } wantInitial := make(map[string]*mcp.MeshConfigRequest) - for _, typeURL := range supportedTypeUrls { - wantInitial[typeURL] = makeRequest(typeURL, "", "", codes.OK) + for _, collection := range test.SupportedCollections { + wantInitial[collection] = makeRequest(collection, "", "", codes.OK) } gotInitial := 
make(map[string]*mcp.MeshConfigRequest) - for i := 0; i < len(supportedTypeUrls); i++ { + for i := 0; i < len(test.SupportedCollections); i++ { select { case got := <-ts.requestC: gotInitial[got.TypeUrl] = got @@ -327,116 +227,106 @@ func TestSingleTypeCases(t *testing.T) { name string sendResponse *mcp.MeshConfigResponse wantRequest *mcp.MeshConfigRequest - wantChange *Change - wantJournal []RecentRequestInfo + wantChange *sink.Change + wantJournal []sink.RecentRequestInfo updateError bool }{ { - name: "ACK request (type0)", - sendResponse: makeResponse(fakeType0TypeURL, "type0/v0", "type0/n0", fakeResource0_0), - wantRequest: makeRequest(fakeType0TypeURL, "type0/v0", "type0/n0", codes.OK), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_0.Metadata, - Resource: fake0_0, + name: "ACK request (type0)", + + sendResponse: makeResponse(test.FakeType0Collection, "type0/v0", "type0/n0", test.Type0A[0].Resource), + wantRequest: makeRequest(test.FakeType0Collection, "type0/v0", "type0/n0", codes.OK), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[0].Metadata, + Body: test.Type0A[0].Proto, }}, }, }, { - name: "ACK request (type1)", - sendResponse: makeResponse(fakeType1TypeURL, "type1/v0", "type1/n0", fakeResource1), - wantRequest: makeRequest(fakeType1TypeURL, "type1/v0", "type1/n0", codes.OK), - wantChange: &Change{ - TypeURL: fakeType1TypeURL, - Objects: []*Object{{ - TypeURL: fakeType1TypeURL, - Metadata: fakeResource1.Metadata, - Resource: fake1, + name: "ACK request (type1)", + + sendResponse: makeResponse(test.FakeType1Collection, "type1/v0", "type1/n0", test.Type1A[0].Resource), + wantRequest: makeRequest(test.FakeType1Collection, "type1/v0", "type1/n0", codes.OK), + wantChange: &sink.Change{ + Collection: test.FakeType1Collection, + Objects: []*sink.Object{{ + TypeURL: 
test.FakeType1TypeURL, + Metadata: test.Type1A[0].Metadata, + Body: test.Type1A[0].Proto, }}, }, }, { - name: "ACK request (type2)", - sendResponse: makeResponse(fakeType2TypeURL, "type2/v0", "type2/n0", fakeResource2), - wantRequest: makeRequest(fakeType2TypeURL, "type2/v0", "type2/n0", codes.OK), - wantChange: &Change{ - TypeURL: fakeType2TypeURL, - Objects: []*Object{{ - TypeURL: fakeType2TypeURL, - Metadata: fakeResource2.Metadata, - Resource: fake2, + name: "ACK request (type2)", + + sendResponse: makeResponse(test.FakeType2Collection, "type2/v0", "type2/n0", test.Type2A[0].Resource), + wantRequest: makeRequest(test.FakeType2Collection, "type2/v0", "type2/n0", codes.OK), + wantChange: &sink.Change{ + Collection: test.FakeType2Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType2TypeURL, + Metadata: test.Type2A[0].Metadata, + Body: test.Type2A[0].Proto, }}, }, }, { name: "NACK request (unsupported type_url)", - sendResponse: makeResponse(fakeType0TypeURL+"Garbage", "type0/v1", "type0/n1", fakeResource0_0), - wantRequest: makeRequest(fakeType0TypeURL+"Garbage", "", "type0/n1", codes.Unimplemented), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_0.Metadata, - Resource: fake0_0, + sendResponse: makeResponse(test.FakeType0Collection+"Garbage", "type0/v1", "type0/n1", test.Type0A[0].Resource), + wantRequest: makeRequest(test.FakeType0Collection+"Garbage", "", "type0/n1", codes.Unimplemented), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[0].Metadata, + Body: test.Type0A[0].Proto, }}, }, }, { name: "NACK request (unmarshal error)", - sendResponse: makeResponse(fakeType0TypeURL, "type0/v1", "type0/n2", badUnmarshalEnvelope), - wantRequest: makeRequest(fakeType0TypeURL, "type0/v0", "type0/n2", codes.Unknown), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - 
Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_0.Metadata, - Resource: fake0_0, - }}, - }, - }, - { - name: "NACK request (response type_url does not match resource type_url)", - sendResponse: makeResponse(fakeType0TypeURL, "type0/v1", "type0/n3", fakeResource1), - wantRequest: makeRequest(fakeType0TypeURL, "type0/v0", "type0/n3", codes.InvalidArgument), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_0.Metadata, - Resource: fake0_0, + sendResponse: makeResponse(test.FakeType0Collection, "type0/v1", "type0/n2", test.BadUnmarshal.Resource), + wantRequest: makeRequest(test.FakeType0Collection, "type0/v0", "type0/n2", codes.Unknown), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[0].Metadata, + Body: test.Type0A[0].Proto, }}, }, }, { name: "NACK request (client updater rejected changes)", updateError: true, - sendResponse: makeResponse(fakeType0TypeURL, "type0/v1", "type0/n3", fakeResource0_0), - wantRequest: makeRequest(fakeType0TypeURL, "type0/v0", "type0/n3", codes.InvalidArgument), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_0.Metadata, - Resource: fake0_0, + sendResponse: makeResponse(test.FakeType0Collection, "type0/v1", "type0/n3", test.Type0A[0].Resource), + wantRequest: makeRequest(test.FakeType0Collection, "type0/v0", "type0/n3", codes.InvalidArgument), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[0].Metadata, + Body: test.Type0A[0].Proto, }}, }, }, { name: "ACK request after previous NACKs", - sendResponse: makeResponse(fakeType0TypeURL, "type0/v1", "type0/n3", fakeResource0_1, fakeResource0_2), - wantRequest: makeRequest(fakeType0TypeURL, "type0/v1", 
"type0/n3", codes.OK), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_1.Metadata, - Resource: fake0_1, + sendResponse: makeResponse(test.FakeType0Collection, "type0/v1", "type0/n3", test.Type0A[1].Resource, test.Type0A[2].Resource), + wantRequest: makeRequest(test.FakeType0Collection, "type0/v1", "type0/n3", codes.OK), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[1].Metadata, + Body: test.Type0A[1].Proto, }, { - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_2.Metadata, - Resource: fake0_2, + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[2].Metadata, + Body: test.Type0A[2].Proto, }}, }, wantJournal: nil, @@ -453,9 +343,9 @@ func TestSingleTypeCases(t *testing.T) { ts.sendResponseToClient(step.sendResponse) <-responseDone - if !reflect.DeepEqual(ts.change[step.wantChange.TypeURL], step.wantChange) { - t.Fatalf("%v: bad client change: \n got %#v \nwant %#v", - step.name, ts.change[step.wantChange.TypeURL], step.wantChange) + if diff := cmp.Diff(ts.change[step.wantChange.Collection], step.wantChange); diff != "" { + t.Fatalf("%v: bad client change: \n got %#v \nwant %#v\n diff %v", + step.name, ts.change[step.wantChange.Collection], step.wantChange, diff) } if err := ts.wantRequest(step.wantRequest); err != nil { @@ -466,8 +356,9 @@ func TestSingleTypeCases(t *testing.T) { if len(entries) == 0 { t.Fatal("No journal entries not found.") } + lastEntry := entries[len(entries)-1] - if err := checkRequest(lastEntry.Request, step.wantRequest); err != nil { + if err := checkRequest(lastEntry.Request.ToMeshConfigRequest(), step.wantRequest); err != nil { t.Fatalf("%v: failed to publish the right journal entries: %v", step.name, err) } } @@ -476,7 +367,14 @@ func TestSingleTypeCases(t *testing.T) { func TestReconnect(t *testing.T) { ts := newTestStream() - c := New(ts, 
[]string{fakeType0TypeURL}, ts, key, metadata, mcptestmon.NewInMemoryClientStatsContext()) + options := &sink.Options{ + CollectionOptions: []sink.CollectionOptions{{Name: test.FakeType0Collection}}, + Updater: ts, + ID: test.NodeID, + Metadata: test.NodeMetadata, + Reporter: monitoring.NewInMemoryStatsContext(), + } + c := New(ts, options) ctx, cancelClient := context.WithCancel(context.Background()) var wg sync.WaitGroup @@ -495,87 +393,88 @@ func TestReconnect(t *testing.T) { name string sendResponse *mcp.MeshConfigResponse wantRequest *mcp.MeshConfigRequest - wantChange *Change + wantChange *sink.Change sendError bool recvError bool }{ { name: "Initial request (type0)", sendResponse: nil, // client initiates the exchange - wantRequest: makeRequest(fakeType0TypeURL, "", "", codes.OK), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_0.Metadata, - Resource: fake0_0, + + wantRequest: makeRequest(test.FakeType0Collection, "", "", codes.OK), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[0].Metadata, + Body: test.Type0A[0].Proto, }}, }, }, { name: "ACK request (type0)", - sendResponse: makeResponse(fakeType0TypeURL, "type0/v0", "type0/n0", fakeResource0_0), - wantRequest: makeRequest(fakeType0TypeURL, "type0/v0", "type0/n0", codes.OK), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_0.Metadata, - Resource: fake0_0, + sendResponse: makeResponse(test.FakeType0Collection, "type0/v0", "type0/n0", test.Type0A[0].Resource), + wantRequest: makeRequest(test.FakeType0Collection, "type0/v0", "type0/n0", codes.OK), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[0].Metadata, + Body: test.Type0A[0].Proto, }}, 
}, }, { name: "send error", - sendResponse: makeResponse(fakeType0TypeURL, "type0/v1", "type0/n1", fakeResource0_1), - wantRequest: makeRequest(fakeType0TypeURL, "", "", codes.OK), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_0.Metadata, - Resource: fake0_0, + sendResponse: makeResponse(test.FakeType0Collection, "type0/v1", "type0/n1", test.Type0A[1].Resource), + wantRequest: makeRequest(test.FakeType0Collection, "", "", codes.OK), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[0].Metadata, + Body: test.Type0A[0].Proto, }}, }, sendError: true, }, { name: "ACK request after reconnect on send error", - sendResponse: makeResponse(fakeType0TypeURL, "type0/v1", "type0/n1", fakeResource0_1), - wantRequest: makeRequest(fakeType0TypeURL, "type0/v1", "type0/n1", codes.OK), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_1.Metadata, - Resource: fake0_1, + sendResponse: makeResponse(test.FakeType0Collection, "type0/v1", "type0/n1", test.Type0A[1].Resource), + wantRequest: makeRequest(test.FakeType0Collection, "type0/v1", "type0/n1", codes.OK), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[1].Metadata, + Body: test.Type0A[1].Proto, }}, }, }, { name: "recv error", - sendResponse: makeResponse(fakeType0TypeURL, "type0/v2", "type0/n2", fakeResource0_2), - wantRequest: makeRequest(fakeType0TypeURL, "", "", codes.OK), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_1.Metadata, - Resource: fake0_1, + sendResponse: makeResponse(test.FakeType0Collection, "type0/v2", "type0/n2", test.Type0A[2].Resource), + wantRequest: 
makeRequest(test.FakeType0Collection, "", "", codes.OK), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[1].Metadata, + Body: test.Type0A[1].Proto, }}, }, recvError: true, }, { name: "ACK request after reconnect on recv error", - sendResponse: makeResponse(fakeType0TypeURL, "type0/v2", "type0/n2", fakeResource0_2), - wantRequest: makeRequest(fakeType0TypeURL, "type0/v2", "type0/n2", codes.OK), - wantChange: &Change{ - TypeURL: fakeType0TypeURL, - Objects: []*Object{{ - TypeURL: fakeType0TypeURL, - Metadata: fakeResource0_2.Metadata, - Resource: fake0_2, + sendResponse: makeResponse(test.FakeType0Collection, "type0/v2", "type0/n2", test.Type0A[2].Resource), + wantRequest: makeRequest(test.FakeType0Collection, "type0/v2", "type0/n2", codes.OK), + wantChange: &sink.Change{ + Collection: test.FakeType0Collection, + Objects: []*sink.Object{{ + TypeURL: test.FakeType0TypeURL, + Metadata: test.Type0A[2].Metadata, + Body: test.Type0A[2].Proto, }}, }, }, @@ -612,9 +511,9 @@ func TestReconnect(t *testing.T) { } if !step.sendError { - if !reflect.DeepEqual(ts.change[step.wantChange.TypeURL], step.wantChange) { + if !reflect.DeepEqual(ts.change[step.wantChange.Collection], step.wantChange) { t.Fatalf("%v: bad client change: \n got %#v \nwant %#v", - step.name, ts.change[step.wantChange.TypeURL].Objects[0], step.wantChange.Objects[0]) + step.name, ts.change[step.wantChange.Collection].Objects[0], step.wantChange.Objects[0]) } } } @@ -624,39 +523,3 @@ func TestReconnect(t *testing.T) { } } } - -func TestInMemoryUpdater(t *testing.T) { - u := NewInMemoryUpdater() - - o := u.Get("foo") - if len(o) != 0 { - t.Fatalf("Unexpected items in updater: %v", o) - } - - c := Change{ - TypeURL: "foo", - Objects: []*Object{ - { - TypeURL: "foo", - Metadata: &mcp.Metadata{ - Name: "bar", - }, - Resource: &types.Empty{}, - }, - }, - } - - err := u.Apply(&c) - if err != nil { - 
t.Fatalf("Unexpected error: %v", err) - } - - o = u.Get("foo") - if len(o) != 1 { - t.Fatalf("expected item not found: %v", o) - } - - if o[0].Metadata.Name != "bar" { - t.Fatalf("expected name not found on object: %v", o) - } -} diff --git a/pkg/mcp/client/monitoring.go b/pkg/mcp/client/monitoring.go deleted file mode 100644 index 54d86482f6be..000000000000 --- a/pkg/mcp/client/monitoring.go +++ /dev/null @@ -1,173 +0,0 @@ -// Copyright 2018 Istio Authors -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package client - -import ( - "context" - "strconv" - "strings" - - "go.opencensus.io/stats" - "go.opencensus.io/stats/view" - "go.opencensus.io/tag" - "google.golang.org/grpc/codes" -) - -const ( - typeURL = "typeURL" - errorCode = "code" - errorStr = "error" -) - -// StatsContext enables metric collection backed by OpenCensus. -type StatsContext struct { - requestAcksTotal *stats.Int64Measure - requestNacksTotal *stats.Int64Measure - sendFailuresTotal *stats.Int64Measure - recvFailuresTotal *stats.Int64Measure - streamCreateSuccessTotal *stats.Int64Measure -} - -func (s *StatsContext) recordError(err error, code codes.Code, stat *stats.Int64Measure) { - ctx, ctxErr := tag.New(context.Background(), - tag.Insert(ErrorTag, err.Error()), - tag.Insert(ErrorCodeTag, strconv.FormatUint(uint64(code), 10))) - if ctxErr != nil { - scope.Errorf("MCP: error creating monitoring context. 
%v", ctxErr) - return - } - stats.Record(ctx, stat.M(1)) -} - -// RecordSendError records an error during a network send with its error -// string and code. -func (s *StatsContext) RecordSendError(err error, code codes.Code) { - s.recordError(err, code, s.sendFailuresTotal) -} - -// RecordRecvError records an error during a network recv with its error -// string and code. -func (s *StatsContext) RecordRecvError(err error, code codes.Code) { - s.recordError(err, code, s.recvFailuresTotal) -} - -// RecordRequestAck records an ACK message for a type URL on a connection. -func (s *StatsContext) RecordRequestAck(typeURL string) { - ctx, ctxErr := tag.New(context.Background(), tag.Insert(TypeURLTag, typeURL)) - if ctxErr != nil { - scope.Errorf("MCP: error creating monitoring context. %v", ctxErr) - return - } - stats.Record(ctx, s.requestAcksTotal.M(1)) -} - -// RecordRequestNack records a NACK message for a type URL on a connection. -func (s *StatsContext) RecordRequestNack(typeURL string, err error) { - ctx, ctxErr := tag.New(context.Background(), - tag.Insert(TypeURLTag, typeURL), - tag.Insert(ErrorTag, err.Error())) - if ctxErr != nil { - scope.Errorf("MCP: error creating monitoring context. %v", ctxErr) - return - } - stats.Record(ctx, s.requestNacksTotal.M(1)) -} - -// RecordStreamCreateSuccess records a successful stream connection. -func (s *StatsContext) RecordStreamCreateSuccess() { - stats.Record(context.Background(), s.streamCreateSuccessTotal.M(1)) -} - -// NewStatsContext creates a new context for recording metrics using -// OpenCensus. The specified prefix is prepended to all metric names and must -// be a non-empty string. 
-func NewStatsContext(prefix string) *StatsContext { - if len(prefix) == 0 { - panic("must specify prefix for MCP client monitoring.") - } else if !strings.HasSuffix(prefix, "/") { - prefix += "/" - } - ctx := &StatsContext{ - requestAcksTotal: stats.Int64( - prefix+"mcp/client/request_acks", - "The number of request acks sent by the client.", - stats.UnitDimensionless), - - requestNacksTotal: stats.Int64( - prefix+"mcp/client/request_nacks", - "The number of request nacks sent by the client.", - stats.UnitDimensionless), - - sendFailuresTotal: stats.Int64( - prefix+"mcp/client/send_failures", - "The number of send failures in the client.", - stats.UnitDimensionless), - - recvFailuresTotal: stats.Int64( - prefix+"mcp/client/recv_failures", - "The number of recv failures in the client.", - stats.UnitDimensionless), - - streamCreateSuccessTotal: stats.Int64( - prefix+"mcp/client/reconnections", - "The number of times the client has reconnected.", - stats.UnitDimensionless), - } - - err := view.Register( - newView(ctx.requestAcksTotal, []tag.Key{TypeURLTag}, view.Count()), - newView(ctx.requestNacksTotal, []tag.Key{ErrorTag, TypeURLTag}, view.Count()), - newView(ctx.sendFailuresTotal, []tag.Key{ErrorCodeTag, ErrorTag}, view.Count()), - newView(ctx.recvFailuresTotal, []tag.Key{ErrorCodeTag, ErrorTag}, view.Count()), - newView(ctx.streamCreateSuccessTotal, []tag.Key{}, view.Count()), - ) - - if err != nil { - panic(err) - } - return ctx -} - -var ( - // TypeURLTag holds the type URL for the context. - TypeURLTag tag.Key - // ErrorCodeTag holds the gRPC error code for the context. - ErrorCodeTag tag.Key - // ErrorTag holds the error string for the context. 
- ErrorTag tag.Key -) - -func newView(measure stats.Measure, keys []tag.Key, aggregation *view.Aggregation) *view.View { - return &view.View{ - Name: measure.Name(), - Description: measure.Description(), - Measure: measure, - TagKeys: keys, - Aggregation: aggregation, - } -} - -func init() { - var err error - if TypeURLTag, err = tag.NewKey(typeURL); err != nil { - panic(err) - } - if ErrorCodeTag, err = tag.NewKey(errorCode); err != nil { - panic(err) - } - if ErrorTag, err = tag.NewKey(errorStr); err != nil { - panic(err) - } -} diff --git a/pkg/mcp/configz/assets.gen.go b/pkg/mcp/configz/assets.gen.go index e44c6afba66b..a52518b5a20a 100644 --- a/pkg/mcp/configz/assets.gen.go +++ b/pkg/mcp/configz/assets.gen.go @@ -13,7 +13,6 @@ import ( "strings" "time" ) - type asset struct { bytes []byte info os.FileInfo @@ -95,11 +94,11 @@ var _assetsTemplatesConfigHtml = []byte(`{{ define "content" }} - + - {{ range $value := .SupportedTypeURLs }} + {{ range $value := .Collections }} {{end}} @@ -115,8 +114,7 @@ var _assetsTemplatesConfigHtml = []byte(`{{ define "content" }} - - + @@ -126,8 +124,7 @@ var _assetsTemplatesConfigHtml = []byte(`{{ define "content" }} {{ range $entry := .LatestRequests }} - - + @@ -314,7 +311,6 @@ type bintree struct { Func func() (*asset, error) Children map[string]*bintree } - var _bintree = &bintree{nil, map[string]*bintree{ "assets": &bintree{nil, map[string]*bintree{ "templates": &bintree{nil, map[string]*bintree{ @@ -369,3 +365,4 @@ func _filePath(dir, name string) string { cannonicalName := strings.Replace(name, "\\", "/", -1) return filepath.Join(append([]string{dir}, strings.Split(cannonicalName, "/")...)...) } + diff --git a/pkg/mcp/configz/assets/templates/config.html b/pkg/mcp/configz/assets/templates/config.html index 364a9d013ddb..b59ab0560100 100644 --- a/pkg/mcp/configz/assets/templates/config.html +++ b/pkg/mcp/configz/assets/templates/config.html @@ -48,11 +48,11 @@
-          <th>Suported Type URLs</th>
+          <th>Supported Collections</th>
           <td>{{$value}}</td>
           <th>Time</th>
-          <th>Type</th>
-          <th>Version</th>
+          <th>Collection</th>
           <th>Acked</th>
           <th>Nonce</th>
           <td>{{$entry.Time.Format "2006-01-02T15:04:05Z07:00"}}</td>
-          <td>{{$entry.Request.TypeUrl}}</td>
-          <td>{{$entry.Request.VersionInfo}}</td>
+          <td>{{$entry.Request.Collection}}</td>
           <td>{{$entry.Acked}}</td>
           <td>{{$entry.Request.ResponseNonce}}</td>
- + - {{ range $value := .SupportedTypeURLs }} + {{ range $value := .Collections }} {{end}} @@ -68,8 +68,7 @@ - - + @@ -79,8 +78,7 @@ {{ range $entry := .LatestRequests }} - - + diff --git a/pkg/mcp/configz/configz.go b/pkg/mcp/configz/configz.go index e12759bf4f7b..9eb9dc08496e 100644 --- a/pkg/mcp/configz/configz.go +++ b/pkg/mcp/configz/configz.go @@ -20,29 +20,37 @@ import ( "istio.io/istio/pkg/ctrlz" "istio.io/istio/pkg/ctrlz/fw" - "istio.io/istio/pkg/mcp/client" + "istio.io/istio/pkg/mcp/sink" ) -// configzTopic topic is a Topic fw.implementation that exposes the state info about an MCP client. +// configzTopic topic is a Topic fw.implementation that exposes the state info about an MCP sink. type configzTopic struct { tmpl *template.Template - cl *client.Client + topic SinkTopic } var _ fw.Topic = &configzTopic{} -// Register the Configz topic for the given client. +// SinkTopic defines the expected interface for producing configz data from an MCP sink. +type SinkTopic interface { + SnapshotRequestInfo() []sink.RecentRequestInfo + Metadata() map[string]string + ID() string + Collections() []string +} + +// Register the Configz topic for the given sink. // TODO: Multi-client registration is currently not supported. We should update the topic, so that we can // show output from multiple clients. -func Register(c *client.Client) { - ctrlz.RegisterTopic(CreateTopic(c)) +func Register(topic SinkTopic) { + ctrlz.RegisterTopic(CreateTopic(topic)) } // CreateTopic creates and returns a configz topic from the given MCP client. It does not do any registration. 
-func CreateTopic(c *client.Client) fw.Topic { +func CreateTopic(topic SinkTopic) fw.Topic { return &configzTopic{ - cl: c, + topic: topic, } } @@ -57,11 +65,11 @@ func (c *configzTopic) Prefix() string { } type data struct { - ID string - Metadata map[string]string - SupportedTypeURLs []string + ID string + Metadata map[string]string + Collections []string - LatestRequests []client.RecentRequestInfo + LatestRequests []sink.RecentRequestInfo } // Activate is implementation of Topic.Activate. @@ -82,9 +90,9 @@ func (c *configzTopic) Activate(context fw.TopicContext) { func (c *configzTopic) collectData() *data { return &data{ - ID: c.cl.ID(), - Metadata: c.cl.Metadata(), - SupportedTypeURLs: c.cl.SupportedTypeURLs(), - LatestRequests: c.cl.SnapshotRequestInfo(), + ID: c.topic.ID(), + Metadata: c.topic.Metadata(), + Collections: c.topic.Collections(), + LatestRequests: c.topic.SnapshotRequestInfo(), } } diff --git a/pkg/mcp/configz/configz_test.go b/pkg/mcp/configz/configz_test.go index dbf68938bfbd..caf0ad84e2dd 100644 --- a/pkg/mcp/configz/configz_test.go +++ b/pkg/mcp/configz/configz_test.go @@ -24,6 +24,9 @@ import ( "testing" "time" + "istio.io/istio/pkg/mcp/source" + "istio.io/istio/pkg/mcp/testing/monitoring" + "github.com/gogo/protobuf/types" "google.golang.org/grpc" @@ -31,24 +34,26 @@ import ( "istio.io/istio/pkg/ctrlz" "istio.io/istio/pkg/ctrlz/fw" "istio.io/istio/pkg/mcp/client" + "istio.io/istio/pkg/mcp/sink" "istio.io/istio/pkg/mcp/snapshot" mcptest "istio.io/istio/pkg/mcp/testing" - mcptestmon "istio.io/istio/pkg/mcp/testing/monitoring" ) type updater struct { } -func (u *updater) Apply(c *client.Change) error { +func (u *updater) Apply(c *sink.Change) error { return nil } +const testEmptyCollection = "/test/collection/empty" + func TestConfigZ(t *testing.T) { - s, err := mcptest.NewServer(0, []string{"type.googleapis.com/google.protobuf.Empty"}) + s, err := mcptest.NewServer(0, []source.CollectionOptions{{Name: testEmptyCollection}}) if err != nil { 
t.Fatal(err) } - defer s.Close() + defer func() { _ = s.Close() }() cc, err := grpc.Dial(fmt.Sprintf("localhost:%d", s.Port), grpc.WithInsecure()) if err != nil { @@ -57,16 +62,23 @@ func TestConfigZ(t *testing.T) { u := &updater{} clnt := mcp.NewAggregatedMeshConfigServiceClient(cc) - cl := client.New(clnt, []string{"type.googleapis.com/google.protobuf.Empty"}, u, - snapshot.DefaultGroup, map[string]string{"foo": "bar"}, - mcptestmon.NewInMemoryClientStatsContext()) + + options := &sink.Options{ + CollectionOptions: []sink.CollectionOptions{{Name: testEmptyCollection}}, + Updater: u, + ID: snapshot.DefaultGroup, + Metadata: map[string]string{"foo": "bar"}, + Reporter: monitoring.NewInMemoryStatsContext(), + } + cl := client.New(clnt, options) ctx, cancel := context.WithCancel(context.Background()) go cl.Run(ctx) defer cancel() o := ctrlz.DefaultOptions() - ctrlz, _ := ctrlz.Run(o, []fw.Topic{CreateTopic(cl)}) + cz, _ := ctrlz.Run(o, []fw.Topic{CreateTopic(cl)}) + defer cz.Close() baseURL := fmt.Sprintf("http://%s:%d", o.Address, o.Port) @@ -83,8 +95,8 @@ func TestConfigZ(t *testing.T) { t.Run("configz with initial requests", func(tt *testing.T) { testConfigZWithNoRequest(tt, baseURL) }) b := snapshot.NewInMemoryBuilder() - b.SetVersion("type.googleapis.com/google.protobuf.Empty", "23") - err = b.SetEntry("type.googleapis.com/google.protobuf.Empty", "foo", "v0", time.Time{}, &types.Empty{}) + b.SetVersion(testEmptyCollection, "23") + err = b.SetEntry(testEmptyCollection, "foo", "v0", time.Time{}, nil, nil, &types.Empty{}) if err != nil { t.Fatalf("Setting an entry should not have failed: %v", err) } @@ -104,8 +116,6 @@ func TestConfigZ(t *testing.T) { t.Run("configz with 2 request", func(tt *testing.T) { testConfigZWithOneRequest(tt, baseURL) }) t.Run("configj with 2 request", func(tt *testing.T) { testConfigJWithOneRequest(tt, baseURL) }) - - ctrlz.Close() } func testConfigZWithNoRequest(t *testing.T, baseURL string) { @@ -117,27 +127,33 @@ func 
testConfigZWithNoRequest(t *testing.T, baseURL string) { if !strings.Contains(data, "foo") || !strings.Contains(data, "bar") { t.Fatalf("Metadata should have been displayed: %q", data) } - if !strings.Contains(data, "type.googleapis.com/google.protobuf.Empty") { - t.Fatalf("Supported urls should have been displayed: %q", data) + if !strings.Contains(data, testEmptyCollection) { + t.Fatalf("Collections should have been displayed: %q", data) } - if strings.Count(data, "type.googleapis.com/google.protobuf.Empty") != 2 { - t.Fatalf("Only supported urls and the initial ACK request should have been displayed: %q", data) + want := 2 + if got := strings.Count(data, testEmptyCollection); got != want { + t.Fatalf("Only the collection and initial ACK request should have been displayed: got %v want %v: %q", + got, want, data) } } func testConfigZWithOneRequest(t *testing.T, baseURL string) { + t.Helper() + for i := 0; i < 10; i++ { data := request(t, baseURL+"/configz") - if strings.Count(data, "type.googleapis.com/google.protobuf.Empty") != 3 { + if strings.Count(data, testEmptyCollection) != 3 { time.Sleep(time.Millisecond * 100) continue } return } - t.Fatal("Both supported urls, the initial request, and a recent ACK request should have been displayed") + t.Fatal("Both collections, the initial request, and a recent ACK request should have been displayed") } func testConfigJWithOneRequest(t *testing.T, baseURL string) { + t.Helper() + data := request(t, baseURL+"/configj/") m := make(map[string]interface{}) @@ -154,9 +170,9 @@ func testConfigJWithOneRequest(t *testing.T, baseURL string) { t.Fatalf("Should have contained metadata: %v", data) } - if len(m["SupportedTypeURLs"].([]interface{})) != 1 || - m["SupportedTypeURLs"].([]interface{})[0].(string) != "type.googleapis.com/google.protobuf.Empty" { - t.Fatalf("Should have contained supported type urls: %v", data) + if len(m["Collections"].([]interface{})) != 1 || + m["Collections"].([]interface{})[0].(string) != 
testEmptyCollection { + t.Fatalf("Should have contained supported collections: %v", data) } if len(m["LatestRequests"].([]interface{})) != 2 { @@ -173,7 +189,7 @@ func request(t *testing.T, url string) string { time.Sleep(time.Millisecond * 100) continue } - defer resp.Body.Close() + defer func() { _ = resp.Body.Close() }() body, err := ioutil.ReadAll(resp.Body) if err != nil { e = err diff --git a/pkg/mcp/creds/pollingWatcher.go b/pkg/mcp/creds/pollingWatcher.go index 1abc0153f311..316a4b409b0b 100644 --- a/pkg/mcp/creds/pollingWatcher.go +++ b/pkg/mcp/creds/pollingWatcher.go @@ -61,12 +61,16 @@ func (p *pollingWatcher) certPool() *x509.CertPool { // // Internally PollFolder will call PollFiles. func PollFolder(stop <-chan struct{}, folder string) (CertificateWatcher, error) { + return pollFolder(stop, folder, time.Minute) +} + +func pollFolder(stop <-chan struct{}, folder string, interval time.Duration) (CertificateWatcher, error) { cred := &Options{ CertificateFile: path.Join(folder, defaultCertificateFile), KeyFile: path.Join(folder, defaultKeyFile), CACertificateFile: path.Join(folder, defaultCACertificateFile), } - return PollFiles(stop, cred) + return pollFiles(stop, cred, interval) } // PollFiles loads certificate & key files from the file system. 
The method will start a background diff --git a/pkg/mcp/creds/pollingWatcher_test.go b/pkg/mcp/creds/pollingWatcher_test.go index b83afc87bdcf..b392816a35e4 100644 --- a/pkg/mcp/creds/pollingWatcher_test.go +++ b/pkg/mcp/creds/pollingWatcher_test.go @@ -368,7 +368,8 @@ func newFixture(t *testing.T) *watcherFixture { func (f *watcherFixture) newWatcher() (err error) { ch := make(chan struct{}) - w, err := PollFolder(ch, f.folder) + // Reduce poll time for quick turnaround of events + w, err := pollFolder(ch, f.folder, time.Millisecond) if err != nil { return err } @@ -376,9 +377,6 @@ func (f *watcherFixture) newWatcher() (err error) { f.ch = ch f.w = w - // Reduce poll time for quick turnaround of events - f.w.(*pollingWatcher).pollInterval = time.Millisecond - return nil } diff --git a/pkg/mcp/env/env.go b/pkg/mcp/env/env.go new file mode 100644 index 000000000000..51c1decdddac --- /dev/null +++ b/pkg/mcp/env/env.go @@ -0,0 +1,48 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package env + +import ( + "os" + "strconv" + "time" +) + +// NOTE: use of these environment variables to configure MCP is discouraged +// in favor of command flags and a settings file (via ConfigMap). + +// Integer returns the integer value of an environment variable. The default +// value is returned if the environment variable is not set or could not be +// parsed as an integer. 
+func Integer(name string, def int) int { + if v := os.Getenv(name); v != "" { + if a, err := strconv.Atoi(v); err == nil { + return a + } + } + return def +} + +// Duration returns the duration value of an environment variable. The default +// value is returned if the environment variable is not set or could not be +// parsed as a valid duration. +func Duration(name string, def time.Duration) time.Duration { + if v := os.Getenv(name); v != "" { + if d, err := time.ParseDuration(v); err == nil { + return d + } + } + return def +} diff --git a/pkg/mcp/env/env_test.go b/pkg/mcp/env/env_test.go new file mode 100644 index 000000000000..84945856c3c1 --- /dev/null +++ b/pkg/mcp/env/env_test.go @@ -0,0 +1,77 @@ +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package env + +import ( + "fmt" + "os" + "testing" + "time" +) + +func TestEnvDur(t *testing.T) { + cases := []struct { + name string + envVal string + def time.Duration + want time.Duration + }{ + {"TestEnvDur", "", time.Second, time.Second}, + {"TestEnvDur", "2s", time.Second, 2 * time.Second}, + {"TestEnvDur", "2blurbs", time.Second, time.Second}, + } + + for i, c := range cases { + t.Run(fmt.Sprintf("[%v] name=%v def=%v", i, c.name, c.def), func(tt *testing.T) { + + if c.envVal != "" { + prev := os.Getenv(c.name) + if err := os.Setenv(c.name, c.envVal); err != nil { + tt.Fatal(err) + } + defer os.Setenv(c.name, prev) + } + if got := Duration(c.name, c.def); got != c.want { + tt.Fatalf("got %v want %v", got, c.want) + } + }) + } +} + +func TestEnvInt(t *testing.T) { + cases := []struct { + name string + envVal string + def int + want int + }{ + {"TestEnvInt", "", 3, 3}, + {"TestEnvInt", "6", 5, 6}, + {"TestEnvInt", "six", 5, 5}, + } + + for i, c := range cases { + t.Run(fmt.Sprintf("[%v] name=%v def=%v", i, c.name, c.def), func(tt *testing.T) { + + if c.envVal != "" { + prev := os.Getenv(c.name) + if err := os.Setenv(c.name, c.envVal); err != nil { + tt.Fatal(err) + } + defer os.Setenv(c.name, prev) + } + if got := Integer(c.name, c.def); got != c.want { + tt.Fatalf("got %v want %v", got, c.want) + } + }) + } +} diff --git a/pkg/mcp/internal/test/auth_checker.go b/pkg/mcp/internal/test/auth_checker.go new file mode 100644 index 000000000000..9a001a5dac2e --- /dev/null +++ b/pkg/mcp/internal/test/auth_checker.go @@ -0,0 +1,41 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package test + +import ( + "google.golang.org/grpc/credentials" +) + +type FakeAuthChecker struct { + AllowError error +} + +func NewFakeAuthChecker() *FakeAuthChecker { + return &FakeAuthChecker{} +} + +func (f *FakeAuthChecker) Check(authInfo credentials.AuthInfo) error { + return f.AllowError +} + +func (f *FakeAuthChecker) AuthType() string { + if f.AllowError != nil { + return "disallowed" + } + return "allowed" +} diff --git a/pkg/mcp/internal/test/types.go b/pkg/mcp/internal/test/types.go new file mode 100644 index 000000000000..a14561687ffa --- /dev/null +++ b/pkg/mcp/internal/test/types.go @@ -0,0 +1,198 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License.
+package test + +import ( + "errors" + "fmt" + "strings" + + "github.com/gogo/protobuf/proto" + "github.com/gogo/protobuf/types" + "github.com/gogo/status" + "google.golang.org/grpc/codes" + + mcp "istio.io/api/mcp/v1alpha1" +) + +type FakeTypeBase struct{ Info string } + +func (f *FakeTypeBase) Reset() {} +func (f *FakeTypeBase) String() string { return f.Info } +func (f *FakeTypeBase) ProtoMessage() {} +func (f *FakeTypeBase) Marshal() ([]byte, error) { return []byte(f.Info), nil } +func (f *FakeTypeBase) Unmarshal(in []byte) error { + f.Info = string(in) + return nil +} + +type FakeType0 struct{ FakeTypeBase } +type FakeType1 struct{ FakeTypeBase } +type FakeType2 struct{ FakeTypeBase } + +type UnmarshalErrorType struct{ FakeTypeBase } + +func (f *UnmarshalErrorType) Unmarshal(in []byte) error { return errors.New("unmarshal_error") } + +const ( + TypePrefix = "type.googleapis.com/" + FakeType0MessageName = "istio.io.galley.pkg.mcp.server.FakeType0" + FakeType1MessageName = "istio.io.galley.pkg.mcp.server.FakeType1" + FakeType2MessageName = "istio.io.galley.pkg.mcp.server.FakeType2" + UnmarshalErrorMessageName = "istio.io.galley.pkg.mcp.server.UnmarshalErrorType" + + FakeType0TypeURL = TypePrefix + FakeType0MessageName + FakeType1TypeURL = TypePrefix + FakeType1MessageName + FakeType2TypeURL = TypePrefix + FakeType2MessageName + UnmarshalErrorTypeURL = TypePrefix + UnmarshalErrorMessageName +) + +var ( + FakeType0Collection = strings.Replace(FakeType0MessageName, ".", "/", -1) + FakeType1Collection = strings.Replace(FakeType1MessageName, ".", "/", -1) + FakeType2Collection = strings.Replace(FakeType2MessageName, ".", "/", -1) + UnmarshalErrorCollection = strings.Replace(UnmarshalErrorMessageName, ".", "/", -1) +) + +type Fake struct { + Resource *mcp.Resource + Metadata *mcp.Metadata + Proto proto.Message + Collection string + TypeURL string +} + +func MakeRequest(collection, nonce string, errorCode codes.Code) *mcp.RequestResources { + req := 
&mcp.RequestResources{ + SinkNode: Node, + Collection: collection, + ResponseNonce: nonce, + } + if errorCode != codes.OK { + req.ErrorDetail = status.New(errorCode, "").Proto() + } + return req +} + +func MakeResources(collection, version, nonce string, removed []string, fakes ...*Fake) *mcp.Resources { + r := &mcp.Resources{ + Collection: collection, + Nonce: nonce, + RemovedResources: removed, + SystemVersionInfo: version, + } + for _, fake := range fakes { + r.Resources = append(r.Resources, *fake.Resource) + } + return r +} + +func MakeFakeResource(collection, typeURL, version, name, data string) *Fake { + var pb proto.Message + switch typeURL { + case FakeType0TypeURL: + pb = &FakeType0{FakeTypeBase{data}} + case FakeType1TypeURL: + pb = &FakeType1{FakeTypeBase{data}} + case FakeType2TypeURL: + pb = &FakeType2{FakeTypeBase{data}} + case UnmarshalErrorTypeURL: + pb = &UnmarshalErrorType{FakeTypeBase{data}} + default: + panic(fmt.Sprintf("unknown typeURL: %v", typeURL)) + } + + body, err := types.MarshalAny(pb) + if err != nil { + panic(fmt.Sprintf("could not marshal fake body: %v", err)) + } + + metadata := &mcp.Metadata{Name: name, Version: version} + + envelope := &mcp.Resource{ + Metadata: metadata, + Body: body, + } + + return &Fake{ + Resource: envelope, + Metadata: metadata, + Proto: pb, + TypeURL: typeURL, + Collection: collection, + } +} + +var ( + Type0A = []*Fake{} + Type0B = []*Fake{} + Type0C = []*Fake{} + Type1A = []*Fake{} + Type2A = []*Fake{} + + BadUnmarshal = MakeFakeResource(UnmarshalErrorCollection, UnmarshalErrorTypeURL, "v0", "bad", "data") + + SupportedCollections = []string{ + FakeType0Collection, + FakeType1Collection, + FakeType2Collection, + } + + NodeID = "test-node" + Node = &mcp.SinkNode{ + Id: NodeID, + Annotations: map[string]string{ + "foo": "bar", + }, + } + NodeMetadata = map[string]string{"foo": "bar"} +) + +func init() { + proto.RegisterType((*FakeType0)(nil), FakeType0MessageName) + proto.RegisterType((*FakeType1)(nil), 
FakeType1MessageName) + proto.RegisterType((*FakeType2)(nil), FakeType2MessageName) + proto.RegisterType((*UnmarshalErrorType)(nil), UnmarshalErrorMessageName) + + Type0A = []*Fake{ + MakeFakeResource(FakeType0Collection, FakeType0TypeURL, "v0", "a", "data-a0"), + MakeFakeResource(FakeType0Collection, FakeType0TypeURL, "v1", "a", "data-a1"), + MakeFakeResource(FakeType0Collection, FakeType0TypeURL, "v2", "a", "data-a2"), + } + Type0B = []*Fake{ + MakeFakeResource(FakeType0Collection, FakeType0TypeURL, "v0", "b", "data-b0"), + MakeFakeResource(FakeType0Collection, FakeType0TypeURL, "v1", "b", "data-b1"), + MakeFakeResource(FakeType0Collection, FakeType0TypeURL, "v2", "b", "data-b2"), + } + Type0C = []*Fake{ + MakeFakeResource(FakeType0Collection, FakeType0TypeURL, "v0", "c", "data-c0"), + MakeFakeResource(FakeType0Collection, FakeType0TypeURL, "v1", "c", "data-c1"), + MakeFakeResource(FakeType0Collection, FakeType0TypeURL, "v2", "c", "data-c2"), + } + + Type1A = []*Fake{ + MakeFakeResource(FakeType1Collection, FakeType1TypeURL, "v0", "a", "data-a0"), + MakeFakeResource(FakeType1Collection, FakeType1TypeURL, "v1", "a", "data-a1"), + MakeFakeResource(FakeType1Collection, FakeType1TypeURL, "v2", "a", "data-a2"), + } + + Type2A = []*Fake{ + MakeFakeResource(FakeType2Collection, FakeType2TypeURL, "v0", "a", "data-a0"), + MakeFakeResource(FakeType2Collection, FakeType2TypeURL, "v1", "a", "data-a1"), + MakeFakeResource(FakeType2Collection, FakeType2TypeURL, "v2", "a", "data-a2"), + } + + BadUnmarshal = MakeFakeResource(UnmarshalErrorCollection, UnmarshalErrorTypeURL, "v0", "bad", "data") +} diff --git a/pkg/mcp/server/monitoring.go b/pkg/mcp/monitoring/monitoring.go similarity index 59% rename from pkg/mcp/server/monitoring.go rename to pkg/mcp/monitoring/monitoring.go index f85b6d3d17a4..aa7fd1139da0 100644 --- a/pkg/mcp/server/monitoring.go +++ b/pkg/mcp/monitoring/monitoring.go @@ -12,44 +12,72 @@ // See the
License for the specific language governing permissions and // limitations under the License. -package server +package monitoring import ( "context" + "io" "strconv" "strings" "go.opencensus.io/stats" "go.opencensus.io/stats/view" "go.opencensus.io/tag" "google.golang.org/grpc/codes" + + "istio.io/istio/pkg/log" + "istio.io/istio/pkg/mcp/testing/monitoring" ) const ( - typeURL = "typeURL" + collection = "collection" errorCode = "code" errorStr = "error" connectionID = "connectionID" + code = "code" +) + +var ( + scope = log.RegisterScope("mcp", "mcp debugging", 0) ) // StatsContext enables metric collection backed by OpenCensus. type StatsContext struct { - clientsTotal *stats.Int64Measure - requestSizesBytes *stats.Int64Measure - requestAcksTotal *stats.Int64Measure - requestNacksTotal *stats.Int64Measure - sendFailuresTotal *stats.Int64Measure - recvFailuresTotal *stats.Int64Measure + currentStreamCount *stats.Int64Measure + requestSizesBytes *stats.Int64Measure + requestAcksTotal *stats.Int64Measure + requestNacksTotal *stats.Int64Measure + sendFailuresTotal *stats.Int64Measure + recvFailuresTotal *stats.Int64Measure + streamCreateSuccessTotal *stats.Int64Measure views []*view.View } -var _ Reporter = &StatsContext{} +// Reporter is used to report metrics for an MCP server. +type Reporter interface { + io.Closer -// SetClientsTotal updates the current client count to the given argument.
-func (s *StatsContext) SetClientsTotal(clients int64) { - stats.Record(context.Background(), s.clientsTotal.M(clients)) + RecordSendError(err error, code codes.Code) + RecordRecvError(err error, code codes.Code) + RecordRequestSize(collection string, connectionID int64, size int) + RecordRequestAck(collection string, connectionID int64) + RecordRequestNack(collection string, connectionID int64, code codes.Code) + + SetStreamCount(clients int64) + RecordStreamCreateSuccess() +} + +var ( + _ Reporter = &StatsContext{} + + // verify here to avoid import cycles in the pkg/mcp/testing/monitoring package + _ Reporter = &monitoring.InMemoryStatsContext{} +) + +// SetStreamCount updates the current stream count to the given argument. +func (s *StatsContext) SetStreamCount(clients int64) { + stats.Record(context.Background(), s.currentStreamCount.M(clients)) } func (s *StatsContext) recordError(err error, code codes.Code, stat *stats.Int64Measure) { @@ -76,9 +104,9 @@ func (s *StatsContext) RecordRecvError(err error, code codes.Code) { } // RecordRequestSize records the size of a request from a connection for a specific collection. -func (s *StatsContext) RecordRequestSize(typeURL string, connectionID int64, size int) { +func (s *StatsContext) RecordRequestSize(collection string, connectionID int64, size int) { ctx, ctxErr := tag.New(context.Background(), - tag.Insert(TypeURLTag, typeURL), + tag.Insert(CollectionTag, collection), tag.Insert(ConnectionIDTag, strconv.FormatInt(connectionID, 10))) if ctxErr != nil { scope.Errorf("MCP: error creating monitoring context. %v", ctxErr) @@ -87,10 +115,10 @@ func (s *StatsContext) RecordRequestSize(typeURL string, connectionID int64, siz stats.Record(ctx, s.requestSizesBytes.M(int64(size))) } -// RecordRequestAck records an ACK message for a type URL on a connection. -func (s *StatsContext) RecordRequestAck(typeURL string, connectionID int64) { +// RecordRequestAck records an ACK message for a collection on a connection.
+func (s *StatsContext) RecordRequestAck(collection string, connectionID int64) { ctx, ctxErr := tag.New(context.Background(), - tag.Insert(TypeURLTag, typeURL), + tag.Insert(CollectionTag, collection), tag.Insert(ConnectionIDTag, strconv.FormatInt(connectionID, 10))) if ctxErr != nil { scope.Errorf("MCP: error creating monitoring context. %v", ctxErr) @@ -99,11 +127,12 @@ func (s *StatsContext) RecordRequestAck(typeURL string, connectionID int64) { stats.Record(ctx, s.requestAcksTotal.M(1)) } -// RecordRequestNack records a NACK message for a type URL on a connection. -func (s *StatsContext) RecordRequestNack(typeURL string, connectionID int64) { +// RecordRequestNack records a NACK message for a collection on a connection. +func (s *StatsContext) RecordRequestNack(collection string, connectionID int64, code codes.Code) { ctx, ctxErr := tag.New(context.Background(), - tag.Insert(TypeURLTag, typeURL), - tag.Insert(ConnectionIDTag, strconv.FormatInt(connectionID, 10))) + tag.Insert(CollectionTag, collection), + tag.Insert(ConnectionIDTag, strconv.FormatInt(connectionID, 10)), + tag.Insert(CodeTag, code.String())) if ctxErr != nil { scope.Errorf("MCP: error creating monitoring context. %v", ctxErr) return @@ -111,6 +140,11 @@ func (s *StatsContext) RecordRequestNack(typeURL string, connectionID int64) { stats.Record(ctx, s.requestNacksTotal.M(1)) } +// RecordStreamCreateSuccess records a successful stream connection. +func (s *StatsContext) RecordStreamCreateSuccess() { + stats.Record(context.Background(), s.streamCreateSuccessTotal.M(1)) +} + func (s *StatsContext) Close() error { view.Unregister(s.views...) return nil @@ -121,54 +155,60 @@ func (s *StatsContext) Close() error { // be a non-empty string. 
func NewStatsContext(prefix string) *StatsContext { if len(prefix) == 0 { - panic("must specify prefix for MCP server monitoring.") + panic("must specify prefix for MCP monitoring.") } else if !strings.HasSuffix(prefix, "/") { prefix += "/" } ctx := &StatsContext{ - // ClientsTotal is a measure of the number of connected clients. - clientsTotal: stats.Int64( - prefix+"mcp/server/clients_total", - "The number of clients currently connected.", + // currentStreamCount is a measure of the number of currently connected streams. + currentStreamCount: stats.Int64( + prefix+"clients_total", + "The number of streams currently connected.", stats.UnitDimensionless), // RequestSizesBytes is a distribution of incoming message sizes. requestSizesBytes: stats.Int64( - prefix+"mcp/server/message_sizes_bytes", + prefix+"message_sizes_bytes", "Size of messages received from clients.", stats.UnitBytes), // RequestAcksTotal is a measure of the number of received ACK requests. requestAcksTotal: stats.Int64( - prefix+"mcp/server/request_acks_total", - "The number of request acks received by the server.", + prefix+"request_acks_total", + "The number of request acks received by the source.", stats.UnitDimensionless), // RequestNacksTotal is a measure of the number of received NACK requests. requestNacksTotal: stats.Int64( - prefix+"mcp/server/request_nacks_total", - "The number of request nacks received by the server.", + prefix+"request_nacks_total", + "The number of request nacks received by the source.", stats.UnitDimensionless), // SendFailuresTotal is a measure of the number of network send failures. sendFailuresTotal: stats.Int64( - prefix+"mcp/server/send_failures_total", - "The number of send failures in the server.", + prefix+"send_failures_total", + "The number of send failures in the source.", stats.UnitDimensionless), // RecvFailuresTotal is a measure of the number of network recv failures.
recvFailuresTotal: stats.Int64( - prefix+"mcp/server/recv_failures_total", - "The number of recv failures in the server.", + prefix+"recv_failures_total", + "The number of recv failures in the source.", + stats.UnitDimensionless), + + streamCreateSuccessTotal: stats.Int64( + prefix+"reconnections", + "The number of times the sink has reconnected.", stats.UnitDimensionless), } - ctx.addView(ctx.clientsTotal, []tag.Key{}, view.LastValue()) + ctx.addView(ctx.currentStreamCount, []tag.Key{}, view.LastValue()) ctx.addView(ctx.requestSizesBytes, []tag.Key{ConnectionIDTag}, view.Distribution(byteBuckets...)) - ctx.addView(ctx.requestAcksTotal, []tag.Key{TypeURLTag}, view.Count()) - ctx.addView(ctx.requestNacksTotal, []tag.Key{ErrorCodeTag, TypeURLTag}, view.Count()) + ctx.addView(ctx.requestAcksTotal, []tag.Key{CollectionTag, ConnectionIDTag}, view.Count()) + ctx.addView(ctx.requestNacksTotal, []tag.Key{CollectionTag, ConnectionIDTag, CodeTag}, view.Count()) ctx.addView(ctx.sendFailuresTotal, []tag.Key{ErrorCodeTag, ErrorTag}, view.Count()) ctx.addView(ctx.recvFailuresTotal, []tag.Key{ErrorCodeTag, ErrorTag}, view.Count()) + ctx.addView(ctx.streamCreateSuccessTotal, []tag.Key{}, view.Count()) return ctx } @@ -190,14 +230,16 @@ func (s *StatsContext) addView(measure stats.Measure, keys []tag.Key, aggregatio } var ( - // TypeURLTag holds the type URL for the context. - TypeURLTag tag.Key + // CollectionTag holds the collection for the context. + CollectionTag tag.Key // ErrorCodeTag holds the gRPC error code for the context. ErrorCodeTag tag.Key // ErrorTag holds the error string for the context. ErrorTag tag.Key // ConnectionIDTag holds the connection ID for the context. ConnectionIDTag tag.Key + // CodeTag holds the status code for the context. 
+ CodeTag tag.Key // buckets are powers of 4 byteBuckets = []float64{1, 4, 16, 64, 256, 1024, 4096, 16384, 65536, 262144, 1048576, 4194304, 16777216, 67108864, 268435456, 1073741824} @@ -205,7 +247,7 @@ var ( func init() { var err error - if TypeURLTag, err = tag.NewKey(typeURL); err != nil { + if CollectionTag, err = tag.NewKey(collection); err != nil { panic(err) } if ErrorCodeTag, err = tag.NewKey(errorCode); err != nil { @@ -217,4 +259,7 @@ func init() { if ConnectionIDTag, err = tag.NewKey(connectionID); err != nil { panic(err) } + if CodeTag, err = tag.NewKey(code); err != nil { + panic(err) + } } diff --git a/pkg/mcp/server/listchecker.go b/pkg/mcp/server/listchecker.go index 6c81f862f07d..3eccc97bd9f5 100644 --- a/pkg/mcp/server/listchecker.go +++ b/pkg/mcp/server/listchecker.go @@ -21,7 +21,9 @@ import ( "sort" "strings" "sync" + "time" + "golang.org/x/time/rate" "google.golang.org/grpc/credentials" "istio.io/istio/security/pkg/pki/util" @@ -31,7 +33,7 @@ import ( type AllowAllChecker struct{} // NewAllowAllChecker creates a new AllowAllChecker. -func NewAllowAllChecker() AuthChecker { return &AllowAllChecker{} } +func NewAllowAllChecker() *AllowAllChecker { return &AllowAllChecker{} } // Check is an implementation of AuthChecker.Check that allows all check requests. func (*AllowAllChecker) Check(credentials.AuthInfo) error { return nil } @@ -53,18 +55,45 @@ type ListAuthChecker struct { idsMutex sync.RWMutex ids map[string]struct{} + checkFailureRecordLimiter *rate.Limiter + failureCountSinceLastRecord int + // overridable functions for testing extractIDsFn func(exts []pkix.Extension) ([]string, error) } -var _ AuthChecker = &ListAuthChecker{} +type ListAuthCheckerOptions struct { + // For the purposes of logging rate limiting authz failures, this controls how + // many authz failures are logged in a burst every AuthzFailureLogFreq. 
+ AuthzFailureLogBurstSize int + + // For the purposes of logging rate limiting authz failures, this controls how + // frequently bursts of authz failures are logged. + AuthzFailureLogFreq time.Duration + + // AuthMode indicates the list checking mode + AuthMode AuthListMode +} + +func DefaultListAuthCheckerOptions() *ListAuthCheckerOptions { + return &ListAuthCheckerOptions{ + AuthzFailureLogBurstSize: 1, + AuthzFailureLogFreq: time.Minute, + AuthMode: AuthWhiteList, + } +} // NewListAuthChecker returns a new instance of ListAuthChecker -func NewListAuthChecker() *ListAuthChecker { +func NewListAuthChecker(options *ListAuthCheckerOptions) *ListAuthChecker { + // Initialize record limiter for the auth checker. + limit := rate.Every(options.AuthzFailureLogFreq) + limiter := rate.NewLimiter(limit, options.AuthzFailureLogBurstSize) + return &ListAuthChecker{ - mode: AuthWhiteList, - ids: make(map[string]struct{}), - extractIDsFn: util.ExtractIDs, + mode: options.AuthMode, + ids: make(map[string]struct{}), + extractIDsFn: util.ExtractIDs, + checkFailureRecordLimiter: limiter, } } @@ -109,15 +138,10 @@ func (l *ListAuthChecker) Allowed(id string) bool { l.idsMutex.RLock() defer l.idsMutex.RUnlock() - switch l.mode { - case AuthWhiteList: + if l.mode == AuthWhiteList { return l.contains(id) - case AuthBlackList: - return !l.contains(id) - default: - scope.Errorf("Unrecognized list auth check mode encountered: %v", l.mode) - return false } + return !l.contains(id) // AuthBlackList } func (l *ListAuthChecker) contains(id string) bool { @@ -142,8 +166,6 @@ func (l *ListAuthChecker) String() string { result += "Mode: whitelist\n" case AuthBlackList: result += "Mode: blacklist\n" - default: - result += "Mode: unknown\n" } result += "Known ids:\n" @@ -152,8 +174,21 @@ func (l *ListAuthChecker) String() string { return result } -// Check is an implementation of AuthChecker.Check. 
func (l *ListAuthChecker) Check(authInfo credentials.AuthInfo) error { + if err := l.check(authInfo); err != nil { + l.failureCountSinceLastRecord++ + if l.checkFailureRecordLimiter.Allow() { + scope.Warnf("NewConnection: auth check failed: %v (repeated %d times).", + err, l.failureCountSinceLastRecord) + l.failureCountSinceLastRecord = 0 + } + return err + } + return nil +} + +// check implements the core logic of AuthChecker.Check. +func (l *ListAuthChecker) check(authInfo credentials.AuthInfo) error { l.idsMutex.RLock() defer l.idsMutex.RUnlock() @@ -188,22 +223,14 @@ func (l *ListAuthChecker) Check(authInfo credentials.AuthInfo) error { case AuthBlackList: scope.Infof("Blocking access from peer with id: %s", id) return fmt.Errorf("id is blacklisted: %s", id) - default: - scope.Errorf("unrecognized mode in listchecker: %v", l.mode) - return fmt.Errorf("unrecognized mode in listchecker: %v", l.mode) } } } } } - switch l.mode { - case AuthWhiteList: + if l.mode == AuthWhiteList { return errors.New("no allowed identity found in peer's authentication info") - case AuthBlackList: - return nil - default: - scope.Errorf("unrecognized mode in listchecker: %v", l.mode) - return fmt.Errorf("unrecognized mode in listchecker: %v", l.mode) } + return nil // AuthBlackList } diff --git a/pkg/mcp/server/listchecker_test.go b/pkg/mcp/server/listchecker_test.go index 2ad4cd1f5b79..1ea8ba30a4f3 100644 --- a/pkg/mcp/server/listchecker_test.go +++ b/pkg/mcp/server/listchecker_test.go @@ -156,7 +156,7 @@ func TestListAuthChecker(t *testing.T) { for _, testCase := range testCases { t.Run(testCase.name, func(t *testing.T) { - c := NewListAuthChecker() + c := NewListAuthChecker(DefaultListAuthCheckerOptions()) c.SetMode(testCase.mode) if testCase.extractIDsFn != nil { c.extractIDsFn = testCase.extractIDsFn @@ -239,11 +239,12 @@ func TestListAuthChecker_Allowed(t *testing.T) { for i, c := range cases { t.Run(fmt.Sprintf("%d", i), func(t *testing.T) { - checker := NewListAuthChecker() +
options := DefaultListAuthCheckerOptions() + options.AuthMode = c.mode + checker := NewListAuthChecker(options) if c.id != "" { checker.Set(c.id) } - checker.SetMode(c.mode) result := checker.Allowed(c.testid) if result != c.expect { @@ -260,8 +261,9 @@ func TestListAuthChecker_String(t *testing.T) { } }() - c := NewListAuthChecker() - c.SetMode(AuthBlackList) + options := DefaultListAuthCheckerOptions() + options.AuthMode = AuthBlackList + c := NewListAuthChecker(options) c.Set("1", "2", "3") diff --git a/pkg/mcp/server/server.go b/pkg/mcp/server/server.go index 03edaa5e3e7d..2024456e67e1 100644 --- a/pkg/mcp/server/server.go +++ b/pkg/mcp/server/server.go @@ -17,7 +17,6 @@ package server import ( "fmt" "io" - "os" "strconv" "sync" "sync/atomic" @@ -31,105 +30,44 @@ import ( mcp "istio.io/api/mcp/v1alpha1" "istio.io/istio/pkg/log" + "istio.io/istio/pkg/mcp/env" + "istio.io/istio/pkg/mcp/monitoring" + "istio.io/istio/pkg/mcp/source" ) -var ( - scope = log.RegisterScope("mcp", "mcp debugging", 0) -) - -func envInt(name string, def int) int { - if v := os.Getenv(name); v != "" { - if a, err := strconv.Atoi(v); err == nil { - return a - } - } - return def -} - -func envDur(name string, def time.Duration) time.Duration { - if v := os.Getenv(name); v != "" { - if d, err := time.ParseDuration(v); err == nil { - return d - } - } - return def -} +var scope = log.RegisterScope("mcp", "mcp debugging", 0) var ( // For the purposes of rate limiting new connections, this controls how many // new connections are allowed as a burst every NEW_CONNECTION_FREQ. - newConnectionBurstSize = envInt("NEW_CONNECTION_BURST_SIZE", 10) + newConnectionBurstSize = env.Integer("NEW_CONNECTION_BURST_SIZE", 10) // For the purposes of rate limiting new connections, this controls how // frequently new bursts of connections are allowed. 
- newConnectionFreq = envDur("NEW_CONNECTION_FREQ", 10*time.Millisecond) - - // For the purposes of logging rate limiting authz failures, this controls how - // many authz failus are logs as a burst every AUTHZ_FAILURE_LOG_FREQ. - authzFailureLogBurstSize = envInt("AUTHZ_FAILURE_LOG_BURST_SIZE", 1) - - // For the purposes of logging rate limiting authz failures, this controls how - // frequently bursts of authz failures are logged. - authzFailureLogFreq = envDur("AUTHZ_FAILURE_LOG_FREQ", time.Minute) + newConnectionFreq = env.Duration("NEW_CONNECTION_FREQ", 10*time.Millisecond) // Controls the rate limit frequency for re-pushing previously NACK'd pushes // for each type. - nackLimitFreq = envDur("NACK_LIMIT_FREQ", 1*time.Second) + nackLimitFreq = env.Duration("NACK_LIMIT_FREQ", 1*time.Second) // Controls the delay for re-retrying a configuration push if the previous // attempt was not possible, e.g. the lower-level serving layer was busy. This // should typically be set fairly small (order of milliseconds). - retryPushDelay = envDur("RETRY_PUSH_DELAY", 10*time.Millisecond) -) - -// WatchResponse contains a versioned collection of pre-serialized resources. -type WatchResponse struct { - TypeURL string - - // Version of the resources in the response for the given - // type. The client responses with this version in subsequent - // requests as an acknowledgment. - Version string - - // Enveloped resources to be included in the response. - Envelopes []*mcp.Envelope -} - -type ( - // CancelWatchFunc allows the consumer to cancel a previous watch, - // terminating the watch for the request. - CancelWatchFunc func() - - // PushResponseFunc allows the consumer to push a response for the - // corresponding watch. - PushResponseFunc func(*WatchResponse) + retryPushDelay = env.Duration("RETRY_PUSH_DELAY", 10*time.Millisecond) ) -// Watcher requests watches for configuration resources by node, last -// applied version, and type. 
The watch should send the responses when -// they are ready. The watch can be canceled by the consumer. -type Watcher interface { - // Watch returns a new open watch for a non-empty request. - // - // Cancel is an optional function to release resources in the - // producer. It can be called idempotently to cancel and release resources. - Watch(*mcp.MeshConfigRequest, PushResponseFunc) CancelWatchFunc -} - var _ mcp.AggregatedMeshConfigServiceServer = &Server{} // Server implements the Mesh Configuration Protocol (MCP) gRPC server. type Server struct { - watcher Watcher - supportedTypes []string - nextStreamID int64 + watcher source.Watcher + collections []source.CollectionOptions + nextStreamID int64 // for auth check - authCheck AuthChecker - checkFailureRecordLimiter *rate.Limiter - failureCountSinceLastRecord int - connections int64 - reporter Reporter - newConnectionLimiter *rate.Limiter + authCheck AuthChecker + connections int64 + reporter monitoring.Reporter + newConnectionLimiter *rate.Limiter } // AuthChecker is used to check the transport auth info that is associated with each stream. If the function @@ -161,7 +99,7 @@ type watch struct { newPushResponseReadyChan chan newPushResponseState mu sync.Mutex - newPushResponse *WatchResponse + newPushResponse *source.WatchResponse mostRecentNackedVersion string timer *time.Timer closed bool @@ -179,10 +117,10 @@ func (w *watch) delayedPush() { } } -// Try to schedule pushing a response to the client. The push may +// Try to schedule pushing a response to the node. The push may // be re-scheduled as needed. Additional care is taken to rate limit // re-pushing responses that were previously NACK'd. This avoids flooding -// the client with responses while also allowing transient NACK'd responses +// the node with responses while also allowing transient NACK'd responses // to be retried.
func (w *watch) schedulePush() { w.mu.Lock() @@ -236,7 +174,7 @@ func (w *watch) schedulePush() { // may be re-scheduled as necessary but this should be transparent to the // caller. The caller may provide a nil response to indicate that the watch // should be closed. -func (w *watch) saveResponseAndSchedulePush(response *WatchResponse) { +func (w *watch) saveResponseAndSchedulePush(response *source.WatchResponse) { w.mu.Lock() w.newPushResponse = response if response == nil { @@ -248,7 +186,7 @@ func (w *watch) saveResponseAndSchedulePush(response *WatchResponse) { } // connection maintains per-stream connection state for a -// client. Access to the stream and watch state is serialized +// node. Access to the stream and watch state is serialized // through request and response channels. type connection struct { peerAddr string @@ -262,38 +200,20 @@ type connection struct { requestC chan *mcp.MeshConfigRequest // a channel for receiving incoming requests reqError error // holds error if request channel is closed watches map[string]*watch // per-type watches - watcher Watcher + watcher source.Watcher - reporter Reporter -} - -// Reporter is used to report metrics for an MCP server. -type Reporter interface { - io.Closer - - SetClientsTotal(clients int64) - RecordSendError(err error, code codes.Code) - RecordRecvError(err error, code codes.Code) - RecordRequestSize(typeURL string, connectionID int64, size int) - RecordRequestAck(typeURL string, connectionID int64) - RecordRequestNack(typeURL string, connectionID int64) + reporter monitoring.Reporter } // New creates a new gRPC server that implements the Mesh Configuration Protocol (MCP).
-func New(watcher Watcher, supportedTypes []string, authChecker AuthChecker, reporter Reporter) *Server { +func New(options *source.Options, authChecker AuthChecker) *Server { s := &Server{ - watcher: watcher, - supportedTypes: supportedTypes, + watcher: options.Watcher, + collections: options.CollectionsOptions, authCheck: authChecker, - reporter: reporter, + reporter: options.Reporter, newConnectionLimiter: rate.NewLimiter(rate.Every(newConnectionFreq), newConnectionBurstSize), } - - // Initialize record limiter for the auth checker. - limit := rate.Every(authzFailureLogFreq) - limiter := rate.NewLimiter(limit, authzFailureLogBurstSize) - s.checkFailureRecordLimiter = limiter - return s } @@ -314,11 +234,6 @@ func (s *Server) newConnection(stream mcp.AggregatedMeshConfigService_StreamAggr } if err := s.authCheck.Check(authInfo); err != nil { - s.failureCountSinceLastRecord++ - if s.checkFailureRecordLimiter.Allow() { - scope.Warnf("NewConnection: auth check failed: %v (repeated %d times).", err, s.failureCountSinceLastRecord) - s.failureCountSinceLastRecord = 0 - } return nil, status.Errorf(codes.Unauthenticated, "Authentication failure: %v", err) } @@ -333,16 +248,16 @@ func (s *Server) newConnection(stream mcp.AggregatedMeshConfigService_StreamAggr } var types []string - for _, typeURL := range s.supportedTypes { + for _, collection := range s.collections { w := &watch{ newPushResponseReadyChan: make(chan newPushResponseState, 1), nonceVersionMap: make(map[string]string), } - con.watches[typeURL] = w - types = append(types, typeURL) + con.watches[collection.Name] = w + types = append(types, collection.Name) } - s.reporter.SetClientsTotal(atomic.AddInt64(&s.connections, 1)) + s.reporter.SetStreamCount(atomic.AddInt64(&s.connections, 1)) scope.Debugf("MCP: connection %v: NEW, supported types: %#v", con, types) return con, nil @@ -417,7 +332,7 @@ func (s *Server) StreamAggregatedResources(stream mcp.AggregatedMeshConfigServic func (s *Server) closeConnection(con 
*connection) { con.close() - s.reporter.SetClientsTotal(atomic.AddInt64(&s.connections, -1)) + s.reporter.SetStreamCount(atomic.AddInt64(&s.connections, -1)) } // String implements Stringer.String. @@ -425,15 +340,15 @@ func (con *connection) String() string { return fmt.Sprintf("{addr=%v id=%v}", con.peerAddr, con.id) } -func (con *connection) send(resp *WatchResponse) (string, error) { - envelopes := make([]mcp.Envelope, 0, len(resp.Envelopes)) - for _, envelope := range resp.Envelopes { - envelopes = append(envelopes, *envelope) +func (con *connection) send(resp *source.WatchResponse) (string, error) { + resources := make([]mcp.Resource, 0, len(resp.Resources)) + for _, resource := range resp.Resources { + resources = append(resources, *resource) } msg := &mcp.MeshConfigResponse{ VersionInfo: resp.Version, - Envelopes: envelopes, - TypeUrl: resp.TypeURL, + Resources: resources, + TypeUrl: resp.Collection, } // increment nonce @@ -465,7 +380,12 @@ func (con *connection) receive() { con.reqError = err return } - con.requestC <- req + select { + case con.requestC <- req: + case <-con.stream.Context().Done(): + scope.Debugf("MCP: connection %v: stream done, err=%v", con, con.stream.Context().Err()) + return + } } } @@ -480,11 +400,13 @@ func (con *connection) close() { } func (con *connection) processClientRequest(req *mcp.MeshConfigRequest) error { - con.reporter.RecordRequestSize(req.TypeUrl, con.id, req.Size()) + collection := req.TypeUrl - w, ok := con.watches[req.TypeUrl] + con.reporter.RecordRequestSize(collection, con.id, req.Size()) + + w, ok := con.watches[collection] if !ok { - return status.Errorf(codes.InvalidArgument, "unsupported type_url %q", req.TypeUrl) + return status.Errorf(codes.InvalidArgument, "unsupported collection %q", collection) } // Reset on every request. 
Only the most recent NACK request (per @@ -494,12 +416,13 @@ func (con *connection) processClientRequest(req *mcp.MeshConfigRequest) error { // nonces can be reused across streams; we verify nonce only if nonce is not initialized if w.nonce == "" || w.nonce == req.ResponseNonce { if w.nonce == "" { - scope.Infof("MCP: connection %v: WATCH for %v", con, req.TypeUrl) + scope.Infof("MCP: connection %v: WATCH for %v", con, collection) } else { if req.ErrorDetail != nil { - scope.Warnf("MCP: connection %v: NACK type_url=%v version=%v with nonce=%q (w.nonce=%q) error=%#v", // nolint: lll - con, req.TypeUrl, req.VersionInfo, req.ResponseNonce, w.nonce, req.ErrorDetail) - con.reporter.RecordRequestNack(req.TypeUrl, con.id) + scope.Warnf("MCP: connection %v: NACK collection=%v version=%v with nonce=%q (w.nonce=%q) error=%#v", // nolint: lll + con, collection, req.VersionInfo, req.ResponseNonce, w.nonce, req.ErrorDetail) + + con.reporter.RecordRequestNack(collection, con.id, codes.Code(req.ErrorDetail.Code)) if version, ok := w.nonceVersionMap[req.ResponseNonce]; ok { w.mu.Lock() @@ -507,28 +430,34 @@ func (con *connection) processClientRequest(req *mcp.MeshConfigRequest) error { w.mu.Unlock() } } else { - scope.Debugf("MCP: connection %v ACK type_url=%q version=%q with nonce=%q", - con, req.TypeUrl, req.VersionInfo, req.ResponseNonce) - con.reporter.RecordRequestAck(req.TypeUrl, con.id) + scope.Debugf("MCP: connection %v ACK collection=%q version=%q with nonce=%q", + con, collection, req.VersionInfo, req.ResponseNonce) + con.reporter.RecordRequestAck(collection, con.id) } } if w.cancel != nil { w.cancel() } - w.cancel = con.watcher.Watch(req, w.saveResponseAndSchedulePush) + + sr := &source.Request{ + SinkNode: req.SinkNode, + Collection: collection, + VersionInfo: req.VersionInfo, + } + w.cancel = con.watcher.Watch(sr, w.saveResponseAndSchedulePush) } else { // This error path should not happen! Skip any requests that don't match the // latest watch's nonce value. 
These could be dup requests or out-of-order - // requests from a buggy client. + // requests from a buggy node. if req.ErrorDetail != nil { - scope.Errorf("MCP: connection %v: STALE NACK type_url=%v version=%v with nonce=%q (w.nonce=%q) error=%#v", // nolint: lll - con, req.TypeUrl, req.VersionInfo, req.ResponseNonce, w.nonce, req.ErrorDetail) - con.reporter.RecordRequestNack(req.TypeUrl, con.id) + scope.Errorf("MCP: connection %v: STALE NACK collection=%v version=%v with nonce=%q (w.nonce=%q) error=%#v", // nolint: lll + con, collection, req.VersionInfo, req.ResponseNonce, w.nonce, req.ErrorDetail) + con.reporter.RecordRequestNack(collection, con.id, codes.Code(req.ErrorDetail.Code)) } else { - scope.Errorf("MCP: connection %v: STALE ACK type_url=%v version=%v with nonce=%q (w.nonce=%q)", // nolint: lll - con, req.TypeUrl, req.VersionInfo, req.ResponseNonce, w.nonce) - con.reporter.RecordRequestAck(req.TypeUrl, con.id) + scope.Errorf("MCP: connection %v: STALE ACK collection=%v version=%v with nonce=%q (w.nonce=%q)", // nolint: lll + con, collection, req.VersionInfo, req.ResponseNonce, w.nonce) + con.reporter.RecordRequestAck(collection, con.id) } } @@ -537,7 +466,7 @@ func (con *connection) processClientRequest(req *mcp.MeshConfigRequest) error { return nil } -func (con *connection) pushServerResponse(w *watch, resp *WatchResponse) error { +func (con *connection) pushServerResponse(w *watch, resp *source.WatchResponse) error { nonce, err := con.send(resp) if err != nil { return err diff --git a/pkg/mcp/server/server_test.go b/pkg/mcp/server/server_test.go index f29e003d3c63..28acdaf32c32 100644 --- a/pkg/mcp/server/server_test.go +++ b/pkg/mcp/server/server_test.go @@ -25,8 +25,6 @@ import ( "testing" "time" - "github.com/gogo/protobuf/proto" - "github.com/gogo/protobuf/types" "github.com/gogo/status" "google.golang.org/grpc" "google.golang.org/grpc/codes" @@ -34,27 +32,29 @@ import ( "google.golang.org/grpc/peer" mcp "istio.io/api/mcp/v1alpha1" - mcptestmon 
"istio.io/istio/pkg/mcp/testing/monitoring" + "istio.io/istio/pkg/mcp/internal/test" + "istio.io/istio/pkg/mcp/source" + "istio.io/istio/pkg/mcp/testing/monitoring" ) type mockConfigWatcher struct { mu sync.Mutex counts map[string]int - responses map[string]*WatchResponse + responses map[string]*source.WatchResponse watchesCreated map[string]chan struct{} - watches map[string][]PushResponseFunc + watches map[string][]source.PushResponseFunc closeWatch bool } -func (config *mockConfigWatcher) Watch(req *mcp.MeshConfigRequest, pushResponse PushResponseFunc) CancelWatchFunc { +func (config *mockConfigWatcher) Watch(req *source.Request, pushResponse source.PushResponseFunc) source.CancelWatchFunc { config.mu.Lock() defer config.mu.Unlock() - config.counts[req.TypeUrl]++ + config.counts[req.Collection]++ - if rsp, ok := config.responses[req.TypeUrl]; ok { - rsp.TypeURL = req.TypeUrl + if rsp, ok := config.responses[req.Collection]; ok { + rsp.Collection = req.Collection pushResponse(rsp) return nil } else if config.closeWatch { @@ -62,21 +62,21 @@ func (config *mockConfigWatcher) Watch(req *mcp.MeshConfigRequest, pushResponse return nil } else { // save open watch channel for later - config.watches[req.TypeUrl] = append(config.watches[req.TypeUrl], pushResponse) + config.watches[req.Collection] = append(config.watches[req.Collection], pushResponse) } - if ch, ok := config.watchesCreated[req.TypeUrl]; ok { + if ch, ok := config.watchesCreated[req.Collection]; ok { ch <- struct{}{} } return func() {} } -func (config *mockConfigWatcher) setResponse(response *WatchResponse) { +func (config *mockConfigWatcher) setResponse(response *source.WatchResponse) { config.mu.Lock() defer config.mu.Unlock() - typeURL := response.TypeURL + typeURL := response.Collection if watches, ok := config.watches[typeURL]; ok { for _, watch := range watches { @@ -91,8 +91,8 @@ func makeMockConfigWatcher() *mockConfigWatcher { return &mockConfigWatcher{ counts: make(map[string]int), watchesCreated: 
make(map[string]chan struct{}), - watches: make(map[string][]PushResponseFunc), - responses: make(map[string]*WatchResponse), + watches: make(map[string][]source.PushResponseFunc), + responses: make(map[string]*source.WatchResponse), } } @@ -117,19 +117,13 @@ func (stream *mockStream) Send(resp *mcp.MeshConfigResponse) error { stream.t.Error("VersionInfo => got none, want non-empty") } // check resources are non-empty - if len(resp.Envelopes) == 0 { - stream.t.Error("Envelopes => got none, want non-empty") + if len(resp.Resources) == 0 { + stream.t.Error("Resources => got none, want non-empty") } // check that type URL matches in resources if resp.TypeUrl == "" { stream.t.Error("TypeUrl => got none, want non-empty") } - for _, envelope := range resp.Envelopes { - got := envelope.Resource.TypeUrl - if got != resp.TypeUrl { - stream.t.Errorf("TypeUrl => got %q, want %q", got, resp.TypeUrl) - } - } stream.sent <- resp if stream.sendError { return errors.New("send error") @@ -161,89 +155,28 @@ func makeMockStream(t *testing.T) *mockStream { } } -// fake protobuf types - -type fakeTypeBase struct{ Info string } - -func (f fakeTypeBase) Reset() {} -func (f fakeTypeBase) String() string { return f.Info } -func (f fakeTypeBase) ProtoMessage() {} -func (f fakeTypeBase) Marshal() ([]byte, error) { return []byte(f.Info), nil } - -type fakeType0 struct{ fakeTypeBase } -type fakeType1 struct{ fakeTypeBase } -type fakeType2 struct{ fakeTypeBase } - -const ( - typePrefix = "type.googleapis.com/" - fakeType0Prefix = "istio.io.galley.pkg.mcp.server.fakeType0" - fakeType1Prefix = "istio.io.galley.pkg.mcp.server.fakeType1" - fakeType2Prefix = "istio.io.galley.pkg.mcp.server.fakeType2" - - fakeType0TypeURL = typePrefix + fakeType0Prefix - fakeType1TypeURL = typePrefix + fakeType1Prefix - fakeType2TypeURL = typePrefix + fakeType2Prefix -) - -func mustMarshalAny(pb proto.Message) *types.Any { - a, err := types.MarshalAny(pb) - if err != nil { - panic(err.Error()) - } - return a -} - 
-func init() { - proto.RegisterType((*fakeType0)(nil), fakeType0Prefix) - proto.RegisterType((*fakeType1)(nil), fakeType1Prefix) - proto.RegisterType((*fakeType2)(nil), fakeType2Prefix) - - fakeEnvelope0 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f0"}, - Resource: mustMarshalAny(&fakeType0{fakeTypeBase{"f0"}}), - } - fakeEnvelope1 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f1"}, - Resource: mustMarshalAny(&fakeType1{fakeTypeBase{"f1"}}), - } - fakeEnvelope2 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f2"}, - Resource: mustMarshalAny(&fakeType2{fakeTypeBase{"f2"}}), - } -} - -var ( - client = &mcp.Client{ - Id: "test-id", - } - - fakeEnvelope0 *mcp.Envelope - fakeEnvelope1 *mcp.Envelope - fakeEnvelope2 *mcp.Envelope - - WatchResponseTypes = []string{ - fakeType0TypeURL, - fakeType1TypeURL, - fakeType2TypeURL, - } -) - func TestMultipleRequests(t *testing.T) { config := makeMockConfigWatcher() - config.setResponse(&WatchResponse{ - TypeURL: fakeType0TypeURL, - Version: "1", - Envelopes: []*mcp.Envelope{fakeEnvelope0}, + + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType0Collection, + Version: "1", + Resources: []*mcp.Resource{test.Type0A[0].Resource}, }) // make a request stream := makeMockStream(t) stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, } - s := New(config, WatchResponseTypes, NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) go func() { if err := s.StreamAggregatedResources(stream); err != nil { t.Errorf("Stream() => got %v, want no error", err) @@ -255,7 +188,7 @@ func TestMultipleRequests(t *testing.T) { // check a response select { case rsp = <-stream.sent: - if want := 
map[string]int{fakeType0TypeURL: 1}; !reflect.DeepEqual(want, config.counts) { + if want := map[string]int{test.FakeType0Collection: 1}; !reflect.DeepEqual(want, config.counts) { t.Errorf("watch counts => got %v, want %v", config.counts, want) } case <-time.After(time.Second): @@ -263,8 +196,8 @@ func TestMultipleRequests(t *testing.T) { } stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, VersionInfo: rsp.VersionInfo, ResponseNonce: rsp.Nonce, } @@ -272,7 +205,7 @@ func TestMultipleRequests(t *testing.T) { // check a response select { case <-stream.sent: - if want := map[string]int{fakeType0TypeURL: 2}; !reflect.DeepEqual(want, config.counts) { + if want := map[string]int{test.FakeType0Collection: 2}; !reflect.DeepEqual(want, config.counts) { t.Errorf("watch counts => got %v, want %v", config.counts, want) } case <-time.After(time.Second): @@ -280,23 +213,38 @@ func TestMultipleRequests(t *testing.T) { } } +type fakeAuthChecker struct { + err error +} + +func (f *fakeAuthChecker) Check(authInfo credentials.AuthInfo) error { + return f.err +} + func TestAuthCheck_Failure(t *testing.T) { config := makeMockConfigWatcher() - config.setResponse(&WatchResponse{ - TypeURL: fakeType0TypeURL, - Version: "1", - Envelopes: []*mcp.Envelope{fakeEnvelope0}, + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType0Collection, + Version: "1", + Resources: []*mcp.Resource{test.Type0A[0].Resource}, }) // make a request stream := makeMockStream(t) stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, } - checker := NewListAuthChecker() - s := New(config, WatchResponseTypes, checker, mcptestmon.NewInMemoryServerStatsContext()) + checker := test.NewFakeAuthChecker() + checker.AllowError = errors.New("disallow") + options := &source.Options{ + Watcher: config, + CollectionsOptions: 
source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, checker) + wg := sync.WaitGroup{} wg.Add(1) go func() { @@ -309,28 +257,28 @@ func TestAuthCheck_Failure(t *testing.T) { wg.Wait() } -type fakeAuthChecker struct{} - -func (f *fakeAuthChecker) Check(authInfo credentials.AuthInfo) error { - return nil -} - func TestAuthCheck_Success(t *testing.T) { config := makeMockConfigWatcher() - config.setResponse(&WatchResponse{ - TypeURL: fakeType0TypeURL, - Version: "1", - Envelopes: []*mcp.Envelope{fakeEnvelope0}, + + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType0Collection, + Version: "1", + Resources: []*mcp.Resource{test.Type0A[0].Resource}, }) // make a request stream := makeMockStream(t) stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, } - s := New(config, WatchResponseTypes, &fakeAuthChecker{}, mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) go func() { if err := s.StreamAggregatedResources(stream); err != nil { t.Errorf("Stream() => got %v, want no error", err) @@ -342,7 +290,7 @@ func TestAuthCheck_Success(t *testing.T) { // check a response select { case rsp = <-stream.sent: - if want := map[string]int{fakeType0TypeURL: 1}; !reflect.DeepEqual(want, config.counts) { + if want := map[string]int{test.FakeType0Collection: 1}; !reflect.DeepEqual(want, config.counts) { t.Errorf("watch counts => got %v, want %v", config.counts, want) } case <-time.After(time.Second): @@ -350,8 +298,8 @@ func TestAuthCheck_Success(t *testing.T) { } stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + 
TypeUrl: test.FakeType0Collection, VersionInfo: rsp.VersionInfo, ResponseNonce: rsp.Nonce, } @@ -359,7 +307,7 @@ func TestAuthCheck_Success(t *testing.T) { // check a response select { case <-stream.sent: - if want := map[string]int{fakeType0TypeURL: 2}; !reflect.DeepEqual(want, config.counts) { + if want := map[string]int{test.FakeType0Collection: 2}; !reflect.DeepEqual(want, config.counts) { t.Errorf("watch counts => got %v, want %v", config.counts, want) } case <-time.After(time.Second): @@ -369,34 +317,39 @@ func TestAuthCheck_Success(t *testing.T) { func TestWatchBeforeResponsesAvailable(t *testing.T) { config := makeMockConfigWatcher() - config.watchesCreated[fakeType0TypeURL] = make(chan struct{}) + config.watchesCreated[test.FakeType0Collection] = make(chan struct{}) // make a request stream := makeMockStream(t) stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, } - s := New(config, WatchResponseTypes, NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) go func() { if err := s.StreamAggregatedResources(stream); err != nil { t.Errorf("Stream() => got %v, want no error", err) } }() - <-config.watchesCreated[fakeType0TypeURL] - config.setResponse(&WatchResponse{ - TypeURL: fakeType0TypeURL, - Version: "1", - Envelopes: []*mcp.Envelope{fakeEnvelope0}, + <-config.watchesCreated[test.FakeType0Collection] + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType0Collection, + Version: "1", + Resources: []*mcp.Resource{test.Type0A[0].Resource}, }) // check a response select { case <-stream.sent: close(stream.recv) - if want := map[string]int{fakeType0TypeURL: 1}; !reflect.DeepEqual(want, config.counts) { + if want 
:= map[string]int{test.FakeType0Collection: 1}; !reflect.DeepEqual(want, config.counts) { t.Errorf("watch counts => got %v, want %v", config.counts, want) } case <-time.After(time.Second): @@ -411,12 +364,17 @@ func TestWatchClosed(t *testing.T) { // make a request stream := makeMockStream(t) stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, } // check that response fails since watch gets closed - s := New(config, WatchResponseTypes, NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) if err := s.StreamAggregatedResources(stream); err == nil { t.Error("Stream() => got no error, want watch failed") } @@ -425,21 +383,26 @@ func TestWatchClosed(t *testing.T) { func TestSendError(t *testing.T) { config := makeMockConfigWatcher() - config.setResponse(&WatchResponse{ - TypeURL: fakeType0TypeURL, - Version: "1", - Envelopes: []*mcp.Envelope{fakeEnvelope0}, + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType0Collection, + Version: "1", + Resources: []*mcp.Resource{test.Type0A[0].Resource}, }) // make a request stream := makeMockStream(t) stream.sendError = true stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, } - s := New(config, WatchResponseTypes, NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) // check that response fails since watch gets closed if err := 
s.StreamAggregatedResources(stream); err == nil { t.Error("Stream() => got no error, want send error") @@ -450,22 +413,28 @@ func TestSendError(t *testing.T) { func TestReceiveError(t *testing.T) { config := makeMockConfigWatcher() - config.setResponse(&WatchResponse{ - TypeURL: fakeType0TypeURL, - Version: "1", - Envelopes: []*mcp.Envelope{fakeEnvelope0}, + + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType0Collection, + Version: "1", + Resources: []*mcp.Resource{test.Type0A[0].Resource}, }) // make a request stream := makeMockStream(t) stream.recvError = status.Error(codes.Internal, "internal receive error") stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, } // check that response fails since watch gets closed - s := New(config, WatchResponseTypes, NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) if err := s.StreamAggregatedResources(stream); err == nil { t.Error("Stream() => got no error, want send error") } @@ -475,21 +444,27 @@ func TestReceiveError(t *testing.T) { func TestUnsupportedTypeError(t *testing.T) { config := makeMockConfigWatcher() - config.setResponse(&WatchResponse{ - TypeURL: "unsupportedType", - Version: "1", - Envelopes: []*mcp.Envelope{fakeEnvelope0}, + + config.setResponse(&source.WatchResponse{ + Collection: "unsupportedCollection", + Version: "1", + Resources: []*mcp.Resource{test.Type0A[0].Resource}, }) // make a request stream := makeMockStream(t) stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: "unsupportedtype", + SinkNode: test.Node, + TypeUrl: "unsupportedtype", } // check that response fails since watch gets closed - s := New(config, 
WatchResponseTypes, NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) if err := s.StreamAggregatedResources(stream); err == nil { t.Error("Stream() => got no error, want send error") } @@ -499,25 +474,31 @@ func TestUnsupportedTypeError(t *testing.T) { func TestStaleNonce(t *testing.T) { config := makeMockConfigWatcher() - config.setResponse(&WatchResponse{ - TypeURL: fakeType0TypeURL, - Version: "1", - Envelopes: []*mcp.Envelope{fakeEnvelope0}, + + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType0Collection, + Version: "1", + Resources: []*mcp.Resource{test.Type0A[0].Resource}, }) stream := makeMockStream(t) stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, } stop := make(chan struct{}) - s := New(config, WatchResponseTypes, NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) go func() { if err := s.StreamAggregatedResources(stream); err != nil { t.Errorf("StreamAggregatedResources() => got %v, want no error", err) } // should be two watches called - if want := map[string]int{fakeType0TypeURL: 2}; !reflect.DeepEqual(want, config.counts) { + if want := map[string]int{test.FakeType0Collection: 2}; !reflect.DeepEqual(want, config.counts) { t.Errorf("watch counts => got %v, want %v", config.counts, want) } close(stop) @@ -526,15 +507,15 @@ func TestStaleNonce(t *testing.T) { case <-stream.sent: // stale request stream.recv <- &mcp.MeshConfigRequest{ - Client: 
client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, ResponseNonce: "xyz", } // fresh request stream.recv <- &mcp.MeshConfigRequest{ VersionInfo: "1", - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, ResponseNonce: "1", } close(stream.recv) @@ -546,37 +527,43 @@ func TestStaleNonce(t *testing.T) { func TestAggregatedHandlers(t *testing.T) { config := makeMockConfigWatcher() - config.setResponse(&WatchResponse{ - TypeURL: fakeType0TypeURL, - Version: "1", - Envelopes: []*mcp.Envelope{fakeEnvelope0}, + + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType0Collection, + Version: "1", + Resources: []*mcp.Resource{test.Type0A[0].Resource}, }) - config.setResponse(&WatchResponse{ - TypeURL: fakeType1TypeURL, - Version: "2", - Envelopes: []*mcp.Envelope{fakeEnvelope1}, + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType1Collection, + Version: "2", + Resources: []*mcp.Resource{test.Type1A[0].Resource}, }) - config.setResponse(&WatchResponse{ - TypeURL: fakeType2TypeURL, - Version: "3", - Envelopes: []*mcp.Envelope{fakeEnvelope2}, + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType2Collection, + Version: "3", + Resources: []*mcp.Resource{test.Type2A[0].Resource}, }) stream := makeMockStream(t) stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType0TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType0Collection, } stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType1TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType1Collection, } stream.recv <- &mcp.MeshConfigRequest{ - Client: client, - TypeUrl: fakeType2TypeURL, + SinkNode: test.Node, + TypeUrl: test.FakeType2Collection, } - s := New(config, WatchResponseTypes, NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: 
source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) go func() { if err := s.StreamAggregatedResources(stream); err != nil { t.Errorf("StreamAggregatedResources() => got %v, want no error", err) @@ -584,9 +571,9 @@ func TestAggregatedHandlers(t *testing.T) { }() want := map[string]int{ - fakeType0TypeURL: 1, - fakeType1TypeURL: 1, - fakeType2TypeURL: 1, + test.FakeType0Collection: 1, + test.FakeType1Collection: 1, + test.FakeType2Collection: 1, } count := 0 @@ -612,9 +599,14 @@ func TestAggregateRequestType(t *testing.T) { config := makeMockConfigWatcher() stream := makeMockStream(t) - stream.recv <- &mcp.MeshConfigRequest{Client: client} + stream.recv <- &mcp.MeshConfigRequest{SinkNode: test.Node} - s := New(config, WatchResponseTypes, NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) if err := s.StreamAggregatedResources(stream); err == nil { t.Error("StreamAggregatedResources() => got nil, want an error") } @@ -629,7 +621,12 @@ func TestRateLimitNACK(t *testing.T) { }() config := makeMockConfigWatcher() - s := New(config, WatchResponseTypes, NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + options := &source.Options{ + Watcher: config, + CollectionsOptions: source.CollectionOptionsFromSlice(test.SupportedCollections), + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := New(options, test.NewFakeAuthChecker()) stream := makeMockStream(t) go func() { @@ -692,7 +689,7 @@ func TestRateLimitNACK(t *testing.T) { sendRequest := func(typeURL, nonce, version string, err error) { req := &mcp.MeshConfigRequest{ - Client: client, + SinkNode: test.Node, TypeUrl: typeURL, ResponseNonce: nonce, 
VersionInfo: version, @@ -705,7 +702,7 @@ func TestRateLimitNACK(t *testing.T) { } // initial watch request - sendRequest(fakeType0TypeURL, "", "", nil) + sendRequest(test.FakeType0Collection, "", "", nil) nonces := make(map[string]bool) var prevNonce string @@ -717,10 +714,11 @@ func TestRateLimitNACK(t *testing.T) { for { if first { first = false - config.setResponse(&WatchResponse{ - TypeURL: fakeType0TypeURL, - Version: s.pushedVersion, - Envelopes: []*mcp.Envelope{fakeEnvelope0}, + + config.setResponse(&source.WatchResponse{ + Collection: test.FakeType0Collection, + Version: s.pushedVersion, + Resources: []*mcp.Resource{test.Type0A[0].Resource}, }) } @@ -742,7 +740,7 @@ func TestRateLimitNACK(t *testing.T) { prevNonce = response.Nonce - sendRequest(fakeType0TypeURL, prevNonce, s.ackedVersion, s.errDetails) + sendRequest(test.FakeType0Collection, prevNonce, s.ackedVersion, s.errDetails) if time.Now().After(finish) { break diff --git a/pkg/mcp/sink/journal.go b/pkg/mcp/sink/journal.go new file mode 100644 index 000000000000..3fe437cabbec --- /dev/null +++ b/pkg/mcp/sink/journal.go @@ -0,0 +1,141 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package sink + +import ( + "sync" + "time" + + "github.com/gogo/googleapis/google/rpc" + + mcp "istio.io/api/mcp/v1alpha1" +) + +// JournaledRequest is a common structure for journaling +// both mcp.MeshConfigRequest and mcp.RequestResources. 
It can be replaced with +// mcp.RequestResources once we fully switch over to the new API. +type JournaledRequest struct { + VersionInfo string + Collection string + ResponseNonce string + ErrorDetail *rpc.Status + SinkNode *mcp.SinkNode +} + +func (jr *JournaledRequest) ToMeshConfigRequest() *mcp.MeshConfigRequest { + return &mcp.MeshConfigRequest{ + TypeUrl: jr.Collection, + VersionInfo: jr.VersionInfo, + ResponseNonce: jr.ResponseNonce, + ErrorDetail: jr.ErrorDetail, + SinkNode: jr.SinkNode, + } +} + +// RecentRequestInfo is metadata about a request that the client has sent. +type RecentRequestInfo struct { + Time time.Time + Request *JournaledRequest +} + +// Acked indicates whether the message was an ack or not. +func (r RecentRequestInfo) Acked() bool { + return r.Request.ErrorDetail == nil +} + +const journalDepth = 32 + +// RecentRequestsJournal captures debug metadata about the latest requests that were sent by this client. +type RecentRequestsJournal struct { + itemsMutex sync.Mutex + items []RecentRequestInfo + next int + size int +} + +func NewRequestJournal() *RecentRequestsJournal { + return &RecentRequestsJournal{ + items: make([]RecentRequestInfo, journalDepth), + } +} + +func (r *RecentRequestsJournal) RecordMeshConfigRequest(req *mcp.MeshConfigRequest) { // nolint:interfacer + r.itemsMutex.Lock() + defer r.itemsMutex.Unlock() + + item := RecentRequestInfo{ + Time: time.Now(), + Request: &JournaledRequest{ + VersionInfo: req.VersionInfo, + Collection: req.TypeUrl, + ResponseNonce: req.ResponseNonce, + ErrorDetail: req.ErrorDetail, + SinkNode: req.SinkNode, + }, + } + + r.items[r.next] = item + + r.next++ + if r.next == cap(r.items) { + r.next = 0 + } + if r.size < cap(r.items) { + r.size++ + } +} + +func (r *RecentRequestsJournal) RecordRequestResources(req *mcp.RequestResources) { // nolint:interfacer + item := RecentRequestInfo{ + Time: time.Now(), + Request: &JournaledRequest{ + Collection: req.Collection, + ResponseNonce: req.ResponseNonce, +
ErrorDetail: req.ErrorDetail, + SinkNode: req.SinkNode, + }, + } + + r.itemsMutex.Lock() + defer r.itemsMutex.Unlock() + + r.items[r.next] = item + + r.next++ + if r.next == cap(r.items) { + r.next = 0 + } + if r.size < cap(r.items) { + r.size++ + } +} + +func (r *RecentRequestsJournal) Snapshot() []RecentRequestInfo { + r.itemsMutex.Lock() + defer r.itemsMutex.Unlock() + + var result []RecentRequestInfo + + if r.size < cap(r.items) { + result = make([]RecentRequestInfo, r.next) + copy(result, r.items[0:r.next]) + } else { + result = make([]RecentRequestInfo, len(r.items)) + copy(result, r.items[r.next:]) + copy(result[cap(r.items)-r.next:], r.items[0:r.next]) + } + + return result +} diff --git a/pkg/mcp/sink/journal_test.go b/pkg/mcp/sink/journal_test.go new file mode 100644 index 000000000000..b892503f3e2e --- /dev/null +++ b/pkg/mcp/sink/journal_test.go @@ -0,0 +1,81 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package sink + +import ( + "errors" + "fmt" + "testing" + "time" + + "github.com/gogo/status" + "github.com/google/go-cmp/cmp" + + mcp "istio.io/api/mcp/v1alpha1" +) + +func TestJournal(t *testing.T) { + j := NewRequestJournal() + + wrap := 2 + var want []RecentRequestInfo + for i := 0; i < journalDepth+wrap; i++ { + req := &mcp.RequestResources{ + Collection: "foo", + ResponseNonce: fmt.Sprintf("nonce-%v", i), + } + + j.RecordRequestResources(req) + + want = append(want, RecentRequestInfo{Request: &JournaledRequest{ + Collection: req.Collection, + ResponseNonce: req.ResponseNonce, + ErrorDetail: req.ErrorDetail, + SinkNode: req.SinkNode, + }}) + } + want = want[wrap:] + + ignoreTimeOption := cmp.Comparer(func(x, y time.Time) bool { return true }) + + got := j.Snapshot() + if diff := cmp.Diff(got, want, ignoreTimeOption); diff != "" { + t.Fatalf("wrong Snapshot: \n got %v \nwant %v \ndiff %v", got, want, diff) + } + + errorDetails, _ := status.FromError(errors.New("error")) + req := &mcp.RequestResources{ + Collection: "foo", + ResponseNonce: fmt.Sprintf("nonce-error"), + ErrorDetail: errorDetails.Proto(), + } + j.RecordRequestResources(req) + + want = append(want, RecentRequestInfo{Request: &JournaledRequest{ + Collection: req.Collection, + ResponseNonce: req.ResponseNonce, + ErrorDetail: req.ErrorDetail, + SinkNode: req.SinkNode, + }}) + want = want[1:] + + got = j.Snapshot() + if diff := cmp.Diff(got, want, ignoreTimeOption); diff != "" { + t.Fatalf("wrong Snapshot: \n got %v \nwant %v \ndiff %v", got, want, diff) + } + if got[len(got)-1].Acked() { + t.Fatal("last request should be a NACK") + } +} diff --git a/pkg/mcp/sink/sink.go b/pkg/mcp/sink/sink.go new file mode 100644 index 000000000000..242490f91c70 --- /dev/null +++ b/pkg/mcp/sink/sink.go @@ -0,0 +1,125 @@ +// Copyright 2019 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package sink + +import ( + "sync" + + "github.com/gogo/protobuf/proto" + + mcp "istio.io/api/mcp/v1alpha1" + "istio.io/istio/pkg/mcp/monitoring" +) + +// Object contains a decoded versioned object with metadata received from the server. +type Object struct { + TypeURL string + Metadata *mcp.Metadata + Body proto.Message +} + +// Change is a collection of configuration objects of the same protobuf type. +type Change struct { + Collection string + + // List of resources to add/update. The interpretation of this field depends + // on the value of Incremental. + // + // When Incremental=True, the list only includes new/updated resources. + // + // When Incremental=False, the list includes the full list of resources. + // Any previously received resources not in this list should be deleted. + Objects []*Object + + // List of deleted resources by name. The resource name corresponds to the + // resource's metadata name. + // + // Ignored when Incremental=false. + Removed []string + + // When true, the set of changes represents an incremental resource update. + // `Objects` is a list of added/updated resources and `Removed` is a list of deleted + // resources. + // + // When false, the set of changes represents a full-state update for the specified + // type. Any previous resources not included in this update should be removed. + Incremental bool +} + +// Updater provides configuration changes in batches of the same protobuf message type.
+type Updater interface { + // Apply is invoked when the node receives new configuration updates + // from the server. The implementation should return an error if any of the provided + // configuration resources are invalid or cannot be applied. The node will + // propagate errors back to the server accordingly. + Apply(*Change) error +} + +// InMemoryUpdater is an implementation of Updater that keeps a simple in-memory state. +type InMemoryUpdater struct { + items map[string][]*Object + itemsMutex sync.Mutex +} + +var _ Updater = &InMemoryUpdater{} + +// NewInMemoryUpdater returns a new instance of InMemoryUpdater. +func NewInMemoryUpdater() *InMemoryUpdater { + return &InMemoryUpdater{ + items: make(map[string][]*Object), + } +} + +// Apply the change to the InMemoryUpdater. +func (u *InMemoryUpdater) Apply(c *Change) error { + u.itemsMutex.Lock() + defer u.itemsMutex.Unlock() + u.items[c.Collection] = c.Objects + return nil +} + +// Get current state for the given collection. +func (u *InMemoryUpdater) Get(collection string) []*Object { + u.itemsMutex.Lock() + defer u.itemsMutex.Unlock() + return u.items[collection] +} + +// CollectionOptions configures the per-collection updates. +type CollectionOptions struct { + // Name of the collection, e.g. istio/networking/v1alpha3/VirtualService + Name string +} + +// CollectionOptionsFromSlice returns a slice of collection options from +// a slice of collection names. +func CollectionOptionsFromSlice(names []string) []CollectionOptions { + options := make([]CollectionOptions, 0, len(names)) + for _, name := range names { + options = append(options, CollectionOptions{ + Name: name, + }) + } + return options +} + +// Options contains options for configuring MCP sinks.
+type Options struct { + CollectionOptions []CollectionOptions + Updater Updater + ID string + Metadata map[string]string + Reporter monitoring.Reporter +} diff --git a/pkg/mcp/sink/sink_test.go b/pkg/mcp/sink/sink_test.go new file mode 100644 index 000000000000..bc42f00ed8ac --- /dev/null +++ b/pkg/mcp/sink/sink_test.go @@ -0,0 +1,59 @@ +// Copyright 2019 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package sink + +import ( + "testing" + + "github.com/gogo/protobuf/types" + + mcp "istio.io/api/mcp/v1alpha1" +) + +func TestInMemoryUpdater(t *testing.T) { + u := NewInMemoryUpdater() + + o := u.Get("foo") + if len(o) != 0 { + t.Fatalf("Unexpected items in updater: %v", o) + } + + c := Change{ + Collection: "foo", + Objects: []*Object{ + { + TypeURL: "foo", + Metadata: &mcp.Metadata{ + Name: "bar", + }, + Body: &types.Empty{}, + }, + }, + } + + err := u.Apply(&c) + if err != nil { + t.Fatalf("Unexpected error: %v", err) + } + + o = u.Get("foo") + if len(o) != 1 { + t.Fatalf("expected item not found: %v", o) + } + + if o[0].Metadata.Name != "bar" { + t.Fatalf("expected name not found on object: %v", o) + } +} diff --git a/pkg/mcp/snapshot/inmemory.go b/pkg/mcp/snapshot/inmemory.go index c2a022710b1f..660bd4dfcbec 100644 --- a/pkg/mcp/snapshot/inmemory.go +++ b/pkg/mcp/snapshot/inmemory.go @@ -29,7 +29,7 @@ import ( // InMemory Snapshot implementation type InMemory struct { - envelopes map[string][]*mcp.Envelope + resources 
map[string][]*mcp.Resource versions map[string]string } @@ -43,7 +43,7 @@ type InMemoryBuilder struct { // NewInMemoryBuilder creates and returns a new InMemoryBuilder. func NewInMemoryBuilder() *InMemoryBuilder { snapshot := &InMemory{ - envelopes: make(map[string][]*mcp.Envelope), + resources: make(map[string][]*mcp.Resource), versions: make(map[string]string), } @@ -52,16 +52,17 @@ func NewInMemoryBuilder() *InMemoryBuilder { } } -// Set the values for a given type. If Set is called after a call to Freeze, then this method panics. -func (b *InMemoryBuilder) Set(typeURL string, version string, resources []*mcp.Envelope) { - b.snapshot.envelopes[typeURL] = resources - b.snapshot.versions[typeURL] = version +// Set the values for a given collection. If Set is called after a call to Freeze, then this method panics. +func (b *InMemoryBuilder) Set(collection, version string, resources []*mcp.Resource) { + b.snapshot.resources[collection] = resources + b.snapshot.versions[collection] = version } // SetEntry sets a single entry. Note that this is a slow operation, as update requires scanning // through existing entries. 
-func (b *InMemoryBuilder) SetEntry(typeURL, name, version string, createTime time.Time, m proto.Message) error { - contents, err := proto.Marshal(m) +func (b *InMemoryBuilder) SetEntry(collection, name, version string, createTime time.Time, labels, + annotations map[string]string, m proto.Message) error { + body, err := types.MarshalAny(m) if err != nil { return err } @@ -71,19 +72,18 @@ func (b *InMemoryBuilder) SetEntry(typeURL, name, version string, createTime tim return err } - e := &mcp.Envelope{ + e := &mcp.Resource{ Metadata: &mcp.Metadata{ - Name: name, - CreateTime: createTimeProto, - Version: version, - }, - Resource: &types.Any{ - Value: contents, - TypeUrl: typeURL, + Name: name, + CreateTime: createTimeProto, + Labels: labels, + Annotations: annotations, + Version: version, }, + Body: body, } - entries := b.snapshot.envelopes[typeURL] + entries := b.snapshot.resources[collection] for i, prev := range entries { if prev.Metadata.Name == e.Metadata.Name { @@ -93,14 +93,14 @@ func (b *InMemoryBuilder) SetEntry(typeURL, name, version string, createTime tim } entries = append(entries, e) - b.snapshot.envelopes[typeURL] = entries + b.snapshot.resources[collection] = entries return nil } -// DeleteEntry deletes the entry with the given typeuRL, name -func (b *InMemoryBuilder) DeleteEntry(typeURL string, name string) { +// DeleteEntry deletes the named entry within the given collection. 
+func (b *InMemoryBuilder) DeleteEntry(collection string, name string) { - entries, found := b.snapshot.envelopes[typeURL] + entries, found := b.snapshot.resources[collection] if !found { return } @@ -108,22 +108,22 @@ func (b *InMemoryBuilder) DeleteEntry(typeURL string, name string) { for i, e := range entries { if e.Metadata.Name == name { if len(entries) == 1 { - delete(b.snapshot.envelopes, typeURL) - delete(b.snapshot.versions, typeURL) + delete(b.snapshot.resources, collection) + delete(b.snapshot.versions, collection) return } entries = append(entries[:i], entries[i+1:]...) - b.snapshot.envelopes[typeURL] = entries + b.snapshot.resources[collection] = entries return } } } -// SetVersion sets the version for the given type URL. -func (b *InMemoryBuilder) SetVersion(typeURL string, version string) { - b.snapshot.versions[typeURL] = version +// SetVersion sets the version for the given collection +func (b *InMemoryBuilder) SetVersion(collection string, version string) { + b.snapshot.versions[collection] = version } // Build the snapshot and return. @@ -137,19 +137,19 @@ func (b *InMemoryBuilder) Build() *InMemory { } // Resources is an implementation of Snapshot.Resources -func (s *InMemory) Resources(typeURL string) []*mcp.Envelope { - return s.envelopes[typeURL] +func (s *InMemory) Resources(collection string) []*mcp.Resource { + return s.resources[collection] } // Version is an implementation of Snapshot.Version -func (s *InMemory) Version(typeURL string) string { - return s.versions[typeURL] +func (s *InMemory) Version(collection string) string { + return s.versions[collection] } // Clone this snapshot. 
func (s *InMemory) Clone() *InMemory { c := &InMemory{ - envelopes: make(map[string][]*mcp.Envelope), + resources: make(map[string][]*mcp.Resource), versions: make(map[string]string), } @@ -157,12 +157,12 @@ func (s *InMemory) Clone() *InMemory { c.versions[k] = v } - for k, v := range s.envelopes { - envs := make([]*mcp.Envelope, len(v)) + for k, v := range s.resources { + envs := make([]*mcp.Resource, len(v)) for i, e := range v { - envs[i] = proto.Clone(e).(*mcp.Envelope) + envs[i] = proto.Clone(e).(*mcp.Resource) } - c.envelopes[k] = envs + c.resources[k] = envs } return c @@ -181,25 +181,25 @@ func (s *InMemory) String() string { var b bytes.Buffer var messages []string - for message := range s.envelopes { + for message := range s.resources { messages = append(messages, message) } sort.Strings(messages) for i, n := range messages { - fmt.Fprintf(&b, "[%d] (%s @%s)\n", i, n, s.versions[n]) + _, _ = fmt.Fprintf(&b, "[%d] (%s @%s)\n", i, n, s.versions[n]) - envs := s.envelopes[n] + envs := s.resources[n] // Avoid mutating the original data - entries := make([]*mcp.Envelope, len(envs)) + entries := make([]*mcp.Resource, len(envs)) copy(entries, envs) sort.Slice(entries, func(i, j int) bool { return strings.Compare(entries[i].Metadata.Name, entries[j].Metadata.Name) == -1 }) for j, entry := range entries { - fmt.Fprintf(&b, " [%d] (%s)\n", j, entry.Metadata.Name) + _, _ = fmt.Fprintf(&b, " [%d] (%s)\n", j, entry.Metadata.Name) } } diff --git a/pkg/mcp/snapshot/inmemory_test.go b/pkg/mcp/snapshot/inmemory_test.go index 6267fce5cdfe..cb4fbdde2d2a 100644 --- a/pkg/mcp/snapshot/inmemory_test.go +++ b/pkg/mcp/snapshot/inmemory_test.go @@ -26,8 +26,12 @@ import ( mcp "istio.io/api/mcp/v1alpha1" ) -var fakeCreateTime = time.Date(2018, time.January, 1, 2, 3, 4, 5, time.UTC) -var fakeCreateTimeProto *types.Timestamp +var ( + fakeCreateTime = time.Date(2018, time.January, 1, 2, 3, 4, 5, time.UTC) + fakeLabels = map[string]string{"lk1": "lv1"} + fakeAnnotations = 
map[string]string{"ak1": "av1"} + fakeCreateTimeProto *types.Timestamp +) func init() { var err error @@ -41,8 +45,8 @@ func TestInMemoryBuilder(t *testing.T) { b := NewInMemoryBuilder() sn := b.Build() - if len(sn.envelopes) != 0 { - t.Fatal("Envelopes should have been empty") + if len(sn.resources) != 0 { + t.Fatal("Resources should have been empty") } if len(sn.versions) != 0 { @@ -53,16 +57,16 @@ func TestInMemoryBuilder(t *testing.T) { func TestInMemoryBuilder_Set(t *testing.T) { b := NewInMemoryBuilder() - items := []*mcp.Envelope{{Resource: &types.Any{}, Metadata: &mcp.Metadata{Name: "foo"}}} - b.Set("type", "v1", items) + items := []*mcp.Resource{{Body: &types.Any{}, Metadata: &mcp.Metadata{Name: "foo"}}} + b.Set("collection", "v1", items) sn := b.Build() - if sn.Version("type") != "v1" { + if sn.Version("collection") != "v1" { t.Fatalf("Unexpected version: %v", sn.Version("type")) } - actual := sn.Resources("type") - if !reflect.DeepEqual(items, sn.Resources("type")) { + actual := sn.Resources("collection") + if !reflect.DeepEqual(items, sn.Resources("collection")) { t.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", actual, items) } } @@ -70,21 +74,24 @@ func TestInMemoryBuilder_Set(t *testing.T) { func TestInMemoryBuilder_SetEntry_Add(t *testing.T) { b := NewInMemoryBuilder() - _ = b.SetEntry("type", "foo", "v0", fakeCreateTime, &types.Any{}) + _ = b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) sn := b.Build() - expected := []*mcp.Envelope{ + expected := []*mcp.Resource{ { Metadata: &mcp.Metadata{ - Name: "foo", - Version: "v0", - CreateTime: fakeCreateTimeProto, + Name: "foo", + Version: "v0", + CreateTime: fakeCreateTimeProto, + Labels: fakeLabels, + Annotations: fakeAnnotations, }, - Resource: &types.Any{TypeUrl: "type", Value: []byte{}}, + + Body: &types.Any{TypeUrl: "type.googleapis.com/google.protobuf.Any", Value: []byte{}}, }, } - actual := sn.Resources("type") + actual := sn.Resources("collection") if 
!reflect.DeepEqual(expected, actual) { t.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", actual, expected) } @@ -93,22 +100,24 @@ func TestInMemoryBuilder_SetEntry_Add(t *testing.T) { func TestInMemoryBuilder_SetEntry_Update(t *testing.T) { b := NewInMemoryBuilder() - _ = b.SetEntry("type", "foo", "v0", fakeCreateTime, &types.Any{}) - _ = b.SetEntry("type", "foo", "v0", fakeCreateTime, &types.Any{}) + _ = b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + _ = b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) sn := b.Build() - expected := []*mcp.Envelope{ + expected := []*mcp.Resource{ { Metadata: &mcp.Metadata{ - Name: "foo", - Version: "v0", - CreateTime: fakeCreateTimeProto, + Name: "foo", + Version: "v0", + CreateTime: fakeCreateTimeProto, + Labels: fakeLabels, + Annotations: fakeAnnotations, }, - Resource: &types.Any{TypeUrl: "type", Value: []byte{}}, + Body: &types.Any{TypeUrl: "type.googleapis.com/google.protobuf.Any", Value: []byte{}}, }, } - actual := sn.Resources("type") + actual := sn.Resources("collection") if !reflect.DeepEqual(expected, actual) { t.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", actual, expected) } @@ -117,7 +126,7 @@ func TestInMemoryBuilder_SetEntry_Update(t *testing.T) { func TestInMemoryBuilder_SetEntry_Marshal_Error(t *testing.T) { b := NewInMemoryBuilder() - err := b.SetEntry("type", "foo", "v0", fakeCreateTime, nil) + err := b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, nil) if err == nil { t.Fatal("expected error not found") } @@ -126,21 +135,23 @@ func TestInMemoryBuilder_SetEntry_Marshal_Error(t *testing.T) { func TestInMemoryBuilder_DeleteEntry_EntryNotFound(t *testing.T) { b := NewInMemoryBuilder() - _ = b.SetEntry("type", "foo", "v0", fakeCreateTime, &types.Any{}) - b.DeleteEntry("type", "bar") + _ = b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + 
b.DeleteEntry("collection", "bar") sn := b.Build() - expected := []*mcp.Envelope{ + expected := []*mcp.Resource{ { Metadata: &mcp.Metadata{ - Name: "foo", - Version: "v0", - CreateTime: fakeCreateTimeProto, + Name: "foo", + Version: "v0", + CreateTime: fakeCreateTimeProto, + Labels: fakeLabels, + Annotations: fakeAnnotations, }, - Resource: &types.Any{TypeUrl: "type", Value: []byte{}}, + Body: &types.Any{TypeUrl: "type.googleapis.com/google.protobuf.Any", Value: []byte{}}, }, } - actual := sn.Resources("type") + actual := sn.Resources("collection") if !reflect.DeepEqual(expected, actual) { t.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", actual, expected) } @@ -149,11 +160,11 @@ func TestInMemoryBuilder_DeleteEntry_EntryNotFound(t *testing.T) { func TestInMemoryBuilder_DeleteEntry_TypeNotFound(t *testing.T) { b := NewInMemoryBuilder() - b.DeleteEntry("type", "bar") + b.DeleteEntry("collection", "bar") sn := b.Build() - if len(sn.envelopes) != 0 { - t.Fatal("Envelopes should have been empty") + if len(sn.resources) != 0 { + t.Fatal("Resources should have been empty") } if len(sn.versions) != 0 { @@ -164,12 +175,12 @@ func TestInMemoryBuilder_DeleteEntry_TypeNotFound(t *testing.T) { func TestInMemoryBuilder_DeleteEntry_Single(t *testing.T) { b := NewInMemoryBuilder() - _ = b.SetEntry("type", "foo", "v0", fakeCreateTime, &types.Any{}) - b.DeleteEntry("type", "foo") + _ = b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + b.DeleteEntry("collection", "foo") sn := b.Build() - if len(sn.envelopes) != 0 { - t.Fatal("Envelopes should have been empty") + if len(sn.resources) != 0 { + t.Fatal("Resources should have been empty") } if len(sn.versions) != 0 { @@ -180,22 +191,24 @@ func TestInMemoryBuilder_DeleteEntry_Single(t *testing.T) { func TestInMemoryBuilder_DeleteEntry_Multiple(t *testing.T) { b := NewInMemoryBuilder() - _ = b.SetEntry("type", "foo", "v0", fakeCreateTime, &types.Any{}) - _ = b.SetEntry("type", "bar", "v0", 
fakeCreateTime, &types.Any{}) - b.DeleteEntry("type", "foo") + _ = b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + _ = b.SetEntry("collection", "bar", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + b.DeleteEntry("collection", "foo") sn := b.Build() - expected := []*mcp.Envelope{ + expected := []*mcp.Resource{ { Metadata: &mcp.Metadata{ - Name: "bar", - Version: "v0", - CreateTime: fakeCreateTimeProto, + Name: "bar", + Version: "v0", + CreateTime: fakeCreateTimeProto, + Labels: fakeLabels, + Annotations: fakeAnnotations, }, - Resource: &types.Any{TypeUrl: "type", Value: []byte{}}, + Body: &types.Any{TypeUrl: "type.googleapis.com/google.protobuf.Any", Value: []byte{}}, }, } - actual := sn.Resources("type") + actual := sn.Resources("collection") if !reflect.DeepEqual(expected, actual) { t.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", actual, expected) } @@ -204,11 +217,11 @@ func TestInMemoryBuilder_DeleteEntry_Multiple(t *testing.T) { func TestInMemoryBuilder_SetVersion(t *testing.T) { b := NewInMemoryBuilder() - _ = b.SetEntry("type", "foo", "v0", fakeCreateTime, &types.Any{}) - b.SetVersion("type", "v1") + _ = b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + b.SetVersion("collection", "v1") sn := b.Build() - if sn.Version("type") != "v1" { + if sn.Version("collection") != "v1" { t.Fatalf("Unexpected version: %s", sn.Version("type")) } } @@ -216,33 +229,37 @@ func TestInMemoryBuilder_SetVersion(t *testing.T) { func TestInMemory_Clone(t *testing.T) { b := NewInMemoryBuilder() - _ = b.SetEntry("type", "foo", "v0", fakeCreateTime, &types.Any{}) - _ = b.SetEntry("type", "bar", "v0", fakeCreateTime, &types.Any{}) - b.SetVersion("type", "v1") + _ = b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + _ = b.SetEntry("collection", "bar", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + 
b.SetVersion("collection", "v1") sn := b.Build() sn2 := sn.Clone() - expected := []*mcp.Envelope{ + expected := []*mcp.Resource{ { Metadata: &mcp.Metadata{ - Name: "bar", - Version: "v0", - CreateTime: fakeCreateTimeProto, + Name: "bar", + Version: "v0", + CreateTime: fakeCreateTimeProto, + Labels: fakeLabels, + Annotations: fakeAnnotations, }, - Resource: &types.Any{TypeUrl: "type"}, + Body: &types.Any{TypeUrl: "type.googleapis.com/google.protobuf.Any"}, }, { Metadata: &mcp.Metadata{ - Name: "foo", - Version: "v0", - CreateTime: fakeCreateTimeProto, + Name: "foo", + Version: "v0", + CreateTime: fakeCreateTimeProto, + Labels: fakeLabels, + Annotations: fakeAnnotations, }, - Resource: &types.Any{TypeUrl: "type"}, + Body: &types.Any{TypeUrl: "type.googleapis.com/google.protobuf.Any"}, }, } - actual := sn2.Resources("type") + actual := sn2.Resources("collection") sort.Slice(actual, func(i, j int) bool { return strings.Compare( @@ -254,7 +271,7 @@ func TestInMemory_Clone(t *testing.T) { t.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", actual, expected) } - if sn2.Version("type") != "v1" { + if sn2.Version("collection") != "v1" { t.Fatalf("Unexpected version: %s", sn2.Version("type")) } } @@ -262,35 +279,39 @@ func TestInMemory_Clone(t *testing.T) { func TestInMemory_Builder(t *testing.T) { b := NewInMemoryBuilder() - _ = b.SetEntry("type", "foo", "v0", fakeCreateTime, &types.Any{}) - _ = b.SetEntry("type", "bar", "v0", fakeCreateTime, &types.Any{}) - b.SetVersion("type", "v1") + _ = b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + _ = b.SetEntry("collection", "bar", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + b.SetVersion("collection", "v1") sn := b.Build() b = sn.Builder() sn2 := b.Build() - expected := []*mcp.Envelope{ + expected := []*mcp.Resource{ { Metadata: &mcp.Metadata{ - Name: "bar", - Version: "v0", - CreateTime: fakeCreateTimeProto, + Name: "bar", + Version: "v0", + CreateTime: 
fakeCreateTimeProto, + Labels: fakeLabels, + Annotations: fakeAnnotations, }, - Resource: &types.Any{TypeUrl: "type"}, + Body: &types.Any{TypeUrl: "type.googleapis.com/google.protobuf.Any"}, }, { Metadata: &mcp.Metadata{ - Name: "foo", - Version: "v0", - CreateTime: fakeCreateTimeProto, + Name: "foo", + Version: "v0", + CreateTime: fakeCreateTimeProto, + Labels: fakeLabels, + Annotations: fakeAnnotations, }, - Resource: &types.Any{TypeUrl: "type"}, + Body: &types.Any{TypeUrl: "type.googleapis.com/google.protobuf.Any"}, }, } - actual := sn2.Resources("type") + actual := sn2.Resources("collection") sort.Slice(actual, func(i, j int) bool { return strings.Compare( @@ -302,7 +323,7 @@ func TestInMemory_Builder(t *testing.T) { t.Fatalf("Mismatch:\nGot:\n%v\nWanted:\n%v\n", actual, expected) } - if sn2.Version("type") != "v1" { + if sn2.Version("collection") != "v1" { t.Fatalf("Unexpected version: %s", sn2.Version("type")) } } @@ -310,9 +331,9 @@ func TestInMemory_Builder(t *testing.T) { func TestInMemory_String(t *testing.T) { b := NewInMemoryBuilder() - _ = b.SetEntry("type", "foo", "v0", fakeCreateTime, &types.Any{}) - _ = b.SetEntry("type", "bar", "v0", fakeCreateTime, &types.Any{}) - b.SetVersion("type", "v1") + _ = b.SetEntry("collection", "foo", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + _ = b.SetEntry("collection", "bar", "v0", fakeCreateTime, fakeLabels, fakeAnnotations, &types.Any{}) + b.SetVersion("collection", "v1") sn := b.Build() // Shouldn't crash diff --git a/pkg/mcp/snapshot/snapshot.go b/pkg/mcp/snapshot/snapshot.go index c90fb6ff2716..8ac5f205ee14 100644 --- a/pkg/mcp/snapshot/snapshot.go +++ b/pkg/mcp/snapshot/snapshot.go @@ -20,15 +20,15 @@ import ( mcp "istio.io/api/mcp/v1alpha1" "istio.io/istio/pkg/log" - "istio.io/istio/pkg/mcp/server" + "istio.io/istio/pkg/mcp/source" ) var scope = log.RegisterScope("mcp", "mcp debugging", 0) -// Snapshot provides an immutable view of versioned envelopes. 
+// Snapshot provides an immutable view of versioned resources. type Snapshot interface { - Resources(typ string) []*mcp.Envelope - Version(typ string) string + Resources(collection string) []*mcp.Resource + Version(collection string) string } // Cache is a snapshot-based cache that maintains a single versioned @@ -43,15 +43,15 @@ type Cache struct { groupIndex GroupIndexFn } -// GroupIndexFn returns a stable group index for the given MCP client. -type GroupIndexFn func(client *mcp.Client) string +// GroupIndexFn returns a stable group index for the given MCP node. +type GroupIndexFn func(node *mcp.SinkNode) string // DefaultGroup is the default group when using the DefaultGroupIndex() function. const DefaultGroup = "default" // DefaultGroupIndex provides a default GroupIndexFn function that // is usable for testing and simple deployments. -func DefaultGroupIndex(_ *mcp.Client) string { +func DefaultGroupIndex(_ *mcp.SinkNode) string { return DefaultGroup } @@ -64,17 +64,17 @@ func New(groupIndex GroupIndexFn) *Cache { } } -var _ server.Watcher = &Cache{} +var _ source.Watcher = &Cache{} type responseWatch struct { - request *mcp.MeshConfigRequest // original request - pushResponse server.PushResponseFunc + request *source.Request + pushResponse source.PushResponseFunc } -// StatusInfo records watch status information of a remote client. +// StatusInfo records watch status information of a remote node. type StatusInfo struct { mu sync.RWMutex - client *mcp.Client + node *mcp.SinkNode lastWatchRequestTime time.Time // informational watches map[int64]*responseWatch } @@ -95,8 +95,8 @@ func (si *StatusInfo) LastWatchRequestTime() time.Time { } // Watch returns a watch for an MCP request. 
-func (c *Cache) Watch(request *mcp.MeshConfigRequest, pushResponse server.PushResponseFunc) server.CancelWatchFunc { // nolint: lll - group := c.groupIndex(request.Client) +func (c *Cache) Watch(request *source.Request, pushResponse source.PushResponseFunc) source.CancelWatchFunc { // nolint: lll + group := c.groupIndex(request.SinkNode) c.mu.Lock() defer c.mu.Unlock() @@ -104,7 +104,7 @@ func (c *Cache) Watch(request *mcp.MeshConfigRequest, pushResponse server.PushRe info, ok := c.status[group] if !ok { info = &StatusInfo{ - client: request.Client, + node: request.SinkNode, watches: make(map[int64]*responseWatch), } c.status[group] = info @@ -115,19 +115,22 @@ func (c *Cache) Watch(request *mcp.MeshConfigRequest, pushResponse server.PushRe info.lastWatchRequestTime = time.Now() info.mu.Unlock() + collection := request.Collection + // return an immediate response if a snapshot is available and the // requested version doesn't match. if snapshot, ok := c.snapshots[group]; ok { - version := snapshot.Version(request.TypeUrl) + + version := snapshot.Version(request.Collection) scope.Debugf("Found snapshot for group: %q for %v @ version: %q", - group, request.TypeUrl, version) + group, request.Collection, version) if version != request.VersionInfo { scope.Debugf("Responding to group %q snapshot:\n%v\n", group, snapshot) - response := &server.WatchResponse{ - TypeURL: request.TypeUrl, - Version: version, - Envelopes: snapshot.Resources(request.TypeUrl), + response := &source.WatchResponse{ + Collection: request.Collection, + Version: version, + Resources: snapshot.Resources(request.Collection), } pushResponse(response) return nil @@ -139,7 +142,7 @@ func (c *Cache) Watch(request *mcp.MeshConfigRequest, pushResponse server.PushRe watchID := c.watchCount scope.Infof("Watch(): created watch %d for %s from group %q, version %q", - watchID, request.TypeUrl, group, request.VersionInfo) + watchID, collection, group, request.VersionInfo) info.mu.Lock() info.watches[watchID] = 
&responseWatch{request: request, pushResponse: pushResponse} @@ -171,15 +174,15 @@ func (c *Cache) SetSnapshot(group string, snapshot Snapshot) { defer info.mu.Unlock() for id, watch := range info.watches { - version := snapshot.Version(watch.request.TypeUrl) + version := snapshot.Version(watch.request.Collection) if version != watch.request.VersionInfo { scope.Infof("SetSnapshot(): respond to watch %d for %v @ version %q", - id, watch.request.TypeUrl, version) + id, watch.request.Collection, version) - response := &server.WatchResponse{ - TypeURL: watch.request.TypeUrl, - Version: version, - Envelopes: snapshot.Resources(watch.request.TypeUrl), + response := &source.WatchResponse{ + Collection: watch.request.Collection, + Version: version, + Resources: snapshot.Resources(watch.request.Collection), } watch.pushResponse(response) @@ -187,7 +190,7 @@ func (c *Cache) SetSnapshot(group string, snapshot Snapshot) { delete(info.watches, id) scope.Debugf("SetSnapshot(): watch %d for %v @ version %q complete", - id, watch.request.TypeUrl, version) + id, watch.request.Collection, version) } } } diff --git a/pkg/mcp/snapshot/snapshot_test.go b/pkg/mcp/snapshot/snapshot_test.go index 96a1dd3cea45..434a20c0ff9a 100644 --- a/pkg/mcp/snapshot/snapshot_test.go +++ b/pkg/mcp/snapshot/snapshot_test.go @@ -23,114 +23,49 @@ import ( "testing" "time" - "github.com/gogo/protobuf/proto" - "github.com/gogo/protobuf/types" - mcp "istio.io/api/mcp/v1alpha1" - "istio.io/istio/pkg/mcp/server" + "istio.io/istio/pkg/mcp/internal/test" + "istio.io/istio/pkg/mcp/source" ) -// fake protobuf types that implements required resource interface - -type fakeTypeBase struct{ Info string } - -func (f fakeTypeBase) Reset() {} -func (f fakeTypeBase) String() string { return f.Info } -func (f fakeTypeBase) ProtoMessage() {} -func (f fakeTypeBase) Marshal() ([]byte, error) { return []byte(f.Info), nil } - -type fakeType0 struct{ fakeTypeBase } -type fakeType1 struct{ fakeTypeBase } -type fakeType2 struct{ 
fakeTypeBase } - -const ( - typePrefix = "type.googleapis.com/" - fakeType0Prefix = "istio.io.galley.pkg.mcp.server.fakeType0" - fakeType1Prefix = "istio.io.galley.pkg.mcp.server.fakeType1" - fakeType2Prefix = "istio.io.galley.pkg.mcp.server.fakeType2" - - fakeType0TypeURL = typePrefix + fakeType0Prefix - fakeType1TypeURL = typePrefix + fakeType1Prefix - fakeType2TypeURL = typePrefix + fakeType2Prefix -) - -func mustMarshalAny(pb proto.Message) *types.Any { - a, err := types.MarshalAny(pb) - if err != nil { - panic(err.Error()) - } - return a -} - -func init() { - proto.RegisterType((*fakeType0)(nil), fakeType0Prefix) - proto.RegisterType((*fakeType1)(nil), fakeType1Prefix) - proto.RegisterType((*fakeType2)(nil), fakeType2Prefix) - - fakeEnvelope0 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f0"}, - Resource: mustMarshalAny(&fakeType0{fakeTypeBase{"f0"}}), - } - fakeEnvelope1 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f1"}, - Resource: mustMarshalAny(&fakeType1{fakeTypeBase{"f1"}}), - } - fakeEnvelope2 = &mcp.Envelope{ - Metadata: &mcp.Metadata{Name: "f2"}, - Resource: mustMarshalAny(&fakeType2{fakeTypeBase{"f2"}}), - } -} - type fakeSnapshot struct { // read-only fields - no locking required - envelopes map[string][]*mcp.Envelope + resources map[string][]*mcp.Resource versions map[string]string } -func (fs *fakeSnapshot) Resources(typ string) []*mcp.Envelope { return fs.envelopes[typ] } -func (fs *fakeSnapshot) Version(typ string) string { return fs.versions[typ] } +func (fs *fakeSnapshot) Resources(collection string) []*mcp.Resource { return fs.resources[collection] } +func (fs *fakeSnapshot) Version(collection string) string { return fs.versions[collection] } func (fs *fakeSnapshot) copy() *fakeSnapshot { fsCopy := &fakeSnapshot{ - envelopes: make(map[string][]*mcp.Envelope), + resources: make(map[string][]*mcp.Resource), versions: make(map[string]string), } - for typeURL, envelopes := range fs.envelopes { - fsCopy.envelopes[typeURL] = 
append(fsCopy.envelopes[typeURL], envelopes...) - fsCopy.versions[typeURL] = fs.versions[typeURL] + for collection, resources := range fs.resources { + fsCopy.resources[collection] = append(fsCopy.resources[collection], resources...) + fsCopy.versions[collection] = fs.versions[collection] } return fsCopy } func makeSnapshot(version string) *fakeSnapshot { return &fakeSnapshot{ - envelopes: map[string][]*mcp.Envelope{ - fakeType0TypeURL: {fakeEnvelope0}, - fakeType1TypeURL: {fakeEnvelope1}, - fakeType2TypeURL: {fakeEnvelope2}, + resources: map[string][]*mcp.Resource{ + test.FakeType0Collection: {test.Type0A[0].Resource}, + test.FakeType1Collection: {test.Type1A[0].Resource}, + test.FakeType2Collection: {test.Type2A[0].Resource}, }, versions: map[string]string{ - fakeType0TypeURL: version, - fakeType1TypeURL: version, - fakeType2TypeURL: version, + test.FakeType0Collection: version, + test.FakeType1Collection: version, + test.FakeType2Collection: version, }, } } var _ Snapshot = &fakeSnapshot{} -var ( - fakeEnvelope0 *mcp.Envelope - fakeEnvelope1 *mcp.Envelope - fakeEnvelope2 *mcp.Envelope - - WatchResponseTypes = []string{ - fakeType0TypeURL, - fakeType1TypeURL, - fakeType2TypeURL, - } -) - // TODO - refactor tests to not rely on sleeps var ( asyncResponseTimeout = 200 * time.Millisecond @@ -142,16 +77,16 @@ func nextStrVersion(version *int64) string { } -func createTestWatch(c server.Watcher, typeURL, version string, responseC chan *server.WatchResponse, wantResponse, wantCancel bool) (*server.WatchResponse, server.CancelWatchFunc, error) { // nolint: lll - req := &mcp.MeshConfigRequest{ - TypeUrl: typeURL, +func createTestWatch(c source.Watcher, typeURL, version string, responseC chan *source.WatchResponse, wantResponse, wantCancel bool) (*source.WatchResponse, source.CancelWatchFunc, error) { // nolint: lll + req := &source.Request{ + Collection: typeURL, VersionInfo: version, - Client: &mcp.Client{ + SinkNode: &mcp.SinkNode{ Id: DefaultGroup, }, } - cancel := 
c.Watch(req, func(response *server.WatchResponse) { + cancel := c.Watch(req, func(response *source.WatchResponse) { responseC <- response }) @@ -185,7 +120,7 @@ func createTestWatch(c server.Watcher, typeURL, version string, responseC chan * return nil, cancel, nil } -func getAsyncResponse(responseC chan *server.WatchResponse) (*server.WatchResponse, bool) { +func getAsyncResponse(responseC chan *source.WatchResponse) (*source.WatchResponse, bool) { select { case got, more := <-responseC: if !more { @@ -206,18 +141,19 @@ func TestCreateWatch(t *testing.T) { c.SetSnapshot(DefaultGroup, snapshot) // verify immediate and async responses are handled independently across types. - for _, typeURL := range WatchResponseTypes { - t.Run(typeURL, func(t *testing.T) { - typeVersion := initVersion - responseC := make(chan *server.WatchResponse, 1) + + for _, collection := range test.SupportedCollections { + t.Run(collection, func(t *testing.T) { + collectionVersion := initVersion + responseC := make(chan *source.WatchResponse, 1) // verify immediate response - if _, _, err := createTestWatch(c, typeURL, "", responseC, true, false); err != nil { + if _, _, err := createTestWatch(c, collection, "", responseC, true, false); err != nil { t.Fatalf("CreateWatch() failed: %v", err) } // verify open watch, i.e. 
no immediate or async response - if _, _, err := createTestWatch(c, typeURL, typeVersion, responseC, false, true); err != nil { + if _, _, err := createTestWatch(c, collection, collectionVersion, responseC, false, true); err != nil { t.Fatalf("CreateWatch() failed: %v", err) } if gotResponse, _ := getAsyncResponse(responseC); gotResponse != nil { @@ -226,15 +162,15 @@ func TestCreateWatch(t *testing.T) { // verify async response snapshot = snapshot.copy() - typeVersion = nextStrVersion(&versionInt) - snapshot.versions[typeURL] = typeVersion + collectionVersion = nextStrVersion(&versionInt) + snapshot.versions[collection] = collectionVersion c.SetSnapshot(DefaultGroup, snapshot) if gotResponse, _ := getAsyncResponse(responseC); gotResponse != nil { - wantResponse := &server.WatchResponse{ - TypeURL: typeURL, - Version: typeVersion, - Envelopes: snapshot.Resources(typeURL), + wantResponse := &source.WatchResponse{ + Collection: collection, + Version: collectionVersion, + Resources: snapshot.Resources(collection), } if !reflect.DeepEqual(gotResponse, wantResponse) { t.Fatalf("received bad WatchResponse: got %v wantResponse %v", gotResponse, wantResponse) @@ -244,7 +180,7 @@ func TestCreateWatch(t *testing.T) { } // verify lack of immediate response after async response. 
- if _, _, err := createTestWatch(c, typeURL, typeVersion, responseC, false, true); err != nil { + if _, _, err := createTestWatch(c, collection, collectionVersion, responseC, false, true); err != nil { t.Fatalf("CreateWatch() failed after receiving prior response: %v", err) } @@ -263,18 +199,18 @@ func TestWatchCancel(t *testing.T) { c := New(DefaultGroupIndex) c.SetSnapshot(DefaultGroup, snapshot) - for _, typeURL := range WatchResponseTypes { - t.Run(typeURL, func(t *testing.T) { - typeVersion := initVersion - responseC := make(chan *server.WatchResponse, 1) + for _, collection := range test.SupportedCollections { + t.Run(collection, func(t *testing.T) { + collectionVersion := initVersion + responseC := make(chan *source.WatchResponse, 1) // verify immediate response - if _, _, err := createTestWatch(c, typeURL, "", responseC, true, false); err != nil { + if _, _, err := createTestWatch(c, collection, "", responseC, true, false); err != nil { t.Fatalf("CreateWatch failed: immediate response not received: %v", err) } // verify watch can be canceled - _, cancel, err := createTestWatch(c, typeURL, typeVersion, responseC, false, true) + _, cancel, err := createTestWatch(c, collection, collectionVersion, responseC, false, true) if err != nil { t.Fatalf("CreateWatch failed: %v", err) } @@ -282,8 +218,8 @@ func TestWatchCancel(t *testing.T) { // verify no response after watch is canceled snapshot = snapshot.copy() - typeVersion = nextStrVersion(&versionInt) - snapshot.versions[typeURL] = typeVersion + collectionVersion = nextStrVersion(&versionInt) + snapshot.versions[collection] = collectionVersion c.SetSnapshot(DefaultGroup, snapshot) if gotResponse, _ := getAsyncResponse(responseC); gotResponse != nil { @@ -301,27 +237,27 @@ func TestClearSnapshot(t *testing.T) { c := New(DefaultGroupIndex) c.SetSnapshot(DefaultGroup, snapshot) - for _, typeURL := range WatchResponseTypes { - t.Run(typeURL, func(t *testing.T) { - responseC := make(chan *server.WatchResponse, 1) + 
for _, collection := range test.SupportedCollections { + t.Run(collection, func(t *testing.T) { + responseC := make(chan *source.WatchResponse, 1) // verify no immediate response if snapshot is cleared. c.ClearSnapshot(DefaultGroup) - if _, _, err := createTestWatch(c, typeURL, "", responseC, false, true); err != nil { + if _, _, err := createTestWatch(c, collection, "", responseC, false, true); err != nil { t.Fatalf("CreateWatch() failed: %v", err) } // verify async response after new snapshot is added snapshot = snapshot.copy() typeVersion := nextStrVersion(&versionInt) - snapshot.versions[typeURL] = typeVersion + snapshot.versions[collection] = typeVersion c.SetSnapshot(DefaultGroup, snapshot) if gotResponse, _ := getAsyncResponse(responseC); gotResponse != nil { - wantResponse := &server.WatchResponse{ - TypeURL: typeURL, - Version: typeVersion, - Envelopes: snapshot.Resources(typeURL), + wantResponse := &source.WatchResponse{ + Collection: collection, + Version: typeVersion, + Resources: snapshot.Resources(collection), } if !reflect.DeepEqual(gotResponse, wantResponse) { t.Fatalf("received bad WatchResponse: got %v wantResponse %v", gotResponse, wantResponse) @@ -340,11 +276,11 @@ func TestClearStatus(t *testing.T) { c := New(DefaultGroupIndex) - for _, typeURL := range WatchResponseTypes { - t.Run(typeURL, func(t *testing.T) { - responseC := make(chan *server.WatchResponse, 1) + for _, collection := range test.SupportedCollections { + t.Run(collection, func(t *testing.T) { + responseC := make(chan *source.WatchResponse, 1) - if _, _, err := createTestWatch(c, typeURL, "", responseC, false, true); err != nil { + if _, _, err := createTestWatch(c, collection, "", responseC, false, true); err != nil { t.Fatalf("CreateWatch() failed: %v", err) } @@ -358,7 +294,7 @@ func TestClearStatus(t *testing.T) { // that any subsequent snapshot is not delivered. 
snapshot = snapshot.copy() typeVersion := nextStrVersion(&versionInt) - snapshot.versions[typeURL] = typeVersion + snapshot.versions[collection] = typeVersion c.SetSnapshot(DefaultGroup, snapshot) if gotResponse, timeout := getAsyncResponse(responseC); gotResponse != nil { diff --git a/pkg/mcp/source/source.go b/pkg/mcp/source/source.go new file mode 100644 index 000000000000..6a0eb747d8f5 --- /dev/null +++ b/pkg/mcp/source/source.go @@ -0,0 +1,97 @@ +// Copyright 2019 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package source + +import ( + "time" + + mcp "istio.io/api/mcp/v1alpha1" + "istio.io/istio/pkg/mcp/monitoring" +) + +const DefaultRetryPushDelay = 10 * time.Millisecond + +// Request is a temporary abstraction for the MCP client request which can +// be used with the mcp.MeshConfigRequest and mcp.RequestResources. It can +// be removed once we fully cut over to mcp.RequestResources. +type Request struct { + Collection string + VersionInfo string + SinkNode *mcp.SinkNode +} + +// WatchResponse contains a versioned collection of pre-serialized resources. +type WatchResponse struct { + Collection string + + // Version of the resources in the response for the given + // type. The client responds with this version in subsequent + // requests as an acknowledgment. + Version string + + // Resources to be included in the response. 
+ Resources []*mcp.Resource +} + +type ( + // CancelWatchFunc allows the consumer to cancel a previous watch, + // terminating the watch for the request. + CancelWatchFunc func() + + // PushResponseFunc allows the consumer to push a response for the + // corresponding watch. + PushResponseFunc func(*WatchResponse) +) + +// Watcher requests watches for configuration resources by node, last +// applied version, and type. The watch should send the responses when +// they are ready. The watch can be canceled by the consumer. +type Watcher interface { + // Watch returns a new open watch for a non-empty request. + // + // Cancel is an optional function to release resources in the + // producer. It can be called idempotently to cancel and release resources. + Watch(*Request, PushResponseFunc) CancelWatchFunc +} + +// CollectionOptions configures the per-collection updates. +type CollectionOptions struct { + // Name of the collection, e.g. istio/networking/v1alpha3/VirtualService + Name string +} + +// CollectionOptionsFromSlice returns a slice of collection options from +// a slice of collection names. +func CollectionOptionsFromSlice(names []string) []CollectionOptions { + options := make([]CollectionOptions, 0, len(names)) + for _, name := range names { + options = append(options, CollectionOptions{ + Name: name, + }) + } + return options +} + +// Options contains options for configuring MCP sources. +type Options struct { + Watcher Watcher + CollectionsOptions []CollectionOptions + Reporter monitoring.Reporter + + // Controls the delay for retrying a configuration push if the previous + // attempt was not possible, e.g. the lower-level serving layer was busy. This + // should typically be set fairly small (order of milliseconds). 
+ RetryPushDelay time.Duration +} diff --git a/pkg/mcp/testing/monitoring/reporter.go b/pkg/mcp/testing/monitoring/reporter.go index ac3cfd30230b..8c1b61ab6226 100644 --- a/pkg/mcp/testing/monitoring/reporter.go +++ b/pkg/mcp/testing/monitoring/reporter.go @@ -12,7 +12,7 @@ // See the License for the specific language governing permissions and // limitations under the License. -package mcptestmon +package monitoring import ( "sync" @@ -30,28 +30,35 @@ type requestKey struct { connectionID int64 } -// InMemoryServerStatsContext enables MCP server metric collection which is +type nackKey struct { + typeURL string + connectionID int64 + code codes.Code +} + +// InMemoryStatsContext enables MCP server metric collection which is // stored in memory for testing purposes. -type InMemoryServerStatsContext struct { - mutex sync.Mutex - ClientsTotal int64 - RequestSizesBytes map[requestKey][]int64 - RequestAcksTotal map[requestKey]int64 - RequestNacksTotal map[requestKey]int64 - SendFailuresTotal map[errorCodeKey]int64 - RecvFailuresTotal map[errorCodeKey]int64 +type InMemoryStatsContext struct { + mutex sync.Mutex + StreamTotal int64 + RequestSizesBytes map[requestKey][]int64 + RequestAcksTotal map[requestKey]int64 + RequestNacksTotal map[nackKey]int64 + SendFailuresTotal map[errorCodeKey]int64 + RecvFailuresTotal map[errorCodeKey]int64 + StreamCreateSuccessTotal int64 } -// SetClientsTotal updates the current client count to the given argument. -func (s *InMemoryServerStatsContext) SetClientsTotal(clients int64) { +// SetStreamCount updates the current stream count to the given argument. +func (s *InMemoryStatsContext) SetStreamCount(clients int64) { s.mutex.Lock() - s.ClientsTotal = clients + s.StreamTotal = clients s.mutex.Unlock() } // RecordSendError records an error during a network send with its error // string and code. 
-func (s *InMemoryServerStatsContext) RecordSendError(err error, code codes.Code) { +func (s *InMemoryStatsContext) RecordSendError(err error, code codes.Code) { s.mutex.Lock() s.SendFailuresTotal[errorCodeKey{err.Error(), code}]++ s.mutex.Unlock() @@ -59,14 +66,14 @@ func (s *InMemoryServerStatsContext) RecordSendError(err error, code codes.Code) // RecordRecvError records an error during a network recv with its error // string and code. -func (s *InMemoryServerStatsContext) RecordRecvError(err error, code codes.Code) { +func (s *InMemoryStatsContext) RecordRecvError(err error, code codes.Code) { s.mutex.Lock() s.RecvFailuresTotal[errorCodeKey{err.Error(), code}]++ s.mutex.Unlock() } // RecordRequestSize records the size of a request from a connection for a specific type URL. -func (s *InMemoryServerStatsContext) RecordRequestSize(typeURL string, connectionID int64, size int) { +func (s *InMemoryStatsContext) RecordRequestSize(typeURL string, connectionID int64, size int) { key := requestKey{typeURL, connectionID} s.mutex.Lock() s.RequestSizesBytes[key] = append(s.RequestSizesBytes[key], int64(size)) @@ -74,94 +81,37 @@ func (s *InMemoryServerStatsContext) RecordRequestSize(typeURL string, connectio } // RecordRequestAck records an ACK message for a type URL on a connection. -func (s *InMemoryServerStatsContext) RecordRequestAck(typeURL string, connectionID int64) { +func (s *InMemoryStatsContext) RecordRequestAck(typeURL string, connectionID int64) { s.mutex.Lock() s.RequestAcksTotal[requestKey{typeURL, connectionID}]++ s.mutex.Unlock() } // RecordRequestNack records a NACK message for a type URL on a connection. -func (s *InMemoryServerStatsContext) RecordRequestNack(typeURL string, connectionID int64) { +func (s *InMemoryStatsContext) RecordRequestNack(typeURL string, connectionID int64, code codes.Code) { s.mutex.Lock() - s.RequestNacksTotal[requestKey{typeURL, connectionID}]++ - s.mutex.Unlock() -} - -// Close implements io.Closer. 
-func (s *InMemoryServerStatsContext) Close() error { - return nil -} - -// NewInMemoryServerStatsContext creates a new context for tracking metrics -// in memory. -func NewInMemoryServerStatsContext() *InMemoryServerStatsContext { - return &InMemoryServerStatsContext{ - RequestSizesBytes: make(map[requestKey][]int64), - RequestAcksTotal: make(map[requestKey]int64), - RequestNacksTotal: make(map[requestKey]int64), - SendFailuresTotal: make(map[errorCodeKey]int64), - RecvFailuresTotal: make(map[errorCodeKey]int64), - } -} - -type nackKey struct { - error string - typeURL string -} - -// InMemoryClientStatsContext enables MCP client metric collection which is -// stored in memory for testing purposes. -type InMemoryClientStatsContext struct { - mutex sync.Mutex - RequestAcksTotal map[string]int64 - RequestNacksTotal map[nackKey]int64 - SendFailuresTotal map[errorCodeKey]int64 - RecvFailuresTotal map[errorCodeKey]int64 - StreamCreateSuccessTotal int64 -} - -// RecordSendError records an error during a network send with its error -// string and code. -func (s *InMemoryClientStatsContext) RecordSendError(err error, code codes.Code) { - s.mutex.Lock() - s.RecvFailuresTotal[errorCodeKey{err.Error(), code}]++ - s.mutex.Unlock() -} - -// RecordRecvError records an error during a network recv with its error -// string and code. -func (s *InMemoryClientStatsContext) RecordRecvError(err error, code codes.Code) { - s.mutex.Lock() - s.RecvFailuresTotal[errorCodeKey{err.Error(), code}]++ - s.mutex.Unlock() -} - -// RecordRequestAck records an ACK message for a type URL on a connection. -func (s *InMemoryClientStatsContext) RecordRequestAck(typeURL string) { - s.mutex.Lock() - s.RequestAcksTotal[typeURL]++ - s.mutex.Unlock() -} - -// RecordRequestNack records a NACK message for a type URL on a connection. 
-func (s *InMemoryClientStatsContext) RecordRequestNack(typeURL string, err error) { - s.mutex.Lock() - s.RequestNacksTotal[nackKey{err.Error(), typeURL}]++ + s.RequestNacksTotal[nackKey{typeURL, connectionID, code}]++ s.mutex.Unlock() } // RecordStreamCreateSuccess records a successful stream connection. -func (s *InMemoryClientStatsContext) RecordStreamCreateSuccess() { +func (s *InMemoryStatsContext) RecordStreamCreateSuccess() { s.mutex.Lock() s.StreamCreateSuccessTotal++ s.mutex.Unlock() } -// NewInMemoryClientStatsContext creates a new context for tracking metrics +// Close implements io.Closer. +func (s *InMemoryStatsContext) Close() error { + return nil +} + +// NewInMemoryStatsContext creates a new context for tracking metrics // in memory. -func NewInMemoryClientStatsContext() *InMemoryClientStatsContext { - return &InMemoryClientStatsContext{ - RequestAcksTotal: make(map[string]int64), +func NewInMemoryStatsContext() *InMemoryStatsContext { + return &InMemoryStatsContext{ + RequestSizesBytes: make(map[requestKey][]int64), + RequestAcksTotal: make(map[requestKey]int64), RequestNacksTotal: make(map[nackKey]int64), SendFailuresTotal: make(map[errorCodeKey]int64), RecvFailuresTotal: make(map[errorCodeKey]int64), diff --git a/pkg/mcp/testing/server.go b/pkg/mcp/testing/server.go index 61ca8c2bf0e0..1e376f9afd0f 100644 --- a/pkg/mcp/testing/server.go +++ b/pkg/mcp/testing/server.go @@ -25,7 +25,8 @@ import ( mcp "istio.io/api/mcp/v1alpha1" "istio.io/istio/pkg/mcp/server" "istio.io/istio/pkg/mcp/snapshot" - mcptestmon "istio.io/istio/pkg/mcp/testing/monitoring" + "istio.io/istio/pkg/mcp/source" + "istio.io/istio/pkg/mcp/testing/monitoring" ) // Server is a simple MCP server, used for testing purposes. @@ -33,8 +34,8 @@ type Server struct { // The internal snapshot.Cache that the server is using. Cache *snapshot.Cache - // TypeURLs that were originally passed in. - TypeURLs []string + // Collections that were originally passed in. 
+ Collections []source.CollectionOptions // Port that the service is listening on. Port int @@ -51,9 +52,15 @@ var _ io.Closer = &Server{} // NewServer creates and starts a new MCP Server. Returns a new Server instance upon success. // Specifying port as 0 will cause the server to bind to an arbitrary port. This port can be queried // from the Port field of the returned server struct. -func NewServer(port int, typeUrls []string) (*Server, error) { +func NewServer(port int, collections []source.CollectionOptions) (*Server, error) { cache := snapshot.New(snapshot.DefaultGroupIndex) - s := server.New(cache, typeUrls, server.NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + + options := &source.Options{ + Watcher: cache, + CollectionsOptions: collections, + Reporter: monitoring.NewInMemoryStatsContext(), + } + s := server.New(options, server.NewAllowAllChecker()) addr := fmt.Sprintf("127.0.0.1:%d", port) l, err := net.Listen("tcp", addr) @@ -75,12 +82,12 @@ func NewServer(port int, typeUrls []string) (*Server, error) { go func() { _ = gs.Serve(l) }() return &Server{ - Cache: cache, - TypeURLs: typeUrls, - Port: p, - URL: u, - gs: gs, - l: l, + Cache: cache, + Collections: collections, + Port: p, + URL: u, + gs: gs, + l: l, }, nil } @@ -93,7 +100,7 @@ func (t *Server) Close() (err error) { t.l = nil // gRPC stack will close this t.Cache = nil - t.TypeURLs = nil + t.Collections = nil t.Port = 0 return diff --git a/pkg/spiffe/spiffe.go b/pkg/spiffe/spiffe.go new file mode 100644 index 000000000000..429012b09ccc --- /dev/null +++ b/pkg/spiffe/spiffe.go @@ -0,0 +1,61 @@ +package spiffe + +import ( + "fmt" + "strings" + + "istio.io/istio/pkg/log" +) + +const ( + Scheme = "spiffe" + + // The default SPIFFE URL value for trust domain + defaultTrustDomain = "cluster.local" +) + +var trustDomain = defaultTrustDomain + +func SetTrustDomain(value string) { + trustDomain = value +} + +func GetTrustDomain() string { + return trustDomain +} + +func 
DetermineTrustDomain(commandLineTrustDomain string, domain string, isKubernetes bool) string { + + if len(commandLineTrustDomain) != 0 { + return commandLineTrustDomain + } + if len(domain) != 0 { + return domain + } + if isKubernetes { + return defaultTrustDomain + } + return domain +} + +// GenSpiffeURI returns the formatted URI (SPIFFE format for now) for the certificate. +func GenSpiffeURI(ns, serviceAccount string) (string, error) { + var err error + if ns == "" || serviceAccount == "" { + err = fmt.Errorf( + "namespace or service account can't be empty ns=%v serviceAccount=%v", ns, serviceAccount) + } + + // replace special characters in the SPIFFE trust domain + trustDomain = strings.Replace(trustDomain, "@", ".", -1) + return fmt.Sprintf(Scheme+"://%s/ns/%s/sa/%s", trustDomain, ns, serviceAccount), err +} + +// MustGenSpiffeURI returns the formatted URI (SPIFFE format for now) for the certificate and logs if there was an error. +func MustGenSpiffeURI(ns, serviceAccount string) string { + uri, err := GenSpiffeURI(ns, serviceAccount) + if err != nil { + log.Error(err.Error()) + } + return uri +} diff --git a/pkg/spiffe/spiffe_test.go b/pkg/spiffe/spiffe_test.go new file mode 100644 index 000000000000..5232f397292d --- /dev/null +++ b/pkg/spiffe/spiffe_test.go @@ -0,0 +1,83 @@ +package spiffe + +import ( + "strings" + "testing" +) + +func TestGenSpiffeURI(t *testing.T) { + oldTrustDomain := GetTrustDomain() + defer SetTrustDomain(oldTrustDomain) + + testCases := []struct { + namespace string + trustDomain string + serviceAccount string + expectedError string + expectedURI string + }{ + { + serviceAccount: "sa", + expectedError: "namespace or service account can't be empty", + }, + { + namespace: "ns", + expectedError: "namespace or service account can't be empty", + }, + { + namespace: "namespace-foo", + serviceAccount: "service-bar", + expectedURI: "spiffe://cluster.local/ns/namespace-foo/sa/service-bar", + }, + { + namespace: "foo", + serviceAccount: "bar", + expectedURI: 
"spiffe://cluster.local/ns/foo/sa/bar", + }, + { + namespace: "foo", + serviceAccount: "bar", + trustDomain: "kube-federating-id@testproj.iam.gserviceaccount.com", + expectedURI: "spiffe://kube-federating-id.testproj.iam.gserviceaccount.com/ns/foo/sa/bar", + }, + } + for id, tc := range testCases { + if tc.trustDomain == "" { + SetTrustDomain(defaultTrustDomain) + } else { + SetTrustDomain(tc.trustDomain) + } + + got, err := GenSpiffeURI(tc.namespace, tc.serviceAccount) + if tc.expectedError == "" && err != nil { + t.Errorf("test case [%v] failed, error %v", id, tc) + } + if tc.expectedError != "" { + if err == nil { + t.Errorf("want error %v, got nil", tc.expectedError) + } else if !strings.Contains(err.Error(), tc.expectedError) { + t.Errorf("want error contains %v, got error %v", tc.expectedError, err) + } + continue + } + if got != tc.expectedURI { + t.Errorf("unexpected subject name, want %v, got %v", tc.expectedURI, got) + } + + } +} + +func TestGetSetTrustDomain(t *testing.T) { + oldTrustDomain := GetTrustDomain() + defer SetTrustDomain(oldTrustDomain) + SetTrustDomain("test.local") + if GetTrustDomain() != "test.local" { + t.Errorf("Set/GetTrustDomain not working") + } +} + +func TestMustGenSpiffeURI(t *testing.T) { + if nonsense := MustGenSpiffeURI("", ""); nonsense != "spiffe://cluster.local/ns//sa/" { + t.Errorf("Unexpected spiffe URI for empty namespace and service account: %s", nonsense) + } +} diff --git a/pkg/test/application/echo/batch.go b/pkg/test/application/echo/batch.go index 76f02ad2de8a..a25581a15979 100644 --- a/pkg/test/application/echo/batch.go +++ b/pkg/test/application/echo/batch.go @@ -19,6 +19,7 @@ import ( "crypto/tls", "fmt" "log" + "net" "net/http" "strings" "time" @@ -39,6 +40,7 @@ type BatchOptions struct { QPS int Timeout time.Duration URL string + UDS string Header http.Header Message string CAFile string @@ -120,6 +122,18 @@ func NewBatch(ops BatchOptions) (*Batch, error) { } func newProtocol(ops BatchOptions) (protocol, 
error) { + var httpDialContext func(ctx context.Context, network, addr string) (net.Conn, error) + var wsDialContext func(network, addr string) (net.Conn, error) + if len(ops.UDS) > 0 { + httpDialContext = func(_ context.Context, _, _ string) (net.Conn, error) { + return net.Dial("unix", ops.UDS) + } + + wsDialContext = func(_, _ string) (net.Conn, error) { + return net.Dial("unix", ops.UDS) + } + } + if strings.HasPrefix(ops.URL, "http://") || strings.HasPrefix(ops.URL, "https://") { /* #nosec */ client := &http.Client{ @@ -127,6 +141,7 @@ func newProtocol(ops BatchOptions) (protocol, error) { TLSClientConfig: &tls.Config{ InsecureSkipVerify: true, }, + DialContext: httpDialContext, }, Timeout: ops.Timeout, } @@ -178,6 +193,7 @@ func newProtocol(ops BatchOptions) (protocol, error) { TLSClientConfig: &tls.Config{ InsecureSkipVerify: true, }, + NetDial: wsDialContext, HandshakeTimeout: ops.Timeout, } return &websocketProtocol{ diff --git a/pkg/test/application/echo/client/main.go b/pkg/test/application/echo/client/main.go index 48e12309b815..ef69f3e8a1fc 100644 --- a/pkg/test/application/echo/client/main.go +++ b/pkg/test/application/echo/client/main.go @@ -33,6 +33,7 @@ var ( timeout time.Duration qps int url string + uds string headerKey string headerVal string headers string @@ -46,6 +47,7 @@ func init() { flag.IntVar(&qps, "qps", 0, "Queries per second") flag.DurationVar(&timeout, "timeout", 15*time.Second, "Request timeout") flag.StringVar(&url, "url", "", "Specify URL") + flag.StringVar(&uds, "uds", "", "Specify the Unix Domain Socket to connect to") flag.StringVar(&headerKey, "key", "", "Header key (use Host for authority) - deprecated user headers instead") flag.StringVar(&headerVal, "val", "", "Header value - deprecated") flag.StringVar(&headers, "headers", "", "A list of http headers (use Host for authority) - name:value[,name:value]*") @@ -56,6 +58,7 @@ func init() { func newBatchOptions() (echo.BatchOptions, error) { ops := echo.BatchOptions{ URL: url, + 
UDS: uds, Timeout: timeout, Count: count, QPS: qps, diff --git a/pkg/test/application/echo/echo.go b/pkg/test/application/echo/echo.go index 66bd8edd6410..e18a52530aa0 100644 --- a/pkg/test/application/echo/echo.go +++ b/pkg/test/application/echo/echo.go @@ -18,6 +18,7 @@ import ( "fmt" "net" "net/http" + "os" multierror "github.com/hashicorp/go-multierror" "google.golang.org/grpc" @@ -31,10 +32,11 @@ import ( // Factory is a factory for echo applications. type Factory struct { - Ports model.PortList - TLSCert string - TLSCKey string - Version string + Ports model.PortList + TLSCert string + TLSCKey string + Version string + UDSServer string } // NewApplication implements the application.Factory interface. @@ -51,6 +53,7 @@ func (f *Factory) NewApplication(dialer application.Dialer) (application.Applica tlsCert: f.TLSCert, tlsCKey: f.TLSCKey, version: f.Version, + uds: f.UDSServer, dialer: dialer.Fill(), } if err := app.start(); err != nil { @@ -67,6 +70,7 @@ type echo struct { tlsCKey string version string dialer application.Dialer + uds string servers []serverInterface } @@ -87,8 +91,8 @@ func (a *echo) start() (err error) { return err } - a.servers = make([]serverInterface, len(a.ports)) - for i, p := range a.ports { + a.servers = make([]serverInterface, 0) + for _, p := range a.ports { handler := &handler{ version: a.version, caFile: a.tlsCert, @@ -100,24 +104,35 @@ func (a *echo) start() (err error) { case model.ProtocolHTTP: fallthrough case model.ProtocolHTTPS: - a.servers[i] = &httpServer{ + a.servers = append(a.servers, &httpServer{ port: p, h: handler, - } + }) case model.ProtocolHTTP2: fallthrough case model.ProtocolGRPC: - a.servers[i] = &grpcServer{ + a.servers = append(a.servers, &grpcServer{ port: p, h: handler, tlsCert: a.tlsCert, tlsCKey: a.tlsCKey, - } + }) default: return fmt.Errorf("unsupported protocol: %s", p.Protocol) } } + if len(a.uds) > 0 { + a.servers = append(a.servers, &httpServer{ + uds: a.uds, + h: &handler{ + version: a.version, + 
caFile: a.tlsCert, + dialer: a.dialer, + }, + }) + } + // Start the servers, updating port numbers as necessary. for _, s := range a.servers { if err := s.start(); err != nil { @@ -161,23 +176,39 @@ type serverInterface interface { type httpServer struct { server *http.Server port *model.Port + uds string h *handler } func (s *httpServer) start() error { - // Listen on the given port and update the port if it changed from what was passed in. - listener, p, err := listenOnPort(s.port.Port) + var listener net.Listener + var p int + var err error + + s.server = &http.Server{ + Handler: s.h, + } + + if len(s.uds) > 0 { + p = 0 + listener, err = listenOnUDS(s.uds) + } else { + // Listen on the given port and update the port if it changed from what was passed in. + listener, p, err = listenOnPort(s.port.Port) + // Store the actual listening port back to the argument. + s.port.Port = p + s.h.port = p + } + if err != nil { return err } - // Store the actual listening port back to the argument. - s.port.Port = p - s.h.port = p - fmt.Printf("Listening HTTP/1.1 on %v\n", p) - s.server = &http.Server{ - Addr: fmt.Sprintf(":%d", p), - Handler: s.h, + if len(s.uds) > 0 { + fmt.Printf("Listening HTTP/1.1 on %v\n", s.uds) + } else { + s.server.Addr = fmt.Sprintf(":%d", p) + fmt.Printf("Listening HTTP/1.1 on %v\n", p) } // Start serving HTTP traffic. 
@@ -240,3 +271,13 @@ func listenOnPort(port int) (net.Listener, int, error) { port = ln.Addr().(*net.TCPAddr).Port return ln, port, nil } + +func listenOnUDS(uds string) (net.Listener, error) { + os.Remove(uds) + ln, err := net.Listen("unix", uds) + if err != nil { + return nil, err + } + + return ln, nil +} diff --git a/pkg/test/application/echo/server/main.go b/pkg/test/application/echo/server/main.go index b63027b122e7..3e66f565814d 100644 --- a/pkg/test/application/echo/server/main.go +++ b/pkg/test/application/echo/server/main.go @@ -31,6 +31,7 @@ import ( var ( httpPorts []int grpcPorts []int + uds string version string crt string key string @@ -39,6 +40,7 @@ var ( func init() { flag.IntSliceVar(&httpPorts, "port", []int{8080}, "HTTP/1.1 ports") flag.IntSliceVar(&grpcPorts, "grpc", []int{7070}, "GRPC ports") + flag.StringVar(&uds, "uds", "", "HTTP server on unix domain socket") flag.StringVar(&version, "version", "", "Version string") flag.StringVar(&crt, "crt", "", "gRPC TLS server-side certificate") flag.StringVar(&key, "key", "", "gRPC TLS server-side key") @@ -69,10 +71,11 @@ func main() { } f := &echo.Factory{ - Ports: ports, - TLSCert: crt, - TLSCKey: key, - Version: version, + Ports: ports, + TLSCert: crt, + TLSCKey: key, + Version: version, + UDSServer: uds, } if _, err := f.NewApplication(application.Dialer{}); err != nil { log.Errora(err) diff --git a/pkg/test/config.go b/pkg/test/config.go index c0d0b0be5dfc..faa5f5b25d42 100644 --- a/pkg/test/config.go +++ b/pkg/test/config.go @@ -14,7 +14,10 @@ package test -import "strings" +import ( + "io/ioutil" + "strings" +) const separator = "\n---\n" @@ -34,3 +37,12 @@ func JoinConfigs(parts ...string) string { func SplitConfigs(cfg string) []string { return strings.Split(cfg, separator) } + +func ReadConfigFile(filepath string) (string, error) { + by, err := ioutil.ReadFile(filepath) + if err != nil { + return "", err + } + + return string(by), nil +} diff --git a/pkg/test/deployment/helm.go 
b/pkg/test/deployment/helm.go index e2c36d7cd6d6..d59eab2fff44 100644 --- a/pkg/test/deployment/helm.go +++ b/pkg/test/deployment/helm.go @@ -22,6 +22,7 @@ import ( "path/filepath" "time" + "istio.io/istio/pkg/test" "istio.io/istio/pkg/test/kube" "istio.io/istio/pkg/test/scopes" "istio.io/istio/pkg/test/shell" @@ -35,14 +36,18 @@ metadata: labels: istio-injection: disabled ` + zeroCRDInstallFile = "crd-10.yaml" + oneCRDInstallFile = "crd-11.yaml" + twoCRDInstallFile = "crd-certmanager-10.yaml" ) // HelmConfig configuration for a Helm-based deployment. type HelmConfig struct { - Accessor *kube.Accessor - Namespace string - WorkDir string - ChartDir string + Accessor *kube.Accessor + Namespace string + WorkDir string + ChartDir string + CrdsFilesDir string // Can be either a file name under ChartDir or an absolute file path. ValuesFile string @@ -81,8 +86,11 @@ func NewHelmDeployment(c HelmConfig) (*Instance, error) { // TODO: This is Istio deployment specific. We may need to remove/reconcile this as a parameter // when we support Helm deployment of non-Istio artifacts. 
namespaceData := fmt.Sprintf(namespaceTemplate, c.Namespace) - - generatedYaml = namespaceData + generatedYaml + crdsData, err := getCrdsYamlFiles(c) + if err != nil { + return nil, err + } + generatedYaml = test.JoinConfigs(namespaceData, crdsData, generatedYaml) if err = ioutil.WriteFile(yamlFilePath, []byte(generatedYaml), os.ModePerm); err != nil { return nil, fmt.Errorf("unable to write helm generated yaml: %v", err) @@ -92,6 +100,22 @@ func NewHelmDeployment(c HelmConfig) (*Instance, error) { return NewYamlDeployment(c.Namespace, yamlFilePath), nil } +func getCrdsYamlFiles(c HelmConfig) (string, error) { + // Note: When adding a CRD to the install, a new CRDFile* constant is needed + // This slice contains the list of CRD files installed during testing + istioCRDFileNames := []string{zeroCRDInstallFile, oneCRDInstallFile, twoCRDInstallFile} + // Get Joined Crds Yaml file + prevContent := "" + for _, yamlFileName := range istioCRDFileNames { + content, err := test.ReadConfigFile(path.Join(c.CrdsFilesDir, yamlFileName)) + if err != nil { + return "", err + } + prevContent = test.JoinConfigs(content, prevContent) + } + return prevContent, nil +} + // HelmTemplate calls "helm template". func HelmTemplate(deploymentName, namespace, chartDir, workDir, valuesFile string, values map[string]string) (string, error) { // Apply the overrides for the values file. 
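The `getCrdsYamlFiles` helper above stitches the namespace manifest, the CRD install files, and the generated chart output into one multi-document YAML stream via `test.JoinConfigs`. A minimal standalone sketch of that join pattern, where `separator` mirrors the constant in `pkg/test/config.go` and `joinConfigs` is a local stand-in for the real `test.JoinConfigs`:

```go
package main

import (
	"fmt"
	"strings"
)

// separator mirrors the constant in pkg/test/config.go: Kubernetes tooling
// treats "---" on its own line as a document boundary in a multi-doc stream.
const separator = "\n---\n"

// joinConfigs is a stand-in for test.JoinConfigs: it concatenates YAML
// documents with the separator so helm/kubectl can apply them as one file.
func joinConfigs(parts ...string) string {
	return strings.Join(parts, separator)
}

func main() {
	namespaceData := "apiVersion: v1\nkind: Namespace\nmetadata:\n  name: istio-system"
	crdsData := "apiVersion: apiextensions.k8s.io/v1beta1\nkind: CustomResourceDefinition"
	joined := joinConfigs(namespaceData, crdsData)
	// Exactly one document boundary sits between the two inputs.
	fmt.Println(strings.Count(joined, separator)) // prints 1
}
```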
diff --git a/pkg/test/env/istio.go b/pkg/test/env/istio.go index ddcd535eb76f..337117b447ad 100644 --- a/pkg/test/env/istio.go +++ b/pkg/test/env/istio.go @@ -70,6 +70,8 @@ var ( // IstioChartDir is the Kubernetes Helm chart directory in the repository IstioChartDir = path.Join(ChartsDir, "istio") + CrdsFilesDir = path.Join(ChartsDir, "istio-init/files") + // BookInfoRoot is the root folder for the bookinfo samples BookInfoRoot = path.Join(IstioRoot, "samples/bookinfo") diff --git a/pkg/test/envoy/envoy.go b/pkg/test/envoy/envoy.go index bb7694fb6a28..dc4627c715e2 100644 --- a/pkg/test/envoy/envoy.go +++ b/pkg/test/envoy/envoy.go @@ -166,8 +166,6 @@ func (e *Envoy) getCommandArgs() []string { args := []string{ "--base-id", strconv.FormatUint(uint64(e.baseID), 10), - // Always force v2 config. - "--v2-config-only", "--config-path", e.YamlFile, "--log-level", diff --git a/pkg/test/fakes/policy/backend.go b/pkg/test/fakes/policy/backend.go index f9d887958450..5c8eaa3e48dd 100644 --- a/pkg/test/fakes/policy/backend.go +++ b/pkg/test/fakes/policy/backend.go @@ -75,7 +75,7 @@ func (b *Backend) Port() int { // Start the gRPC service for the policy backend. func (b *Backend) Start() error { - scope.Infof("Starting Policy Backend at port: %d", b.port) + scope.Info("Starting Policy Backend") listener, err := net.Listen("tcp", fmt.Sprintf(":%d", b.port)) if err != nil { @@ -91,7 +91,7 @@ func (b *Backend) Start() error { RegisterControllerServiceServer(grpcServer, b) go func() { - scope.Info("Starting the GRPC service") + scope.Infof("Starting the GRPC service at port: %d", b.port) _ = grpcServer.Serve(listener) }() @@ -151,7 +151,8 @@ func (b *Backend) GetReports(ctx context.Context, req *GetReportsRequest) (*GetR func (b *Backend) Close() error { scope.Info("Backend.Close") b.server.Stop() - return b.listener.Close() + _ = b.listener.Close() + return nil } // Validate is an implementation InfrastructureBackendServer.Validate. 
diff --git a/pkg/test/framework/api/components/bookinfo.go b/pkg/test/framework/api/components/bookinfo.go new file mode 100644 index 000000000000..353befabd6b2 --- /dev/null +++ b/pkg/test/framework/api/components/bookinfo.go @@ -0,0 +1,38 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package components + +import ( + "testing" + + "istio.io/istio/pkg/test/framework/api/component" + "istio.io/istio/pkg/test/framework/api/context" + "istio.io/istio/pkg/test/framework/api/ids" + "istio.io/istio/pkg/test/framework/api/lifecycle" +) + +// Bookinfo represents a deployed Bookinfo app instance in a Kubernetes cluster. +type Bookinfo interface { + component.Instance + + DeployRatingsV2(ctx context.Instance, scope lifecycle.Scope) error + + DeployMongoDb(ctx context.Instance, scope lifecycle.Scope) error +} + +// GetBookinfo from the repository. +func GetBookinfo(e component.Repository, t testing.TB) Bookinfo { + return e.GetComponentOrFail(ids.BookInfo, t).(Bookinfo) +} diff --git a/pkg/test/framework/api/components/galley.go b/pkg/test/framework/api/components/galley.go index d7dc527efea2..bb8d2f7419b5 100644 --- a/pkg/test/framework/api/components/galley.go +++ b/pkg/test/framework/api/components/galley.go @@ -35,7 +35,7 @@ type Galley interface { SetMeshConfig(yamlText string) error // WaitForSnapshot waits until the given snapshot is observed for the given type URL. 
- WaitForSnapshot(typeURL string, snapshot ...map[string]interface{}) error + WaitForSnapshot(collection string, snapshot ...map[string]interface{}) error } // GetGalley from the repository diff --git a/pkg/test/framework/api/descriptors/descriptors.go b/pkg/test/framework/api/descriptors/descriptors.go index 17a77f3331e4..b10f60c37105 100644 --- a/pkg/test/framework/api/descriptors/descriptors.go +++ b/pkg/test/framework/api/descriptors/descriptors.go @@ -92,7 +92,7 @@ var ( // PolicyBackend component PolicyBackend = component.Descriptor{ ID: ids.PolicyBackend, - IsSystemComponent: false, + IsSystemComponent: true, Requires: []component.Requirement{ &ids.Mixer, &ids.Environment, diff --git a/pkg/test/framework/operations.go b/pkg/test/framework/operations.go index ec43be1a97ac..b05379405f74 100644 --- a/pkg/test/framework/operations.go +++ b/pkg/test/framework/operations.go @@ -25,8 +25,8 @@ var r = runtime.New() // Run is a helper for executing test main with appropriate resource allocation/doCleanup steps. // It allows us to do post-run doCleanup, and flag parsing. -func Run(testID string, m *testing.M) { - _, _ = r.Run(testID, m) +func Run(testID string, m *testing.M) (int, error) { + return r.Run(testID, m) } // GetContext resets and returns the environment. Should be called exactly once per test. diff --git a/pkg/test/framework/runtime/components/apps/agent/pilot_agent.go b/pkg/test/framework/runtime/components/apps/agent/pilot_agent.go index 78e7d2311348..80c2331db6a0 100644 --- a/pkg/test/framework/runtime/components/apps/agent/pilot_agent.go +++ b/pkg/test/framework/runtime/components/apps/agent/pilot_agent.go @@ -226,6 +226,7 @@ type pilotAgent struct { envoy *envoy.Envoy adminPort int ports []*MappedPort + nodeID string yamlFile string ownedDir string serviceEntry model.Config @@ -251,6 +252,12 @@ func (a *pilotAgent) GetPorts() []*MappedPort { return a.ports } +// GetNodeID returns the envoy metadata ID for pilot's service discovery. 
+func GetNodeID(agent Agent) string { + pa := agent.(*pilotAgent) + return pa.nodeID +} + // CheckConfiguredForService implements the agent.Agent interface. func (a *pilotAgent) CheckConfiguredForService(target Agent) error { cfg, err := envoy.GetConfigDump(a.GetAdminPort()) @@ -385,6 +392,7 @@ func (a *pilotAgent) start(serviceName, version string, serviceManager *service. } nodeID := f.generateServiceNode(serviceName) + a.nodeID = nodeID // Create the YAML configuration file for Envoy. if err = a.createYamlFile(serviceName, nodeID, f); err != nil { diff --git a/pkg/test/framework/runtime/components/apps/native.go b/pkg/test/framework/runtime/components/apps/native.go index 6f8a2b5a27fc..472ce69e904d 100644 --- a/pkg/test/framework/runtime/components/apps/native.go +++ b/pkg/test/framework/runtime/components/apps/native.go @@ -24,6 +24,8 @@ import ( "testing" "time" + xdsapi "github.com/envoyproxy/go-control-plane/envoy/api/v2" + "github.com/envoyproxy/go-control-plane/envoy/api/v2/core" multierror "github.com/hashicorp/go-multierror" "istio.io/istio/pilot/pkg/model" @@ -238,6 +240,17 @@ func configDumpStr(a components.App) (string, error) { return envoy.GetConfigDumpStr(a.(*nativeApp).agent.GetAdminPort()) } +// ConstructDiscoveryRequest returns an Envoy discovery request. 
+func ConstructDiscoveryRequest(a components.App, typeURL string) *xdsapi.DiscoveryRequest { + nodeID := agent.GetNodeID(a.(*nativeApp).agent) + return &xdsapi.DiscoveryRequest{ + Node: &core.Node{ + Id: nodeID, + }, + TypeUrl: typeURL, + } +} + type appConfig struct { serviceName string version string diff --git a/pkg/test/framework/runtime/components/bookinfo/configs.go b/pkg/test/framework/runtime/components/bookinfo/configs.go index 3d2f5afb41f0..7c08c175e380 100644 --- a/pkg/test/framework/runtime/components/bookinfo/configs.go +++ b/pkg/test/framework/runtime/components/bookinfo/configs.go @@ -15,11 +15,13 @@ package bookinfo import ( - "io/ioutil" "path" "testing" + "istio.io/istio/pkg/test" "istio.io/istio/pkg/test/env" + "istio.io/istio/pkg/test/framework/api/component" + "istio.io/istio/pkg/test/framework/runtime/components/environment/kube" ) // ConfigFile represents config yaml files for different bookinfo scenarios. @@ -32,9 +34,15 @@ const ( // NetworkingDestinationRuleAll uses "networking/destination-rule-all.yaml" NetworkingDestinationRuleAll ConfigFile = "networking/destination-rule-all.yaml" + // NetworkingDestinationRuleAllMtls uses "networking/destination-rule-all-mtls.yaml" + NetworkingDestinationRuleAllMtls ConfigFile = "networking/destination-rule-all-mtls.yaml" + // NetworkingVirtualServiceAllV1 uses "networking/virtual-service-all-v1.yaml" NetworkingVirtualServiceAllV1 ConfigFile = "networking/virtual-service-all-v1.yaml" + // NetworkingTcpDbRule uses "networking/virtual-service-ratings-db.yaml" + NetworkingTCPDbRule ConfigFile = "networking/virtual-service-ratings-db.yaml" + // MixerRuleRatingsRatelimit uses "policy/mixer-rule-ratings-ratelimit.yaml" MixerRuleRatingsRatelimit ConfigFile = "policy/mixer-rule-ratings-ratelimit.yaml" @@ -50,10 +58,21 @@ func (l ConfigFile) LoadOrFail(t testing.TB) string { t.Helper() p := path.Join(env.BookInfoRoot, string(l)) - by, err := ioutil.ReadFile(p) + content, err := test.ReadConfigFile(p) if err != 
nil { - t.Fatalf("Unable to load config %s at %v", l, p) + t.Fatalf("unable to load config %s at %v, err:%v", l, p, err) } - return string(by) + return content +} + +func GetDestinationRuleConfigFile(t testing.TB, ctx component.Repository) ConfigFile { + env, err := kube.GetEnvironment(ctx) + if err != nil { + t.Fatalf("Could not get test environment: %v", err) + } + if env.IsMtlsEnabled() { + return NetworkingDestinationRuleAllMtls + } + return NetworkingDestinationRuleAll } diff --git a/pkg/test/framework/runtime/components/bookinfo/kube.go b/pkg/test/framework/runtime/components/bookinfo/kube.go index fb8de6640bbd..d03edd5a6cd5 100644 --- a/pkg/test/framework/runtime/components/bookinfo/kube.go +++ b/pkg/test/framework/runtime/components/bookinfo/kube.go @@ -20,6 +20,7 @@ import ( "istio.io/istio/pkg/test/env" "istio.io/istio/pkg/test/framework/api/component" + "istio.io/istio/pkg/test/framework/api/components" "istio.io/istio/pkg/test/framework/api/context" "istio.io/istio/pkg/test/framework/api/descriptors" "istio.io/istio/pkg/test/framework/api/lifecycle" @@ -32,9 +33,12 @@ type bookInfoConfig string const ( // BookInfoConfig uses "bookinfo.yaml" - BookInfoConfig bookInfoConfig = "bookinfo.yaml" + BookInfoConfig bookInfoConfig = "bookinfo.yaml" + BookinfoRatingsv2 bookInfoConfig = "bookinfo-ratings-v2.yaml" + BookinfoDb bookInfoConfig = "bookinfo-db.yaml" ) +var _ components.Bookinfo = &kubeComponent{} var _ api.Component = &kubeComponent{} // NewKubeComponent factory function for the component @@ -58,22 +62,40 @@ func (c *kubeComponent) Scope() lifecycle.Scope { func (c *kubeComponent) Start(ctx context.Instance, scope lifecycle.Scope) (err error) { c.scope = scope + return deployBookInfoService(ctx, scope, string(BookInfoConfig)) +} + +// DeployRatingsV2 deploys ratings v2 service +func (c *kubeComponent) DeployRatingsV2(ctx context.Instance, scope lifecycle.Scope) (err error) { + c.scope = scope + + return deployBookInfoService(ctx, scope, 
string(BookinfoRatingsv2)) +} + +// DeployMongoDb deploys mongodb service +func (c *kubeComponent) DeployMongoDb(ctx context.Instance, scope lifecycle.Scope) (err error) { + c.scope = scope + + return deployBookInfoService(ctx, scope, string(BookinfoDb)) +} + +func deployBookInfoService(ctx component.Repository, scope lifecycle.Scope, bookinfoYamlFile string) (err error) { e, err := kube.GetEnvironment(ctx) if err != nil { return err } - scopes.CI.Info("=== BEGIN: Deploy BookInfoConfig (via Yaml File) ===") + scopes.CI.Infof("=== BEGIN: Deploy BookInfoConfig (via Yaml File - %s) ===", bookinfoYamlFile) defer func() { if err != nil { - err = fmt.Errorf("BookInfoConfig deployment failed: %v", err) // nolint:golint - scopes.CI.Infof("=== FAILED: Deploy BookInfoConfig ===") + err = fmt.Errorf("BookInfoConfig %s deployment failed: %v", bookinfoYamlFile, err) // nolint:golint + scopes.CI.Infof("=== FAILED: Deploy BookInfoConfig %s ===", bookinfoYamlFile) } else { - scopes.CI.Infof("=== SUCCEEDED: Deploy BookInfoConfig ===") + scopes.CI.Infof("=== SUCCEEDED: Deploy BookInfoConfig %s ===", bookinfoYamlFile) } }() - yamlFile := path.Join(env.BookInfoKube, string(BookInfoConfig)) + yamlFile := path.Join(env.BookInfoKube, bookinfoYamlFile) _, err = e.DeployYaml(yamlFile, scope) return } diff --git a/pkg/test/framework/runtime/components/environment/kube/environment.go b/pkg/test/framework/runtime/components/environment/kube/environment.go index a14d3e8ac57e..f40c987e8cc2 100644 --- a/pkg/test/framework/runtime/components/environment/kube/environment.go +++ b/pkg/test/framework/runtime/components/environment/kube/environment.go @@ -18,13 +18,16 @@ import ( "bytes" "fmt" "io" + "path/filepath" "strings" "testing" "text/template" "github.com/google/uuid" multierror "github.com/hashicorp/go-multierror" + yaml2 "gopkg.in/yaml.v2" + "istio.io/istio/pkg/test" "istio.io/istio/pkg/test/deployment" "istio.io/istio/pkg/test/framework/api/component" 
"istio.io/istio/pkg/test/framework/api/context" @@ -92,6 +95,34 @@ func (e *Environment) Scope() lifecycle.Scope { return e.scope } +// Is mtls enabled. Check in Values flag and Values file. +func (e *Environment) IsMtlsEnabled() bool { + if e.s.Values["global.mtls.enabled"] == "true" { + return true + } + + data, err := test.ReadConfigFile(filepath.Join(e.s.ChartDir, e.s.ValuesFile)) + if err != nil { + return false + } + m := make(map[interface{}]interface{}) + err = yaml2.Unmarshal([]byte(data), &m) + if err != nil { + return false + } + if m["global"] != nil { + switch globalVal := m["global"].(type) { + case map[interface{}]interface{}: + switch mtlsVal := globalVal["mtls"].(type) { + case map[interface{}]interface{}: + return mtlsVal["enabled"].(bool) + } + } + } + + return false +} + // Descriptor for this component func (e *Environment) Descriptor() component.Descriptor { return descriptors.KubernetesEnvironment @@ -179,7 +210,7 @@ func (e *Environment) Start(ctx context.Instance, scope lifecycle.Scope) error { name: e.s.TestNamespace, annotation: "istio-test", accessor: e.Accessor, - injectionEnabled: false, + injectionEnabled: true, } if err := e.systemNamespace.allocate(); err != nil { @@ -213,12 +244,13 @@ func (e *Environment) deployIstio() (err error) { }() e.deployment, err = deployment.NewHelmDeployment(deployment.HelmConfig{ - Accessor: e.Accessor, - Namespace: e.systemNamespace.allocatedName, - WorkDir: e.ctx.WorkDir(), - ChartDir: e.s.ChartDir, - ValuesFile: e.s.ValuesFile, - Values: e.s.Values, + Accessor: e.Accessor, + Namespace: e.systemNamespace.allocatedName, + WorkDir: e.ctx.WorkDir(), + ChartDir: e.s.ChartDir, + CrdsFilesDir: e.s.CrdsFilesDir, + ValuesFile: e.s.ValuesFile, + Values: e.s.Values, }) if err == nil { err = e.deployment.Deploy(e.Accessor, true, retry.Timeout(e.s.DeployTimeout)) diff --git a/pkg/test/framework/runtime/components/environment/kube/settings.go b/pkg/test/framework/runtime/components/environment/kube/settings.go 
index d474cce89f8d..68ac4ba4d173 100644 --- a/pkg/test/framework/runtime/components/environment/kube/settings.go +++ b/pkg/test/framework/runtime/components/environment/kube/settings.go @@ -64,6 +64,7 @@ var ( DeployTimeout: DefaultDeployTimeout, UndeployTimeout: DefaultUndeployTimeout, ChartDir: env.IstioChartDir, + CrdsFilesDir: env.CrdsFilesDir, ValuesFile: DefaultValuesFile, } @@ -111,6 +112,9 @@ type settings struct { // The top-level Helm chart dir. ChartDir string + // The top-level Helm Crds files dir. + CrdsFilesDir string + // The Helm values file to be used. ValuesFile string @@ -137,6 +141,10 @@ func newSettings() (*settings, error) { return nil, err } + if err := normalizeFile(&s.CrdsFilesDir); err != nil { + return nil, err + } + var err error s.Values, err = newHelmValues() if err != nil { diff --git a/pkg/test/framework/runtime/components/galley/client.go b/pkg/test/framework/runtime/components/galley/client.go index 1996703d9980..63822b189099 100644 --- a/pkg/test/framework/runtime/components/galley/client.go +++ b/pkg/test/framework/runtime/components/galley/client.go @@ -26,7 +26,8 @@ import ( mcp "istio.io/api/mcp/v1alpha1" mcpclient "istio.io/istio/pkg/mcp/client" - mcptestmon "istio.io/istio/pkg/mcp/testing/monitoring" + "istio.io/istio/pkg/mcp/sink" + "istio.io/istio/pkg/mcp/testing/monitoring" tcontext "istio.io/istio/pkg/test/framework/api/context" "istio.io/istio/pkg/test/scopes" "istio.io/istio/pkg/test/util/retry" @@ -37,34 +38,38 @@ type client struct { ctx tcontext.Instance } -func (c *client) waitForSnapshot(typeURL string, snapshot []map[string]interface{}) error { +func (c *client) waitForSnapshot(collection string, snapshot []map[string]interface{}) error { conn, err := c.dialGrpc() if err != nil { return err } defer func() { _ = conn.Close() }() - urls := []string{typeURL} - ctx, cancel := context.WithCancel(context.Background()) defer cancel() - u := mcpclient.NewInMemoryUpdater() + u := sink.NewInMemoryUpdater() cl := 
mcp.NewAggregatedMeshConfigServiceClient(conn) - mcpc := mcpclient.New(cl, urls, u, "", map[string]string{}, mcptestmon.NewInMemoryClientStatsContext()) + options := &sink.Options{ + CollectionOptions: sink.CollectionOptionsFromSlice([]string{collection}), + Updater: u, + ID: "", + Reporter: monitoring.NewInMemoryStatsContext(), + } + mcpc := mcpclient.New(cl, options) go mcpc.Run(ctx) var result *comparisonResult _, err = retry.Do(func() (interface{}, bool, error) { - items := u.Get(typeURL) + items := u.Get(collection) result, err = c.checkSnapshot(items, snapshot) if err != nil { return nil, false, err } err = result.generateError() return nil, err == nil, err - }, retry.Delay(time.Millisecond), retry.Timeout(time.Second*5)) + }, retry.Delay(time.Millisecond), retry.Timeout(time.Second*30)) return err } @@ -82,7 +87,7 @@ func (c *client) waitForStartup() (err error) { return } -func (c *client) checkSnapshot(actual []*mcpclient.Object, expected []map[string]interface{}) (*comparisonResult, error) { +func (c *client) checkSnapshot(actual []*sink.Object, expected []map[string]interface{}) (*comparisonResult, error) { expectedMap := make(map[string]interface{}) for _, e := range expected { name, err := extractName(e) diff --git a/pkg/test/framework/runtime/components/galley/comparison.go b/pkg/test/framework/runtime/components/galley/comparison.go index 841100d5b3e5..fdb607c4a480 100644 --- a/pkg/test/framework/runtime/components/galley/comparison.go +++ b/pkg/test/framework/runtime/components/galley/comparison.go @@ -42,12 +42,12 @@ func (r *comparisonResult) generateError() (err error) { } for _, n := range r.extraActual { - js, er := json.MarshalIndent(r.expected[n], "", " ") + js, er := json.MarshalIndent(r.actual[n], "", " ") if er != nil { return er } - err = multierror.Append(err, fmt.Errorf("extra resource not found: %s\n%v", n, string(js))) + err = multierror.Append(err, fmt.Errorf("extra resource found: %s\n%v", n, string(js))) } for _, n := range 
r.conflicting { @@ -65,7 +65,7 @@ func (r *comparisonResult) generateError() (err error) { A: difflib.SplitLines(string(ejs)), ToFile: fmt.Sprintf("Actual %q", n), B: difflib.SplitLines(string(ajs)), - Context: 10, + Context: 100, } text, er := difflib.GetUnifiedDiffString(diff) if er != nil { diff --git a/pkg/test/framework/runtime/components/galley/native.go b/pkg/test/framework/runtime/components/galley/native.go index 7d670d8ee432..56b38f4d12dc 100644 --- a/pkg/test/framework/runtime/components/galley/native.go +++ b/pkg/test/framework/runtime/components/galley/native.go @@ -87,7 +87,14 @@ func (c *nativeComponent) Scope() lifecycle.Scope { // SetMeshConfig applies the given mesh config yaml file via Galley. func (c *nativeComponent) SetMeshConfig(yamlText string) error { - return ioutil.WriteFile(c.meshConfigFile, []byte(yamlText), os.ModePerm) + if err := ioutil.WriteFile(c.meshConfigFile, []byte(yamlText), os.ModePerm); err != nil { + return err + } + if err := c.Close(); err != nil { + return err + } + + return c.restart() } // ClearConfig implements Galley.ClearConfig. @@ -119,8 +126,8 @@ func (c *nativeComponent) ApplyConfig(yamlText string) (err error) { } // WaitForSnapshot implements Galley.WaitForSnapshot. -func (c *nativeComponent) WaitForSnapshot(typeURL string, snapshot ...map[string]interface{}) error { - return c.client.waitForSnapshot(typeURL, snapshot) +func (c *nativeComponent) WaitForSnapshot(collection string, snapshot ...map[string]interface{}) error { + return c.client.waitForSnapshot(collection, snapshot) } // Start implements Component.Start. 
@@ -159,6 +166,10 @@ func (c *nativeComponent) Reset() error { return err } + return c.restart() +} + +func (c *nativeComponent) restart() error { a := server.DefaultArgs() a.Insecure = true a.EnableServer = true diff --git a/pkg/test/framework/runtime/components/prometheus/kube.go b/pkg/test/framework/runtime/components/prometheus/kube.go index 1543c0abf36c..430bfccfa07d 100644 --- a/pkg/test/framework/runtime/components/prometheus/kube.go +++ b/pkg/test/framework/runtime/components/prometheus/kube.go @@ -42,8 +42,8 @@ const ( ) var ( - retryTimeout = retry.Timeout(time.Second * 30) - retryDelay = retry.Delay(time.Second * 5) + retryTimeout = retry.Timeout(time.Second * 120) + retryDelay = retry.Delay(time.Second * 20) _ components.Prometheus = &kubeComponent{} _ api.Component = &kubeComponent{} diff --git a/pkg/test/kube/accessor.go b/pkg/test/kube/accessor.go index fc77478a1a9f..13aa99e64417 100644 --- a/pkg/test/kube/accessor.go +++ b/pkg/test/kube/accessor.go @@ -111,6 +111,11 @@ func (a *Accessor) GetPod(namespace, name string) (*kubeApiCore.Pod, error) { Pods(namespace).Get(name, kubeApiMeta.GetOptions{}) } +// DeletePod deletes the given pod. +func (a *Accessor) DeletePod(namespace, name string) error { + return a.set.CoreV1().Pods(namespace).Delete(name, &kubeApiMeta.DeleteOptions{}) +} + // FindPodBySelectors returns the first matching pod, given a namespace and a set of selectors. func (a *Accessor) FindPodBySelectors(namespace string, selectors ...string) (kubeApiCore.Pod, error) { list, err := a.GetPods(namespace, selectors...) 
@@ -164,7 +169,7 @@ func (a *Accessor) WaitUntilPodsAreReady(fetchFunc PodFetchFunc, opts ...retry.O for i, p := range pods { msg := "Ready" - if e := checkPodReady(&p); e != nil { + if e := CheckPodReady(&p); e != nil { msg = e.Error() err = multierror.Append(err, fmt.Errorf("%s/%s: %s", p.Namespace, p.Name, msg)) } @@ -288,6 +293,11 @@ func (a *Accessor) GetSecret(ns string) kubeClientCore.SecretInterface { return a.set.CoreV1().Secrets(ns) } +// GetEndpoints returns the endpoints for the given service. +func (a *Accessor) GetEndpoints(ns, service string, options kubeApiMeta.GetOptions) (*kubeApiCore.Endpoints, error) { + return a.set.CoreV1().Endpoints(ns).Get(service, options) +} + // CreateNamespace with the given name. Also adds an "istio-testing" annotation. func (a *Accessor) CreateNamespace(ns string, istioTestingAnnotation string, injectionEnabled bool) error { scopes.Framework.Debugf("Creating namespace: %s", ns) @@ -376,7 +386,8 @@ func (a *Accessor) Exec(namespace, pod, container, command string) (string, erro return a.ctl.exec(namespace, pod, container, command) } -func checkPodReady(pod *kubeApiCore.Pod) error { +// CheckPodReady returns nil if the given pod and all of its containers are ready. +func CheckPodReady(pod *kubeApiCore.Pod) error { switch pod.Status.Phase { case kubeApiCore.PodSucceeded: return nil diff --git a/prow/e2e-suite.sh b/prow/e2e-suite.sh index 46fbc3ce6051..dbdaa6eb0992 100755 --- a/prow/e2e-suite.sh +++ b/prow/e2e-suite.sh @@ -48,6 +48,9 @@ source "${ROOT}/prow/lib.sh" setup_e2e_cluster E2E_ARGS+=("--test_logs_path=${ARTIFACTS_DIR}") +# e2e tests on prow use clusters borrowed from boskos, which cleans up the +# clusters. There is no need to cleanup in the test jobs. 
+E2E_ARGS+=("--skip_cleanup") export HUB=${HUB:-"gcr.io/istio-testing"} export TAG="${TAG:-${GIT_SHA}}" diff --git a/prow/e2e_pilotv2_auth_sds.sh b/prow/e2e_pilotv2_auth_sds.sh new file mode 100755 index 000000000000..041593f9e1a8 --- /dev/null +++ b/prow/e2e_pilotv2_auth_sds.sh @@ -0,0 +1,32 @@ +#!/bin/bash + +# Copyright 2017 Istio Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +####################################### +# # +# e2e_pilotv2_auth_sds # +# # +####################################### + +# Exit immediately for non zero status +set -e +# Check unset variables +set -u +# Print commands +set -x + +# Run tests with auth enabled through SDS +#echo 'Running pilot e2e tests (v1alpha3, auth through sds)' +./prow/e2e-suite.sh --single_test e2e_pilotv2_auth_sds diff --git a/prow/istio-integ-k8s-tests.sh b/prow/istio-integ-k8s-tests.sh index 076599c8eb7b..8cdd95d55c8b 100755 --- a/prow/istio-integ-k8s-tests.sh +++ b/prow/istio-integ-k8s-tests.sh @@ -72,5 +72,7 @@ make init setup_cluster -time make test.integration.kube T=-v +JUNIT_UNIT_TEST_XML="${ARTIFACTS_DIR}/junit_unit-tests.xml" \ +T="-v" \ +make test.integration.kube diff --git a/prow/istio-integ-local-tests.sh b/prow/istio-integ-local-tests.sh index 533df2cd7b2f..2e67511971a9 100755 --- a/prow/istio-integ-local-tests.sh +++ b/prow/istio-integ-local-tests.sh @@ -31,8 +31,8 @@ setup_and_export_git_sha cd "${ROOT}" -# Unit tests are run against a local apiserver and etcd. 
-# Integration/e2e tests in the other scripts are run against GKE or real clusters. +make sync + JUNIT_UNIT_TEST_XML="${ARTIFACTS_DIR}/junit_unit-tests.xml" \ T="-v" \ make test.integration.local diff --git a/samples/httpbin/httpbin-nodeport.yaml b/samples/httpbin/httpbin-nodeport.yaml new file mode 100644 index 000000000000..8f09c91e415e --- /dev/null +++ b/samples/httpbin/httpbin-nodeport.yaml @@ -0,0 +1,50 @@ +# Copyright 2017 Istio Authors +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +################################################################################################## +# httpbin service +################################################################################################## +apiVersion: v1 +kind: Service +metadata: + name: httpbin + labels: + app: httpbin +spec: + type: NodePort + ports: + - name: http + port: 8000 + targetPort: 80 + selector: + app: httpbin +--- +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: httpbin +spec: + replicas: 1 + template: + metadata: + labels: + app: httpbin + version: v1 + spec: + containers: + - image: docker.io/kennethreitz/httpbin + imagePullPolicy: IfNotPresent + name: httpbin + ports: + - containerPort: 80 diff --git a/security/cmd/istio_ca/main.go b/security/cmd/istio_ca/main.go index d9935a611214..d0474f27de79 100644 --- a/security/cmd/istio_ca/main.go +++ b/security/cmd/istio_ca/main.go @@ -20,6 +20,8 @@ import ( "strings" "time" + "istio.io/istio/pkg/spiffe" + "github.com/spf13/cobra" "github.com/spf13/cobra/doc" "k8s.io/client-go/kubernetes" @@ -161,9 +163,8 @@ func init() { "When set to true, the '--signing-cert' and '--signing-key' options are ignored.") flags.DurationVar(&opts.selfSignedCACertTTL, "self-signed-ca-cert-ttl", cmd.DefaultSelfSignedCACertTTL, "The TTL of self-signed CA root certificate") - flags.StringVar(&opts.trustDomain, "trust-domain", controller.DefaultTrustDomain, - fmt.Sprintf("The domain serves to identify the system with spiffe (default: %s)", controller.DefaultTrustDomain)) - + flags.StringVar(&opts.trustDomain, "trust-domain", "", + "The domain serves to identify the system with spiffe ") // Upstream CA configuration if Citadel interacts with upstream CA. flags.StringVar(&opts.cAClientConfig.CAAddress, "upstream-ca-address", "", "The IP:port address of the upstream "+ "CA. 
When set, the CA will rely on the upstream Citadel to provision its own certificate.") @@ -294,7 +295,7 @@ func runCA() { ca := createCA(cs.CoreV1()) // For workloads in K8s, we apply the configured workload cert TTL. sc, err := controller.NewSecretController(ca, - opts.workloadCertTTL, opts.trustDomain, + opts.workloadCertTTL, opts.workloadCertGracePeriodRatio, opts.workloadCertMinGracePeriod, opts.dualUse, cs.CoreV1(), opts.signCACerts, opts.listenedNamespace, webhooks) if err != nil { @@ -327,7 +328,7 @@ func runCA() { // The CA API uses cert with the max workload cert TTL. hostnames := append(strings.Split(opts.grpcHosts, ","), fqdn()) - caServer, startErr := caserver.New(ca, opts.maxWorkloadCertTTL, opts.signCACerts, hostnames, opts.grpcPort) + caServer, startErr := caserver.New(ca, opts.maxWorkloadCertTTL, opts.signCACerts, hostnames, opts.grpcPort, spiffe.GetTrustDomain()) if startErr != nil { fatalf("Failed to create istio ca server: %v", startErr) } @@ -402,8 +403,9 @@ func createCA(client corev1.CoreV1Interface) *ca.IstioCA { if opts.selfSignedCA { log.Info("Use self-signed certificate as the CA certificate") + spiffe.SetTrustDomain(spiffe.DetermineTrustDomain(opts.trustDomain, "", len(opts.kubeConfigFile) != 0)) caOpts, err = ca.NewSelfSignedIstioCAOptions(opts.selfSignedCACertTTL, opts.workloadCertTTL, - opts.maxWorkloadCertTTL, opts.trustDomain, opts.dualUse, + opts.maxWorkloadCertTTL, spiffe.GetTrustDomain(), opts.dualUse, opts.istioCaStorageNamespace, client) if err != nil { fatalf("Failed to create a self-signed Citadel (error: %v)", err) diff --git a/security/pkg/caclient/client.go b/security/pkg/caclient/client.go index 7f98da844ef6..cebdc3fbf1c1 100644 --- a/security/pkg/caclient/client.go +++ b/security/pkg/caclient/client.go @@ -21,12 +21,19 @@ import ( "istio.io/istio/pkg/log" "istio.io/istio/security/pkg/caclient/protocol" - "istio.io/istio/security/pkg/nodeagent/secrets" pkiutil "istio.io/istio/security/pkg/pki/util" 
"istio.io/istio/security/pkg/platform" pb "istio.io/istio/security/proto" ) +const ( + // keyFilePermission is the permission bits for private key file. + keyFilePermission = 0600 + + // certFilePermission is the permission bits for certificate file. + certFilePermission = 0644 +) + // CAClient is a client to provision key and certificate from the upstream CA via CSR protocol. type CAClient struct { platformClient platform.Client @@ -109,8 +116,8 @@ func (c *CAClient) createCSRRequest(opts *pkiutil.CertOptions) ([]byte, *pb.CsrR // SaveKeyCert stores the specified key/cert into file specified by the path. // TODO(incfly): move this into CAClient struct's own method later. func SaveKeyCert(keyFile, certFile string, privKey, cert []byte) error { - if err := ioutil.WriteFile(keyFile, privKey, secrets.KeyFilePermission); err != nil { + if err := ioutil.WriteFile(keyFile, privKey, keyFilePermission); err != nil { return err } - return ioutil.WriteFile(certFile, cert, secrets.CertFilePermission) + return ioutil.WriteFile(certFile, cert, certFilePermission) } diff --git a/security/pkg/k8s/controller/workloadsecret.go b/security/pkg/k8s/controller/workloadsecret.go index 3eaa5cd67b0e..f1e3fcc695d3 100644 --- a/security/pkg/k8s/controller/workloadsecret.go +++ b/security/pkg/k8s/controller/workloadsecret.go @@ -21,7 +21,10 @@ import ( "strings" "time" + "istio.io/istio/pkg/spiffe" + v1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/fields" @@ -49,9 +52,6 @@ const ( // The key to specify corresponding service account in the annotation of K8s secrets. ServiceAccountNameAnnotationKey = "istio.io/service-account.name" - // The default SPIFFE URL value for trust domain - DefaultTrustDomain = "cluster.local" - secretNamePrefix = "istio." 
secretResyncPeriod = time.Minute @@ -84,7 +84,6 @@ type DNSNameEntry struct { type SecretController struct { ca ca.CertificateAuthority certTTL time.Duration - trustDomain string core corev1.CoreV1Interface minGracePeriod time.Duration // Length of the grace period for the certificate rotation. @@ -111,7 +110,7 @@ type SecretController struct { } // NewSecretController returns a pointer to a newly constructed SecretController instance. -func NewSecretController(ca ca.CertificateAuthority, certTTL time.Duration, trustDomain string, +func NewSecretController(ca ca.CertificateAuthority, certTTL time.Duration, gracePeriodRatio float32, minGracePeriod time.Duration, dualUse bool, core corev1.CoreV1Interface, forCA bool, namespace string, dnsNames map[string]DNSNameEntry) (*SecretController, error) { @@ -123,14 +122,9 @@ func NewSecretController(ca ca.CertificateAuthority, certTTL time.Duration, trus gracePeriodRatio, recommendedMinGracePeriodRatio, recommendedMaxGracePeriodRatio) } - if trustDomain == "" { - trustDomain = DefaultTrustDomain - } - c := &SecretController{ ca: ca, certTTL: certTTL, - trustDomain: trustDomain, gracePeriodRatio: gracePeriodRatio, minGracePeriod: minGracePeriod, dualUse: dualUse, @@ -308,7 +302,7 @@ func (sc *SecretController) scrtDeleted(obj interface{}) { } func (sc *SecretController) generateKeyAndCert(saName string, saNamespace string) ([]byte, []byte, error) { - id := fmt.Sprintf("%s://%s/ns/%s/sa/%s", util.URIScheme, sc.trustDomain, saNamespace, saName) + id := spiffe.MustGenSpiffeURI(saNamespace, saName) if sc.dnsNames != nil { // Control plane components in same namespace. 
if e, ok := sc.dnsNames[saName]; ok { diff --git a/security/pkg/k8s/controller/workloadsecret_test.go b/security/pkg/k8s/controller/workloadsecret_test.go index fb26cd51db59..9aa3da275b47 100644 --- a/security/pkg/k8s/controller/workloadsecret_test.go +++ b/security/pkg/k8s/controller/workloadsecret_test.go @@ -161,7 +161,7 @@ func TestSecretController(t *testing.T) { Namespace: "test-ns", }, } - controller, err := NewSecretController(createFakeCA(), defaultTTL, DefaultTrustDomain, + controller, err := NewSecretController(createFakeCA(), defaultTTL, tc.gracePeriodRatio, defaultMinGracePeriod, false, client.CoreV1(), false, metav1.NamespaceAll, webhooks) if tc.shouldFail { @@ -202,7 +202,7 @@ func TestSecretContent(t *testing.T) { saName := "test-serviceaccount" saNamespace := "test-namespace" client := fake.NewSimpleClientset() - controller, err := NewSecretController(createFakeCA(), defaultTTL, DefaultTrustDomain, + controller, err := NewSecretController(createFakeCA(), defaultTTL, defaultGracePeriodRatio, defaultMinGracePeriod, false, client.CoreV1(), false, metav1.NamespaceAll, map[string]DNSNameEntry{}) if err != nil { @@ -224,7 +224,7 @@ func TestSecretContent(t *testing.T) { } func TestDeletedIstioSecret(t *testing.T) { client := fake.NewSimpleClientset() - controller, err := NewSecretController(createFakeCA(), defaultTTL, DefaultTrustDomain, + controller, err := NewSecretController(createFakeCA(), defaultTTL, defaultGracePeriodRatio, defaultMinGracePeriod, false, client.CoreV1(), false, metav1.NamespaceAll, nil) if err != nil { @@ -343,7 +343,7 @@ func TestUpdateSecret(t *testing.T) { for k, tc := range testCases { client := fake.NewSimpleClientset() - controller, err := NewSecretController(createFakeCA(), time.Hour, DefaultTrustDomain, + controller, err := NewSecretController(createFakeCA(), time.Hour, tc.gracePeriodRatio, tc.minGracePeriod, false, client.CoreV1(), false, metav1.NamespaceAll, nil) if err != nil { t.Errorf("failed to create secret 
controller: %v", err) diff --git a/security/pkg/k8s/tokenreview/k8sauthn.go b/security/pkg/k8s/tokenreview/k8sauthn.go index 18e1f61d31ea..6a943ae4d243 100644 --- a/security/pkg/k8s/tokenreview/k8sauthn.go +++ b/security/pkg/k8s/tokenreview/k8sauthn.go @@ -96,33 +96,33 @@ func (authn *K8sSvcAcctAuthn) reviewServiceAccountAtK8sAPIServer(k8sAPIServerURL } // ValidateK8sJwt validates a k8s JWT at API server. -// Return <namespace>:<service account name> in the JWT when the validation passes. +// Return {<namespace>, <service account name>} in the JWT when the validation passes. // Otherwise, return the error. // jwt: the JWT to validate -func (authn *K8sSvcAcctAuthn) ValidateK8sJwt(jwt string) (string, error) { +func (authn *K8sSvcAcctAuthn) ValidateK8sJwt(jwt string) ([]string, error) { resp, err := authn.reviewServiceAccountAtK8sAPIServer(authn.apiServerAddr, authn.apiServerCert, authn.reviewerSvcAcct, jwt) if err != nil { - return "", fmt.Errorf("failed to get a token review response: %v", err) + return nil, fmt.Errorf("failed to get a token review response: %v", err) } // Check that the JWT is valid if !(resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusCreated || resp.StatusCode == http.StatusAccepted) { - return "", fmt.Errorf("invalid review response status code %v", resp.StatusCode) + return nil, fmt.Errorf("invalid review response status code %v", resp.StatusCode) } defer resp.Body.Close() bodyBytes, err := ioutil.ReadAll(resp.Body) if err != nil { - return "", fmt.Errorf("failed to read from the response body: %v", err) + return nil, fmt.Errorf("failed to read from the response body: %v", err) } tokenReview := &k8sauth.TokenReview{} err = json.Unmarshal(bodyBytes, tokenReview) if err != nil { - return "", fmt.Errorf("unmarshal response body returns an error: %v", err) + return nil, fmt.Errorf("unmarshal response body returns an error: %v", err) } if tokenReview.Status.Error != "" { - return "", fmt.Errorf("the service account authentication returns an error: %v" + tokenReview.Status.Error) + return nil,
fmt.Errorf("the service account authentication returns an error: %v", tokenReview.Status.Error) } // An example SA token: // {"alg":"RS256","typ":"JWT"} @@ -145,7 +145,7 @@ func (authn *K8sSvcAcctAuthn) ValidateK8sJwt(jwt string) (string, error) { // } if !tokenReview.Status.Authenticated { - return "", fmt.Errorf("the token is not authenticated") + return nil, fmt.Errorf("the token is not authenticated") } inServiceAccountGroup := false for _, group := range tokenReview.Status.User.Groups { @@ -155,16 +155,16 @@ func (authn *K8sSvcAcctAuthn) ValidateK8sJwt(jwt string) (string, error) { } } if !inServiceAccountGroup { - return "", fmt.Errorf("the token is not a service account") + return nil, fmt.Errorf("the token is not a service account") } // "username" is in the form of system:serviceaccount:{namespace}:{service account name}", // e.g., "username":"system:serviceaccount:default:example-pod-sa" subStrings := strings.Split(tokenReview.Status.User.Username, ":") if len(subStrings) != 4 { - return "", fmt.Errorf("invalid username field in the token review result") + return nil, fmt.Errorf("invalid username field in the token review result") } namespace := subStrings[2] saName := subStrings[3] - return namespace + ":" + saName, nil + return []string{namespace, saName}, nil } diff --git a/security/pkg/nodeagent/cache/secretcache.go b/security/pkg/nodeagent/cache/secretcache.go index 4e5e03e4bc09..143a3e835360 100644 --- a/security/pkg/nodeagent/cache/secretcache.go +++ b/security/pkg/nodeagent/cache/secretcache.go @@ -22,11 +22,15 @@ import ( "encoding/json" "errors" "fmt" + "math/rand" "strings" "sync" "sync/atomic" "time" + "github.com/gogo/status" + "google.golang.org/grpc/codes" + "istio.io/istio/pkg/log" "istio.io/istio/security/pkg/nodeagent/model" "istio.io/istio/security/pkg/nodeagent/plugin" @@ -49,6 +53,12 @@ const ( // identityTemplate is the format template of identity in the CSR request.
identityTemplate = "spiffe://%s/ns/%s/sa/%s" + + // For REST APIs between envoy->nodeagent, default value of 1s is used. + envoyDefaultTimeoutInMilliSec = 1000 + + // initialBackOffIntervalInMilliSec is the initial backoff time interval when hitting non-retryable error in CSR request. + initialBackOffIntervalInMilliSec = 50 ) type k8sJwtPayload struct { @@ -447,10 +457,33 @@ func (sc *SecretCache) generateSecret(ctx context.Context, token, resourceName s return nil, err } - certChainPEM, err := sc.fetcher.CaClient.CSRSign(ctx, csrPEM, exchangedToken, int64(sc.configOptions.SecretTTL.Seconds())) - if err != nil { - log.Errorf("Failed to sign cert for %q: %v", resourceName, err) - return nil, err + startTime := time.Now() + retry := 0 + backOffInMilliSec := initialBackOffIntervalInMilliSec + var certChainPEM []string + for true { + certChainPEM, err = sc.fetcher.CaClient.CSRSign(ctx, csrPEM, exchangedToken, int64(sc.configOptions.SecretTTL.Seconds())) + if err == nil { + break + } + + // If non-retryable error, fail the request by returning err + if !isRetryableErr(status.Code(err)) { + log.Errorf("CSR for %q hit non-retryable error %v", resourceName, err) + return nil, err + } + + // If reach envoy timeout, fail the request by returning err + if startTime.Add(time.Millisecond * envoyDefaultTimeoutInMilliSec).Before(time.Now()) { + log.Errorf("CSR retry timeout for %q: %v", resourceName, err) + return nil, err + } + + retry++ + backOffInMilliSec = retry * backOffInMilliSec + randomTime := rand.Intn(initialBackOffIntervalInMilliSec) + time.Sleep(time.Duration(backOffInMilliSec+randomTime) * time.Millisecond) + log.Warnf("Failed to sign cert for %q: %v, will retry in %d millisec", resourceName, err, backOffInMilliSec) } certChain := []byte{} @@ -528,3 +561,11 @@ func constructCSRHostName(trustDomain, token string) (string, error) { return fmt.Sprintf(identityTemplate, domain, ns, sa), nil } + +func isRetryableErr(c codes.Code) bool { + switch c { + case codes.Canceled, 
codes.DeadlineExceeded, codes.ResourceExhausted, codes.Aborted, codes.Internal, codes.Unavailable: + return true + } + return false +} diff --git a/security/pkg/nodeagent/cache/secretcache_test.go b/security/pkg/nodeagent/cache/secretcache_test.go index f532ef111624..c971ace75889 100644 --- a/security/pkg/nodeagent/cache/secretcache_test.go +++ b/security/pkg/nodeagent/cache/secretcache_test.go @@ -24,6 +24,9 @@ import ( "testing" "time" + "google.golang.org/grpc/codes" + "google.golang.org/grpc/status" + "istio.io/istio/security/pkg/nodeagent/model" "istio.io/istio/security/pkg/nodeagent/secretfetcher" @@ -410,9 +413,13 @@ func newMockCAClient() *mockCAClient { func (c *mockCAClient) CSRSign(ctx context.Context, csrPEM []byte, subjectID string, certValidTTLInSec int64) ([]string /*PEM-encoded certificate chain*/, error) { - atomic.AddUint64(&c.signInvokeCount, 1) + if atomic.LoadUint64(&c.signInvokeCount) == 0 { + atomic.AddUint64(&c.signInvokeCount, 1) + return nil, status.Error(codes.Internal, "some internal error") + } if atomic.LoadUint64(&c.signInvokeCount) == 1 { + atomic.AddUint64(&c.signInvokeCount, 1) return mockCertChain1st, nil } diff --git a/security/pkg/nodeagent/caclient/providers/google/client.go b/security/pkg/nodeagent/caclient/providers/google/client.go index 2fb28f4cd0b9..5f509c32598c 100644 --- a/security/pkg/nodeagent/caclient/providers/google/client.go +++ b/security/pkg/nodeagent/caclient/providers/google/client.go @@ -21,6 +21,7 @@ import ( "fmt" "os" "strconv" + "strings" "google.golang.org/grpc" "google.golang.org/grpc/credentials" @@ -33,7 +34,10 @@ import ( var usePodDefaultFlag = false -const podIdentityFlag = "POD_IDENTITY" +const ( + podIdentityFlag = "POD_IDENTITY" + bearerTokenPrefix = "Bearer " +) type googleCAClient struct { caEndpoint string @@ -83,6 +87,11 @@ func (cl *googleCAClient) CSRSign(ctx context.Context, csrPEM []byte, token stri ValidityDuration: certValidTTLInSec, } + // If the token doesn't have "Bearer " prefix, 
add it. + if !strings.HasPrefix(token, bearerTokenPrefix) { + token = bearerTokenPrefix + token + } + + ctx = metadata.NewOutgoingContext(ctx, metadata.Pairs("Authorization", token)) var resp *gcapb.IstioCertificateResponse diff --git a/security/pkg/nodeagent/caclient/providers/vault/client.go b/security/pkg/nodeagent/caclient/providers/vault/client.go index d1e3240aa19c..b83f34173fd5 100644 --- a/security/pkg/nodeagent/caclient/providers/vault/client.go +++ b/security/pkg/nodeagent/caclient/providers/vault/client.go @@ -214,14 +214,14 @@ func signCsrByVault(client *api.Client, csrSigningPath string, certTTLInSec int6 return nil, fmt.Errorf("the certificate chain in the CSR response is of unexpected format") } var certChain []string - certChain = append(certChain, cert) + certChain = append(certChain, cert+"\n") for idx, c := range chain { _, ok := c.(string) if !ok { log.Errorf("the certificate in the certificate chain %v is not a string", idx) return nil, fmt.Errorf("the certificate in the certificate chain %v is not a string", idx) } - certChain = append(certChain, c.(string)) + certChain = append(certChain, c.(string)+"\n") } return certChain, nil diff --git a/security/pkg/nodeagent/caclient/providers/vault/client_test.go b/security/pkg/nodeagent/caclient/providers/vault/client_test.go index 8c4c59838f79..27a05c46ac1f 100644 --- a/security/pkg/nodeagent/caclient/providers/vault/client_test.go +++ b/security/pkg/nodeagent/caclient/providers/vault/client_test.go @@ -113,7 +113,7 @@ SQYzPWVk89gu6nKV+fS2pA9C8dAnYOzVu9XXc+PGlcIhjnuS+/P74hN5D3aIGljW "ONNfuN8hrIDl95vJjhUlE-O-_cx8qWtXNdqJlMje1SsiPCL4uq70OepG_I4aSzC2o8aD" + "tlQ" - fakeCert = []string{"fake-certificate", "fake-ca1", "fake-ca2"} + fakeCert = []string{"fake-certificate\n", "fake-ca1\n", "fake-ca2\n"} vaultNonTLSAddr = "http://35.247.15.29:8200" vaultTLSAddr = "https://35.233.249.249:8200" ) diff --git a/security/pkg/nodeagent/sds/server.go b/security/pkg/nodeagent/sds/server.go index
eb340d105ad4..9147d727f0df 100644 --- a/security/pkg/nodeagent/sds/server.go +++ b/security/pkg/nodeagent/sds/server.go @@ -25,7 +25,6 @@ import ( "istio.io/istio/pkg/log" "istio.io/istio/security/pkg/nodeagent/cache" "istio.io/istio/security/pkg/nodeagent/plugin" - iamclient "istio.io/istio/security/pkg/nodeagent/plugin/providers/google" "istio.io/istio/security/pkg/nodeagent/plugin/providers/google/stsclient" ) @@ -140,7 +139,6 @@ func (s *Server) Stop() { // NewPlugins returns a slice of default Plugins. func NewPlugins(in []string) []plugin.Plugin { var availablePlugins = map[string]plugin.Plugin{ - plugin.GoogleIAM: iamclient.NewPlugin(), plugin.GoogleTokenExchange: stsclient.NewPlugin(), } var plugins []plugin.Plugin @@ -163,11 +161,13 @@ func (s *Server) initWorkloadSdsService(options *Options) error { } go func() { - if err = s.grpcWorkloadServer.Serve(s.grpcWorkloadListener); err != nil { - log.Errorf("SDS grpc server for workload proxies failed to start: %v", err) + for { + // Retry if Serve() fails + log.Info("Start SDS grpc server") + if err = s.grpcWorkloadServer.Serve(s.grpcWorkloadListener); err != nil { + log.Errorf("SDS grpc server for workload proxies failed to start: %v", err) + } } - - log.Info("SDS grpc server for workload proxies started") }() return nil diff --git a/security/pkg/nodeagent/secrets/secretfileserver.go b/security/pkg/nodeagent/secrets/secretfileserver.go deleted file mode 100644 index 8bbf7e5ea658..000000000000 --- a/security/pkg/nodeagent/secrets/secretfileserver.go +++ /dev/null @@ -1,58 +0,0 @@ -// Copyright 2017 Istio Authors -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package secrets - -import ( - "fmt" - "io/ioutil" - "os" - "path" - - "istio.io/istio/security/pkg/pki/util" -) - -const ( - // KeyFilePermission is the permission bits for private key file. - KeyFilePermission = 0600 - - // CertFilePermission is the permission bits for certificate file. - CertFilePermission = 0644 -) - -// SecretFileServer is an implementation of SecretServer that writes the key/cert into file system. -type SecretFileServer struct { - rootDir string -} - -// Put writes the specified key and cert to the files. -func (sf *SecretFileServer) Put(serviceAccount string, keycert util.KeyCertBundle) error { - _, priv, cert, root := keycert.GetAllPem() - dir := path.Join(sf.rootDir, serviceAccount) - if _, err := os.Stat(dir); os.IsNotExist(err) { - if err := os.Mkdir(dir, 0700); err != nil { - return fmt.Errorf("failed to create directory for %v, err %v", serviceAccount, err) - } - } - kpath := path.Join(dir, "key.pem") - if err := ioutil.WriteFile(kpath, priv, KeyFilePermission); err != nil { - return err - } - cpath := path.Join(dir, "cert-chain.pem") - if err := ioutil.WriteFile(cpath, cert, CertFilePermission); err != nil { - return err - } - rpath := path.Join(dir, "root-cert.pem") - return ioutil.WriteFile(rpath, root, CertFilePermission) -} diff --git a/security/pkg/nodeagent/secrets/secretserver.go b/security/pkg/nodeagent/secrets/secretserver.go deleted file mode 100644 index 2930d34e6c4c..000000000000 --- a/security/pkg/nodeagent/secrets/secretserver.go +++ /dev/null @@ -1,58 +0,0 @@ -// Copyright 2017 Istio Authors -// -// 
Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package secrets - -import ( - "fmt" - - "istio.io/istio/security/pkg/pki/util" -) - -const ( - // SecretFile the key/cert to the workload through file. - SecretFile SecretServerMode = iota // 0 - // SecretDiscoveryServiceAPI the key/cert to the workload through SDS API. - SecretDiscoveryServiceAPI // 1 -) - -// SecretServerMode is the mode SecretServer runs. -type SecretServerMode int - -// SecretServer is for implementing the communication from the node agent to the workload. -type SecretServer interface { - // Put stores the key cert bundle with associated workload identity. - Put(serviceAccount string, bundle util.KeyCertBundle) error -} - -// Config contains the SecretServer configuration. -type Config struct { - // Mode specifies how the node agent communications to workload. - Mode SecretServerMode - - // SecretDirectory specifies the root directory storing the key cert files, only for file mode. - SecretDirectory string -} - -// NewSecretServer instantiates a SecretServer according to the configuration. 
-func NewSecretServer(cfg *Config) (SecretServer, error) { - switch cfg.Mode { - case SecretFile: - return &SecretFileServer{cfg.SecretDirectory}, nil - case SecretDiscoveryServiceAPI: - return &SDSServer{}, nil - default: - return nil, fmt.Errorf("mode: %d is not supported", cfg.Mode) - } -} diff --git a/security/pkg/nodeagent/secrets/secretserver_test.go b/security/pkg/nodeagent/secrets/secretserver_test.go deleted file mode 100644 index 95c0fcd30c08..000000000000 --- a/security/pkg/nodeagent/secrets/secretserver_test.go +++ /dev/null @@ -1,132 +0,0 @@ -// Copyright 2018 Istio Authors -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package secrets - -import ( - "bytes" - "fmt" - "io/ioutil" - "os" - "path" - "testing" - "time" - - "istio.io/istio/security/pkg/pki/util" -) - -// TODO(incfly): put this in a testing library. 
-func createBundle(t *testing.T, host string) util.KeyCertBundle { - rootCAOpts := util.CertOptions{ - Host: host, - IsCA: true, - IsSelfSigned: true, - TTL: time.Hour, - Org: "Root CA", - RSAKeySize: 2048, - } - rootCertBytes, rootKeyBytes, err := util.GenCertKeyFromOptions(rootCAOpts) - if err != nil { - t.Errorf("failed to genecreate key for %v", host) - } - bundle, err := util.NewVerifiedKeyCertBundleFromPem( - rootCertBytes, rootKeyBytes, rootCertBytes, rootCertBytes) - if err != nil { - t.Errorf("failed to create key cert bundle for %v", host) - } - return bundle -} - -func setupTempDir(t *testing.T) (string, func()) { - t.Helper() - path, err := ioutil.TempDir("./", t.Name()) - if err != nil { - t.Errorf("failed to create temp dir for test %v err %v", t.Name(), err) - } - return path, func() { - _ = os.RemoveAll(path) - } -} - -func TestNewSecretServer_FileMode(t *testing.T) { - dir, cleanup := setupTempDir(t) - defer cleanup() - ss, err := NewSecretServer(&Config{Mode: SecretFile, SecretDirectory: dir}) - if err != nil { - t.Errorf("failed to create file mode secret server err (%v)", err) - } - bundleMap := map[string]util.KeyCertBundle{} - hosts := []string{"sa1", "sa2"} - for _, host := range hosts { - bundleMap[host] = createBundle(t, host) - } - for _, host := range hosts { - b := bundleMap[host] - if err := ss.Put(host, b); err != nil { - t.Errorf("failed to save secret for %v, err %v", host, err) - } - } - - // Verify each identity has correct key certs saved in the file. 
- for _, host := range hosts { - b := bundleMap[host] - _, key, cert, root := b.GetAllPem() - - keyBytes, err := ioutil.ReadFile(path.Join(dir, host, "key.pem")) - if err != nil { - t.Errorf("failed to read key file for %v, error %v", host, err) - } - if !bytes.Equal(key, keyBytes) { - t.Errorf("unexpecte key for %v, want\n%v\ngot\n%v", host, string(key), string(keyBytes)) - } - - certBytes, err := ioutil.ReadFile(path.Join(dir, host, "cert-chain.pem")) - if err != nil { - t.Errorf("failed to read cert file for %v, error %v", host, err) - } - if !bytes.Equal(cert, certBytes) { - t.Errorf("unexpecte cert for %v, want\n%v\ngot\n%v", host, string(cert), string(certBytes)) - } - - rootBytes, err := ioutil.ReadFile(path.Join(dir, host, "root-cert.pem")) - if err != nil { - t.Errorf("failed to read root file for %v, error %v", host, err) - } - if !bytes.Equal(root, rootBytes) { - t.Errorf("unexpecte root for %v, want\n%v\ngot\n%v", host, string(root), string(rootBytes)) - } - } -} - -func TestNewSecretServer_WorkloadAPI(t *testing.T) { - ss, err := NewSecretServer(&Config{Mode: SecretDiscoveryServiceAPI}) - if err != nil { - t.Errorf("failed to create SDS mode secret server err (%v)", err) - } - if ss == nil { - t.Errorf("secretServer should not be nil") - } -} - -func TestNewSecretServer_Unsupported(t *testing.T) { - actual, err := NewSecretServer(&Config{Mode: SecretDiscoveryServiceAPI + 1}) - expectedErr := fmt.Errorf("mode: 2 is not supported") - if err == nil || err.Error() != expectedErr.Error() { - t.Errorf("error message mismatch got %v want %v", err, expectedErr) - } - - if actual != nil { - t.Errorf("server should be nil") - } -} diff --git a/security/pkg/nodeagent/secrets/server.go b/security/pkg/nodeagent/secrets/server.go deleted file mode 100644 index af94f97ce5a6..000000000000 --- a/security/pkg/nodeagent/secrets/server.go +++ /dev/null @@ -1,220 +0,0 @@ -// Copyright 2018 Istio Authors -// -// Licensed under the Apache License, Version 2.0 (the 
"License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -package secrets - -import ( - "fmt" - "net" - "os" - "sync" - "time" - - api "github.com/envoyproxy/go-control-plane/envoy/api/v2" - "github.com/envoyproxy/go-control-plane/envoy/api/v2/auth" - "github.com/envoyproxy/go-control-plane/envoy/api/v2/core" - sds "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v2" - "github.com/gogo/protobuf/proto" - "github.com/gogo/protobuf/types" - "golang.org/x/net/context" - "google.golang.org/grpc" - "google.golang.org/grpc/codes" - "google.golang.org/grpc/status" - - "istio.io/istio/pkg/log" - "istio.io/istio/security/pkg/pki/util" -) - -// SDSServer implements api.SecretDiscoveryServiceServer that listens on a -// list of Unix Domain Sockets. -type SDSServer struct { - // Stores the certificated chain in the memory protected by certificateChainGuard - certificateChain []byte - - // Stores the private key in the memory protected by privateKeyGuard - privateKey []byte - - // Read/Write mutex for certificateChain - certificateChainGuard sync.RWMutex - - // Read/Write mutex for privateKey - privateKeyGuard sync.RWMutex - - // Specifies a map of Unix Domain Socket paths and the server listens on. - // Each UDS path identifies the identity for which the workload will - // request X.509 key/cert from this server. This path should only be - // accessible by such workload. 
- udsServerMap map[string]*grpc.Server - - // Mutex for udsServerMap - udsServerMapGuard sync.Mutex - - // current certificate chain and private key version number - version string -} - -const ( - // SecretTypeURL defines the type URL for Envoy secret proto. - SecretTypeURL = "type.googleapis.com/envoy.api.v2.auth.Secret" - - // SecretName defines the type of the secrets to fetch from the SDS server. - SecretName = "SPKI" -) - -// SetServiceIdentityCert sets the service identity certificate into the memory. -func (s *SDSServer) SetServiceIdentityCert(content []byte) error { - s.certificateChainGuard.Lock() - s.certificateChain = content - s.version = fmt.Sprintf("%v", time.Now().UnixNano()/int64(time.Millisecond)) - s.certificateChainGuard.Unlock() - return nil -} - -// SetServiceIdentityPrivateKey sets the service identity private key into the memory. -func (s *SDSServer) SetServiceIdentityPrivateKey(content []byte) error { - s.privateKeyGuard.Lock() - s.privateKey = content - s.version = fmt.Sprintf("%v", time.Now().UnixNano()/int64(time.Millisecond)) - s.privateKeyGuard.Unlock() - return nil -} - -// Put stores the KeyCertBundle for a specific service account. -func (s *SDSServer) Put(serviceAccount string, b util.KeyCertBundle) error { - return nil -} - -// GetTLSCertificate generates the X.509 key/cert for the workload identity -// derived from udsPath, which is where the FetchSecrets grpc request is -// received. 
-// SecretServer implementations could have different implementation -func (s *SDSServer) GetTLSCertificate() (*auth.TlsCertificate, error) { - s.certificateChainGuard.RLock() - s.privateKeyGuard.RLock() - - tlsSecret := &auth.TlsCertificate{ - CertificateChain: &core.DataSource{ - Specifier: &core.DataSource_InlineBytes{InlineBytes: s.certificateChain}, - }, - PrivateKey: &core.DataSource{ - Specifier: &core.DataSource_InlineBytes{InlineBytes: s.privateKey}, - }, - } - - s.certificateChainGuard.RUnlock() - s.privateKeyGuard.RUnlock() - return tlsSecret, nil -} - -// FetchSecrets fetches the X.509 key/cert for a given workload whose identity -// can be derived from the UDS path where this call is received. -func (s *SDSServer) FetchSecrets(ctx context.Context, request *api.DiscoveryRequest) (*api.DiscoveryResponse, error) { - tlsCertificate, err := s.GetTLSCertificate() - if err != nil { - return nil, status.Errorf(codes.Internal, "failed to read TLS certificate (%v)", err) - } - - resources := make([]types.Any, 1) - secret := &auth.Secret{ - Name: SecretName, - Type: &auth.Secret_TlsCertificate{ - TlsCertificate: tlsCertificate, - }, - } - data, err := proto.Marshal(secret) - if err != nil { - errMessage := fmt.Sprintf("Generates invalid secret (%v)", err) - log.Errorf(errMessage) - return nil, status.Errorf(codes.Internal, errMessage) - } - resources[0] = types.Any{ - TypeUrl: SecretTypeURL, - Value: data, - } - - // TODO(jaebong) for now we are using timestamp in miliseconds. It needs to be updated once we have a new design - response := &api.DiscoveryResponse{ - Resources: resources, - TypeUrl: SecretTypeURL, - VersionInfo: s.version, - } - - return response, nil -} - -// StreamSecrets is not supported. -func (s *SDSServer) StreamSecrets(stream sds.SecretDiscoveryService_StreamSecretsServer) error { - errMessage := "StreamSecrets is not implemented." 
- log.Error(errMessage) - return status.Errorf(codes.Unimplemented, errMessage) -} - -// NewSDSServer creates the SDSServer that registers -// SecretDiscoveryServiceServer, a gRPC server. -func NewSDSServer() *SDSServer { - s := &SDSServer{ - udsServerMap: map[string]*grpc.Server{}, - version: fmt.Sprintf("%v", time.Now().UnixNano()/int64(time.Millisecond)), - } - - return s -} - -// RegisterUdsPath registers a path for Unix Domain Socket and has -// SDSServer's gRPC server listen on it. -func (s *SDSServer) RegisterUdsPath(udsPath string) error { - s.udsServerMapGuard.Lock() - defer s.udsServerMapGuard.Unlock() - - _, err := os.Stat(udsPath) - if err == nil { - return fmt.Errorf("UDS path %v already exists", udsPath) - } - listener, err := net.Listen("unix", udsPath) - if err != nil { - return fmt.Errorf("failed to listen on %v", err) - } - - var opts []grpc.ServerOption - udsServer := grpc.NewServer(opts...) - sds.RegisterSecretDiscoveryServiceServer(udsServer, s) - s.udsServerMap[udsPath] = udsServer - - // grpcServer.Serve() is a blocking call, so run it in a goroutine. - go func() { - log.Infof("Starting GRPC server on UDS path: %s", udsPath) - err := udsServer.Serve(listener) - // grpcServer.Serve() always returns a non-nil error. 
- log.Warnf("GRPC server returns an error: %v", err) - }() - - return nil -} - -// DeregisterUdsPath closes and removes the grpcServer instance serving UDS -func (s *SDSServer) DeregisterUdsPath(udsPath string) error { - s.udsServerMapGuard.Lock() - defer s.udsServerMapGuard.Unlock() - - udsServer, ok := s.udsServerMap[udsPath] - if !ok { - return fmt.Errorf("udsPath is not registred: %s", udsPath) - } - - udsServer.GracefulStop() - delete(s.udsServerMap, udsPath) - log.Infof("Stopped the GRPC server on UDS path: %s", udsPath) - - return nil -} diff --git a/security/pkg/nodeagent/secrets/server_test.go b/security/pkg/nodeagent/secrets/server_test.go deleted file mode 100644 index a8d75fad99a9..000000000000 --- a/security/pkg/nodeagent/secrets/server_test.go +++ /dev/null @@ -1,152 +0,0 @@ -// Copyright 2018 Istio Authors -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
- -package secrets - -import ( - "fmt" - "io/ioutil" - "net" - "path/filepath" - "testing" - "time" - - api "github.com/envoyproxy/go-control-plane/envoy/api/v2" - "github.com/envoyproxy/go-control-plane/envoy/api/v2/auth" - sds "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v2" - "github.com/gogo/protobuf/proto" - "golang.org/x/net/context" - "google.golang.org/grpc" -) - -func unixDialer(target string, timeout time.Duration) (net.Conn, error) { - return net.DialTimeout("unix", target, timeout) -} - -func FetchSecrets(t *testing.T, udsPath string) *api.DiscoveryResponse { - var opts []grpc.DialOption - opts = append(opts, grpc.WithInsecure()) - opts = append(opts, grpc.WithDialer(unixDialer)) - conn, err := grpc.Dial(udsPath, opts...) - if err != nil { - t.Fatalf("Failed to connect with server %v", err) - } - defer conn.Close() - - client := sds.NewSecretDiscoveryServiceClient(conn) - response, err := client.FetchSecrets(context.Background(), &api.DiscoveryRequest{}) - if err != nil { - t.Fatalf("Failed fetch secrets %v", err) - } - return response -} - -func VerifySecrets(t *testing.T, response *api.DiscoveryResponse, certificateChain string, privateKey string) { - var secret auth.Secret - resource := response.GetResources()[0] - bytes := resource.Value - - err := proto.Unmarshal(bytes, &secret) - if err != nil { - t.Fatalf("failed parse the response %v", err) - } - if SecretTypeURL != response.GetTypeUrl() || SecretName != secret.GetName() { - t.Fatalf("Unexpected response. Expected: type %s, name %s; Actual: type %s, name %s", - SecretTypeURL, SecretName, response.GetTypeUrl(), secret.GetName()) - } - - if certificateChain != string(secret.GetTlsCertificate().CertificateChain.GetInlineBytes()) { - t.Errorf("Certificates mismatch. 
Expected: %v, Got: %v", - certificateChain, string(secret.GetTlsCertificate().CertificateChain.GetInlineBytes())) - } - - if privateKey != string(secret.GetTlsCertificate().PrivateKey.GetInlineBytes()) { - t.Errorf("Private key mismatch. Expected: %v, Got: %v", - privateKey, string(secret.GetTlsCertificate().PrivateKey.GetInlineBytes())) - } -} - -func TestSingleUdsPath(t *testing.T) { - server := NewSDSServer() - _ = server.SetServiceIdentityCert([]byte("certificate")) - _ = server.SetServiceIdentityPrivateKey([]byte("private key")) - - tmpdir, _ := ioutil.TempDir("", "uds") - udsPath := filepath.Join(tmpdir, "test_path") - - if err := server.RegisterUdsPath(udsPath); err != nil { - t.Fatalf("Unexpected Error: %v", err) - } - - VerifySecrets(t, FetchSecrets(t, udsPath), "certificate", "private key") - - if err := server.DeregisterUdsPath(udsPath); err != nil { - t.Errorf("failed to deregister udsPath: %s (error: %v)", udsPath, err) - } -} - -func TestMultipleUdsPaths(t *testing.T) { - server := NewSDSServer() - _ = server.SetServiceIdentityCert([]byte("certificate")) - _ = server.SetServiceIdentityPrivateKey([]byte("private key")) - - tmpdir, _ := ioutil.TempDir("", "uds") - udsPath1 := filepath.Join(tmpdir, "test_path1") - udsPath2 := filepath.Join(tmpdir, "test_path2") - udsPath3 := filepath.Join(tmpdir, "test_path3") - - err1 := server.RegisterUdsPath(udsPath1) - err2 := server.RegisterUdsPath(udsPath2) - err3 := server.RegisterUdsPath(udsPath3) - if err1 != nil || err2 != nil || err3 != nil { - t.Fatalf("Unexpected Error: %v %v %v", err1, err2, err3) - } - - VerifySecrets(t, FetchSecrets(t, udsPath1), "certificate", "private key") - VerifySecrets(t, FetchSecrets(t, udsPath2), "certificate", "private key") - VerifySecrets(t, FetchSecrets(t, udsPath3), "certificate", "private key") - - if err := server.DeregisterUdsPath(udsPath1); err != nil { - t.Errorf("failed to deregister udsPath: %s (error: %v)", udsPath1, err) - } - - if err := 
server.DeregisterUdsPath(udsPath2); err != nil { - t.Errorf("failed to deregister udsPath: %s (error: %v)", udsPath2, err) - } - - if err := server.DeregisterUdsPath(udsPath3); err != nil { - t.Errorf("failed to deregister udsPath: %s (error: %v)", udsPath3, err) - } - -} - -func TestDuplicateUdsPaths(t *testing.T) { - server := NewSDSServer() - _ = server.SetServiceIdentityCert([]byte("certificate")) - _ = server.SetServiceIdentityPrivateKey([]byte("private key")) - - tmpdir, _ := ioutil.TempDir("", "uds") - udsPath := filepath.Join(tmpdir, "test_path") - - _ = server.RegisterUdsPath(udsPath) - err := server.RegisterUdsPath(udsPath) - expectedErr := fmt.Sprintf("UDS path %v already exists", udsPath) - if err == nil || err.Error() != expectedErr { - t.Fatalf("Expect error: %v, Actual error: %v", expectedErr, err) - } - - if err := server.DeregisterUdsPath(udsPath); err != nil { - t.Errorf("failed to deregister udsPath: %s (error: %v)", udsPath, err) - } -} diff --git a/security/pkg/pki/util/san.go b/security/pkg/pki/util/san.go index 91fc7a32ea12..b20f6d69d46a 100644 --- a/security/pkg/pki/util/san.go +++ b/security/pkg/pki/util/san.go @@ -20,17 +20,14 @@ import ( "fmt" "net" "strings" + + "istio.io/istio/pkg/spiffe" ) // IdentityType represents type of an identity. This is used to properly encode // an identity into a SAN extension. type IdentityType int -const ( - // URIScheme is the URI scheme for Istio identities. - URIScheme string = "spiffe" -) - const ( // TypeDNS represents a DNS name. 
TypeDNS IdentityType = iota @@ -85,7 +82,7 @@ func BuildSubjectAltNameExtension(hosts string) (*pkix.Extension, error) { ip = eip } ids = append(ids, Identity{Type: TypeIP, Value: ip}) - } else if strings.HasPrefix(host, URIScheme+":") { + } else if strings.HasPrefix(host, spiffe.Scheme+":") { ids = append(ids, Identity{Type: TypeURI, Value: []byte(host)}) } else { ids = append(ids, Identity{Type: TypeDNS, Value: []byte(host)}) @@ -195,15 +192,6 @@ func ExtractIDs(exts []pkix.Extension) ([]string, error) { return ids, nil } -// GenSanURI returns the formatted uri(SPIFFEE format for now) for the certificate. -func GenSanURI(ns, serviceAccount string) (string, error) { - if ns == "" || serviceAccount == "" { - return "", fmt.Errorf( - "namespace or service account can't be empty ns=%v serviceAccount=%v", ns, serviceAccount) - } - return fmt.Sprintf("spiffe://cluster.local/ns/%s/sa/%s", ns, serviceAccount), nil -} - func generateReversedMap(m map[IdentityType]int) map[int]IdentityType { reversed := make(map[int]IdentityType) for key, value := range m { diff --git a/security/pkg/pki/util/san_test.go b/security/pkg/pki/util/san_test.go index 751ad836c4f6..e521250865df 100644 --- a/security/pkg/pki/util/san_test.go +++ b/security/pkg/pki/util/san_test.go @@ -19,7 +19,6 @@ import ( "encoding/asn1" "net" "reflect" - "strings" "testing" ) @@ -228,44 +227,3 @@ func TestExtractIDs(t *testing.T) { } } } - -func TestGenSanURI(t *testing.T) { - testCases := []struct { - namespace string - serviceAccount string - expectedError string - expectedURI string - }{ - { - serviceAccount: "sa", - expectedError: "namespace or service account can't be empty", - }, - { - namespace: "ns", - expectedError: "namespace or service account can't be empty", - }, - { - namespace: "namespace-foo", - serviceAccount: "service-bar", - expectedURI: "spiffe://cluster.local/ns/namespace-foo/sa/service-bar", - }, - } - for id, tc := range testCases { - got, err := GenSanURI(tc.namespace, tc.serviceAccount) 
- if tc.expectedError == "" && err != nil { - t.Errorf("teste case [%v] failed, error %v", id, tc) - } - if tc.expectedError != "" { - if err == nil { - t.Errorf("want get error %v, got nil", tc.expectedError) - } else if !strings.Contains(err.Error(), tc.expectedError) { - t.Errorf("want error contains %v, got error %v", tc.expectedError, err) - } - continue - } - if got != tc.expectedURI { - t.Errorf("unexpected subject name, want %v, got %v", tc.expectedURI, got) - } - - } -} diff --git a/security/pkg/platform/gcp.go b/security/pkg/platform/gcp.go index f25c67be4799..e9c1354cf7cc 100644 --- a/security/pkg/platform/gcp.go +++ b/security/pkg/platform/gcp.go @@ -17,6 +17,8 @@ package platform import ( "fmt" + "istio.io/istio/pkg/spiffe" + "cloud.google.com/go/compute/metadata" "golang.org/x/net/context" "google.golang.org/grpc" @@ -24,7 +26,6 @@ import ( "istio.io/istio/pkg/log" cred "istio.io/istio/security/pkg/credential" - "istio.io/istio/security/pkg/pki/util" ) const ( @@ -95,7 +96,7 @@ func (ci *GcpClientImpl) GetServiceIdentity() (string, error) { log.Errorf("Failed to get service account with error: %v", err) return "", err } - return util.GenSanURI("default", serviceAccount) + return spiffe.GenSpiffeURI("default", serviceAccount) } // GetAgentCredential returns the GCP JWT for the serivce account. 
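Several files in this diff swap the removed `util.GenSanURI` for `spiffe.GenSpiffeURI`. For reference, the behavior the removed helper (and its deleted test above) pinned down can be sketched as follows, assuming, as the removed code did, a fixed `cluster.local` trust domain (the function name here is illustrative):

```go
package main

import "fmt"

// scheme mirrors spiffe.Scheme, the constant this patch introduces to replace
// the removed util.URIScheme.
const scheme = "spiffe"

// genSpiffeURI sketches the helper that replaces util.GenSanURI: it rejects
// empty inputs and formats a SPIFFE ID of the form
// spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>.
func genSpiffeURI(ns, serviceAccount string) (string, error) {
	if ns == "" || serviceAccount == "" {
		return "", fmt.Errorf(
			"namespace or service account can't be empty ns=%v serviceAccount=%v", ns, serviceAccount)
	}
	return fmt.Sprintf("%s://cluster.local/ns/%s/sa/%s", scheme, ns, serviceAccount), nil
}

func main() {
	uri, _ := genSpiffeURI("namespace-foo", "service-bar")
	fmt.Println(uri) // spiffe://cluster.local/ns/namespace-foo/sa/service-bar
}
```

Centralizing this in a `spiffe` package is what lets the later hunks drop the hard-coded `"spiffe://"` prefixes in `onprem.go` and `serviceaccount.go`.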
diff --git a/security/pkg/platform/onprem.go b/security/pkg/platform/onprem.go index 95e48bc977ea..5f03b86fdd94 100644 --- a/security/pkg/platform/onprem.go +++ b/security/pkg/platform/onprem.go @@ -21,6 +21,8 @@ import ( "io/ioutil" "os" + "istio.io/istio/pkg/spiffe" + "google.golang.org/grpc" "google.golang.org/grpc/credentials" @@ -91,7 +93,7 @@ func (ci *OnPremClientImpl) GetServiceIdentity() (string, error) { } if len(serviceIDs) != 1 { for _, s := range serviceIDs { - if strings.HasPrefix(s, "spiffe://") { + if strings.HasPrefix(s, spiffe.Scheme+"://") { return s, nil } } diff --git a/security/pkg/registry/kube/serviceaccount.go b/security/pkg/registry/kube/serviceaccount.go index da618baabadc..2b6da4c066ff 100644 --- a/security/pkg/registry/kube/serviceaccount.go +++ b/security/pkg/registry/kube/serviceaccount.go @@ -15,11 +15,13 @@ package kube import ( - "fmt" "reflect" "time" + "istio.io/istio/pkg/spiffe" + v1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/watch" @@ -27,7 +29,6 @@ import ( "k8s.io/client-go/tools/cache" "istio.io/istio/pkg/log" - "istio.io/istio/security/pkg/pki/util" "istio.io/istio/security/pkg/registry" ) @@ -75,8 +76,7 @@ func (c *ServiceAccountController) Run(stopCh chan struct{}) { } func getSpiffeID(sa *v1.ServiceAccount) string { - // borrowed from security/pkg/k8s/controller/secret.go:generateKeyAndCert() - return fmt.Sprintf("%s://cluster.local/ns/%s/sa/%s", util.URIScheme, sa.GetNamespace(), sa.GetName()) + return spiffe.MustGenSpiffeURI(sa.GetNamespace(), sa.GetName()) } func (c *ServiceAccountController) serviceAccountAdded(obj interface{}) { diff --git a/security/pkg/registry/kube/serviceaccount_test.go b/security/pkg/registry/kube/serviceaccount_test.go index 7e53dd8e87ea..661052c7fc1d 100644 --- a/security/pkg/registry/kube/serviceaccount_test.go +++ b/security/pkg/registry/kube/serviceaccount_test.go @@ -19,6 +19,7 @@ import ( "testing" v1 
"k8s.io/api/core/v1" + meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/client-go/kubernetes/fake" @@ -39,12 +40,14 @@ type serviceAccountPair struct { newSvcAcct *v1.ServiceAccount } +type testStruct struct { + name string + namespace string + expected string +} + func TestSpiffeID(t *testing.T) { - testCases := []struct { - name string - namespace string - expected string - }{ + testCases := []testStruct{ { name: "foo", namespace: "bar", @@ -72,7 +75,7 @@ func TestSpiffeID(t *testing.T) { }, } for _, c := range testCases { - if id := getSpiffeID(createServiceAccount(c.name, c.namespace)); id != c.expected { + if id := getSpiffeID(createServiceAccount(c.name, c.namespace)); c.expected != id { t.Errorf("getSpiffeID(%s, %s): expected %s, got %s", c.name, c.namespace, c.expected, id) } } diff --git a/security/pkg/server/ca/authenticate/kube_jwt.go b/security/pkg/server/ca/authenticate/kube_jwt.go index 57b3727370e5..beb7e062be1a 100644 --- a/security/pkg/server/ca/authenticate/kube_jwt.go +++ b/security/pkg/server/ca/authenticate/kube_jwt.go @@ -23,17 +23,23 @@ import ( "istio.io/istio/security/pkg/k8s/tokenreview" ) +const ( + // identityTemplate is the SPIFFE format template of the identity. + identityTemplate = "spiffe://%s/ns/%s/sa/%s" +) + type tokenReviewClient interface { - ValidateK8sJwt(targetJWT string) (string, error) + ValidateK8sJwt(targetJWT string) ([]string, error) } // KubeJWTAuthenticator authenticates K8s JWTs. type KubeJWTAuthenticator struct { - client tokenReviewClient + client tokenReviewClient + trustDomain string } // NewKubeJWTAuthenticator creates a new kubeJWTAuthenticator. 
-func NewKubeJWTAuthenticator(k8sAPIServerURL, caCertPath, jwtPath string) (*KubeJWTAuthenticator, error) { +func NewKubeJWTAuthenticator(k8sAPIServerURL, caCertPath, jwtPath, trustDomain string) (*KubeJWTAuthenticator, error) { // Read the CA certificate of the k8s apiserver caCert, err := ioutil.ReadFile(caCertPath) if err != nil { @@ -44,11 +50,13 @@ func NewKubeJWTAuthenticator(k8sAPIServerURL, caCertPath, jwtPath string) (*Kube return nil, fmt.Errorf("failed to read Citadel JWT: %v", err) } return &KubeJWTAuthenticator{ - client: tokenreview.NewK8sSvcAcctAuthn(k8sAPIServerURL, caCert, string(reviewerJWT[:])), + client: tokenreview.NewK8sSvcAcctAuthn(k8sAPIServerURL, caCert, string(reviewerJWT[:])), + trustDomain: trustDomain, }, nil } // Authenticate authenticates the call using the K8s JWT from the context. +// The returned Caller.Identities is in SPIFFE format. func (a *KubeJWTAuthenticator) Authenticate(ctx context.Context) (*Caller, error) { targetJWT, err := extractBearerToken(ctx) if err != nil { @@ -58,8 +66,11 @@ func (a *KubeJWTAuthenticator) Authenticate(ctx context.Context) (*Caller, error if err != nil { return nil, fmt.Errorf("failed to validate the JWT: %v", err) } + if len(id) != 2 { + return nil, fmt.Errorf("failed to parse the JWT. 
Validation result length is not 2, but %d", len(id)) + } return &Caller{ AuthSource: AuthSourceIDToken, - Identities: []string{id}, + Identities: []string{fmt.Sprintf(identityTemplate, a.trustDomain, id[0], id[1])}, }, nil } diff --git a/security/pkg/server/ca/authenticate/kube_jwt_test.go b/security/pkg/server/ca/authenticate/kube_jwt_test.go index 41a522f74753..fa6b940241b3 100644 --- a/security/pkg/server/ca/authenticate/kube_jwt_test.go +++ b/security/pkg/server/ca/authenticate/kube_jwt_test.go @@ -28,15 +28,15 @@ import ( ) type mockTokenReviewClient struct { - id string + id []string err error } -func (c mockTokenReviewClient) ValidateK8sJwt(jwt string) (string, error) { - if c.id != "" { +func (c mockTokenReviewClient) ValidateK8sJwt(jwt string) ([]string, error) { + if c.id != nil { return c.id, nil } - return "", c.err + return nil, c.err } func TestNewKubeJWTAuthenticator(t *testing.T) { @@ -45,6 +45,7 @@ func TestNewKubeJWTAuthenticator(t *testing.T) { validJWTPath := filepath.Join(tmpdir, "jwt") caCertFileContent := []byte("CACERT") jwtFileContent := []byte("JWT") + trustDomain := "testdomain.com" url := "https://server/url" if err := ioutil.WriteFile(validCACertPath, caCertFileContent, 0777); err != nil { t.Errorf("Failed to write to testing CA cert file: %v", err) @@ -56,6 +57,7 @@ func TestNewKubeJWTAuthenticator(t *testing.T) { testCases := map[string]struct { caCertPath string jwtPath string + trustDomain string expectedErrMsg string }{ "Invalid CA cert path": { @@ -76,7 +78,7 @@ func TestNewKubeJWTAuthenticator(t *testing.T) { } for id, tc := range testCases { - authenticator, err := NewKubeJWTAuthenticator(url, tc.caCertPath, tc.jwtPath) + authenticator, err := NewKubeJWTAuthenticator(url, tc.caCertPath, tc.jwtPath, trustDomain) if len(tc.expectedErrMsg) > 0 { if err == nil { t.Errorf("Case %s: Succeeded. 
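The new `identityTemplate` logic in `kube_jwt.go` builds the caller's SPIFFE identity from the configured trust domain plus the two-element `[namespace, serviceAccount]` slice that `ValidateK8sJwt` now returns. A minimal sketch of that length check and formatting (the helper name is hypothetical; the template and error text follow the hunk above):

```go
package main

import "fmt"

// identityTemplate matches the constant added to kube_jwt.go.
const identityTemplate = "spiffe://%s/ns/%s/sa/%s"

// callerIdentity reproduces the check added to Authenticate: the token-review
// result must be exactly [namespace, serviceAccount], and the identity is
// formatted under the given trust domain.
func callerIdentity(trustDomain string, id []string) (string, error) {
	if len(id) != 2 {
		return "", fmt.Errorf("failed to parse the JWT. Validation result length is not 2, but %d", len(id))
	}
	return fmt.Sprintf(identityTemplate, trustDomain, id[0], id[1]), nil
}

func main() {
	got, _ := callerIdentity("example.com", []string{"foo", "bar"})
	fmt.Println(got) // spiffe://example.com/ns/foo/sa/bar
}
```

This is also why the test table below gains a "Wrong identity length" case and an `expectedID` of `spiffe://example.com/ns/foo/sa/bar`.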
Error expected: %v", id, err) @@ -89,7 +91,8 @@ func TestNewKubeJWTAuthenticator(t *testing.T) { t.Errorf("Case %s: Unexpected Error: %v", id, err) } expectedAuthenticator := &KubeJWTAuthenticator{ - client: tokenreview.NewK8sSvcAcctAuthn(url, caCertFileContent, string(jwtFileContent[:])), + client: tokenreview.NewK8sSvcAcctAuthn(url, caCertFileContent, string(jwtFileContent[:])), + trustDomain: trustDomain, } if !reflect.DeepEqual(authenticator, expectedAuthenticator) { t.Errorf("Case %q: Unexpected authentication result: want %v but got %v", @@ -101,8 +104,8 @@ func TestNewKubeJWTAuthenticator(t *testing.T) { func TestAuthenticate(t *testing.T) { testCases := map[string]struct { metadata metadata.MD - id string client tokenReviewClient + expectedID string expectedErrMsg string }{ "No bearer token": { @@ -122,9 +125,20 @@ func TestAuthenticate(t *testing.T) { "Bearer bearer-token", }, }, - client: &mockTokenReviewClient{id: "", err: fmt.Errorf("test error")}, + client: &mockTokenReviewClient{id: nil, err: fmt.Errorf("test error")}, expectedErrMsg: "failed to validate the JWT: test error", }, + "Wrong identity length": { + metadata: metadata.MD{ + "random": []string{}, + "authorization": []string{ + "Basic callername", + "Bearer bearer-token", + }, + }, + client: &mockTokenReviewClient{id: []string{"foo"}, err: nil}, + expectedErrMsg: "failed to parse the JWT. 
Validation result length is not 2, but 1", + }, "Successful": { metadata: metadata.MD{ "random": []string{}, @@ -133,8 +147,8 @@ func TestAuthenticate(t *testing.T) { "Bearer bearer-token", }, }, - id: "namespace:serviceaccount", - client: &mockTokenReviewClient{id: "namespace:serviceaccount", err: nil}, + client: &mockTokenReviewClient{id: []string{"foo", "bar"}, err: nil}, + expectedID: "spiffe://example.com/ns/foo/sa/bar", expectedErrMsg: "", }, } @@ -145,7 +159,7 @@ func TestAuthenticate(t *testing.T) { ctx = metadata.NewIncomingContext(ctx, tc.metadata) } - authenticator := &KubeJWTAuthenticator{client: tc.client} + authenticator := &KubeJWTAuthenticator{client: tc.client, trustDomain: "example.com"} actualCaller, err := authenticator.Authenticate(ctx) if len(tc.expectedErrMsg) > 0 { @@ -163,7 +177,7 @@ func TestAuthenticate(t *testing.T) { expectedCaller := &Caller{ AuthSource: AuthSourceIDToken, - Identities: []string{tc.id}, + Identities: []string{tc.expectedID}, } if !reflect.DeepEqual(actualCaller, expectedCaller) { diff --git a/security/pkg/server/ca/server.go b/security/pkg/server/ca/server.go index c72a48efe079..1b78a6ff4ad2 100644 --- a/security/pkg/server/ca/server.go +++ b/security/pkg/server/ca/server.go @@ -181,7 +181,7 @@ func (s *Server) Run() error { } // New creates a new instance of `IstioCAServiceServer`. 
-func New(ca ca.CertificateAuthority, ttl time.Duration, forCA bool, hostlist []string, port int) (*Server, error) { +func New(ca ca.CertificateAuthority, ttl time.Duration, forCA bool, hostlist []string, port int, trustDomain string) (*Server, error) { if len(hostlist) == 0 { return nil, fmt.Errorf("failed to create grpc server hostlist empty") } @@ -191,7 +191,7 @@ func New(ca ca.CertificateAuthority, ttl time.Duration, forCA bool, hostlist []s authenticators := []authenticator{&authenticate.ClientCertAuthenticator{}} log.Info("added client certificate authenticator") - authenticator, err := authenticate.NewKubeJWTAuthenticator(k8sAPIServerURL, caCertPath, jwtPath) + authenticator, err := authenticate.NewKubeJWTAuthenticator(k8sAPIServerURL, caCertPath, jwtPath, trustDomain) if err == nil { authenticators = append(authenticators, authenticator) log.Info("added K8s JWT authenticator") diff --git a/security/pkg/server/ca/server_test.go b/security/pkg/server/ca/server_test.go index b3994e53cb94..24937e3aa382 100644 --- a/security/pkg/server/ca/server_test.go +++ b/security/pkg/server/ca/server_test.go @@ -392,7 +392,7 @@ func TestRun(t *testing.T) { // K8s JWT authenticator is added in k8s env. 
tc.expectedAuthenticatorsLen = tc.expectedAuthenticatorsLen + 1 } - server, err := New(tc.ca, time.Hour, false, tc.hostname, tc.port) + server, err := New(tc.ca, time.Hour, false, tc.hostname, tc.port, "testdomain.com") if err == nil { err = server.Run() } diff --git a/security/tests/integration/kubernetes_utils.go b/security/tests/integration/kubernetes_utils.go index d233b7393ada..86078a94b831 100644 --- a/security/tests/integration/kubernetes_utils.go +++ b/security/tests/integration/kubernetes_utils.go @@ -29,6 +29,7 @@ import ( "k8s.io/client-go/tools/clientcmd" "istio.io/istio/pkg/log" + "istio.io/istio/pkg/spiffe" "istio.io/istio/security/pkg/k8s/controller" "istio.io/istio/security/pkg/pki/util" ) @@ -348,7 +349,7 @@ func ExamineSecret(secret *v1.Secret) error { } } - expectedID, err := util.GenSanURI(secret.GetNamespace(), "default") + expectedID, err := spiffe.GenSpiffeURI(secret.GetNamespace(), "default") if err != nil { return err } diff --git a/tests/e2e/framework/kubernetes.go b/tests/e2e/framework/kubernetes.go index 45c5c691ae67..c9602a5a1c7b 100644 --- a/tests/e2e/framework/kubernetes.go +++ b/tests/e2e/framework/kubernetes.go @@ -30,10 +30,9 @@ import ( "time" "github.com/pkg/errors" - "k8s.io/client-go/kubernetes" - "istio.io/istio/pilot/pkg/serviceregistry/kube" "istio.io/istio/pkg/log" + testKube "istio.io/istio/pkg/test/kube" "istio.io/istio/tests/util" ) @@ -42,6 +41,7 @@ const ( istioInstallDir = "install/kubernetes" nonAuthInstallFile = "istio.yaml" authInstallFile = "istio-auth.yaml" + authSdsInstallFile = "istio-auth-sds.yaml" nonAuthWithoutMCPInstallFile = "istio-mcp.yaml" authWithoutMCPInstallFile = "istio-auth-mcp.yaml" nonAuthInstallFileNamespace = "istio-one-namespace.yaml" @@ -96,6 +96,7 @@ var ( sidecarInjectorHub = flag.String("sidecar_injector_hub", os.Getenv("HUB"), "Sidecar injector hub") sidecarInjectorTag = flag.String("sidecar_injector_tag", os.Getenv("TAG"), "Sidecar injector tag") authEnable = flag.Bool("auth_enable", 
false, "Enable auth") + authSdsEnable = flag.Bool("auth_sds_enable", false, "Enable auth using key/cert distributed through SDS") rbacEnable = flag.Bool("rbac_enable", true, "Enable rbac") localCluster = flag.Bool("use_local_cluster", false, "If true any LoadBalancer type services will be converted to a NodePort service during testing. If running on minikube, this should be set to true") @@ -157,6 +158,7 @@ type KubeInfo struct { localCluster bool namespaceCreated bool AuthEnabled bool + AuthSdsEnabled bool RBACEnabled bool // Istioctl installation @@ -172,19 +174,23 @@ type KubeInfo struct { appPods map[string]*appPodsInfo Clusters map[string]string - KubeConfig string - KubeClient kubernetes.Interface - RemoteKubeConfig string - RemoteKubeClient kubernetes.Interface - RemoteAppManager *AppManager - RemoteIstioctl *Istioctl + KubeConfig string + KubeAccessor *testKube.Accessor + RemoteKubeConfig string + RemoteKubeAccessor *testKube.Accessor + RemoteAppManager *AppManager + RemoteIstioctl *Istioctl } func getClusterWideInstallFile() string { var istioYaml string if *authEnable { if *useMCP { - istioYaml = authInstallFile + if *authSdsEnable { + istioYaml = authSdsInstallFile + } else { + istioYaml = authInstallFile + } } else { istioYaml = authWithoutMCPInstallFile } @@ -234,7 +240,7 @@ func newKubeInfo(tmpDir, runID, baseVersion string) (*KubeInfo, error) { // environment's kubeconfig if an empty string is provided. Therefore in the // default case kubeConfig will not be set. 
var kubeConfig, remoteKubeConfig string - var kubeClient, remoteKubeClient kubernetes.Interface + var remoteKubeAccessor *testKube.Accessor var aRemote *AppManager var remoteI *Istioctl if *multiClusterDir != "" { @@ -253,10 +259,7 @@ func newKubeInfo(tmpDir, runID, baseVersion string) (*KubeInfo, error) { // TODO could change this to continue tests if only a single cluster is in play return nil, err } - if kubeClient, err = kube.CreateInterface(kubeConfig); err != nil { - return nil, err - } - if remoteKubeClient, err = kube.CreateInterface(remoteKubeConfig); err != nil { + if remoteKubeAccessor, err = testKube.NewAccessor(remoteKubeConfig, tmpDir); err != nil { return nil, err } // Create Istioctl for remote using injectConfigMap on remote (not the same as master cluster's) @@ -273,6 +276,11 @@ func newKubeInfo(tmpDir, runID, baseVersion string) (*KubeInfo, error) { a := NewAppManager(tmpDir, *namespace, i, kubeConfig) + kubeAccessor, err := testKube.NewAccessor(kubeConfig, tmpDir) + if err != nil { + return nil, err + } + clusters := make(map[string]string) appPods := make(map[string]*appPodsInfo) clusters[PrimaryCluster] = kubeConfig @@ -284,25 +292,26 @@ func newKubeInfo(tmpDir, runID, baseVersion string) (*KubeInfo, error) { log.Infof("Using release dir: %s", releaseDir) return &KubeInfo{ - Namespace: *namespace, - namespaceCreated: false, - TmpDir: tmpDir, - yamlDir: yamlDir, - localCluster: *localCluster, - Istioctl: i, - RemoteIstioctl: remoteI, - AppManager: a, - RemoteAppManager: aRemote, - AuthEnabled: *authEnable, - RBACEnabled: *rbacEnable, - ReleaseDir: releaseDir, - BaseVersion: baseVersion, - KubeConfig: kubeConfig, - KubeClient: kubeClient, - RemoteKubeConfig: remoteKubeConfig, - RemoteKubeClient: remoteKubeClient, - appPods: appPods, - Clusters: clusters, + Namespace: *namespace, + namespaceCreated: false, + TmpDir: tmpDir, + yamlDir: yamlDir, + localCluster: *localCluster, + Istioctl: i, + RemoteIstioctl: remoteI, + AppManager: a, + 
RemoteAppManager: aRemote, + AuthEnabled: *authEnable, + AuthSdsEnabled: *authSdsEnable, + RBACEnabled: *rbacEnable, + ReleaseDir: releaseDir, + BaseVersion: baseVersion, + KubeConfig: kubeConfig, + KubeAccessor: kubeAccessor, + RemoteKubeConfig: remoteKubeConfig, + RemoteKubeAccessor: remoteKubeAccessor, + appPods: appPods, + Clusters: clusters, }, nil } @@ -746,11 +755,12 @@ func (k *KubeInfo) deployTiller(yamlFileName string) error { // deploy tiller, helm cli is already available if err := util.HelmInit("tiller"); err != nil { - log.Errorf("Failed to init helm tiller ") + log.Errorf("Failed to init helm tiller") return err } - // wait till tiller reaches running - return util.CheckPodRunning("kube-system", "name=tiller", k.KubeConfig) + + // Wait until Helm's Tiller is running + return util.HelmTillerRunning() } var ( @@ -854,9 +864,6 @@ func (k *KubeInfo) deployIstioWithHelm() error { setValue += " --set istiotesting.oneNameSpace=true" } - // CRDs installed ahead of time with 2.9.x - setValue += " --set global.crds=false" - // enable helm test for istio setValue += " --set global.enableHelmTest=true" diff --git a/tests/e2e/framework/multicluster.go b/tests/e2e/framework/multicluster.go index 6d167ee5318e..6c1675ef03ae 100644 --- a/tests/e2e/framework/multicluster.go +++ b/tests/e2e/framework/multicluster.go @@ -58,7 +58,7 @@ func (k *KubeInfo) getEndpointIPForService(svc string) (ip string, err error) { var eps *v1.Endpoints // Wait until endpoint is obtained for i := 0; i <= 200; i++ { - eps, err = k.KubeClient.CoreV1().Endpoints(k.Namespace).Get(svc, getOpt) + eps, err = k.KubeAccessor.GetEndpoints(k.Namespace, svc, getOpt) if (len(eps.Subsets) == 0) || (err != nil) { time.Sleep(time.Second * 1) } else { diff --git a/tests/e2e/tests/dashboard/dashboard_test.go b/tests/e2e/tests/dashboard/dashboard_test.go index 18a0e49ddaeb..92e2391921db 100644 --- a/tests/e2e/tests/dashboard/dashboard_test.go +++ b/tests/e2e/tests/dashboard/dashboard_test.go @@ -340,6 
+340,10 @@ func galleyQueryFilterFn(queries []string) []string { if strings.Contains(query, "request_acks_total") { continue } + // This is a frequent source of flakes in e2e-dashboard test. Remove from checked queries for now. + if strings.Contains(query, "runtime_strategy_timer_quiesce_reached_total") { + continue + } filtered = append(filtered, query) } return filtered diff --git a/tests/e2e/tests/mixer/mixer_test.go b/tests/e2e/tests/mixer/mixer_test.go index 4d59a4aa4039..87798fa2ce6e 100644 --- a/tests/e2e/tests/mixer/mixer_test.go +++ b/tests/e2e/tests/mixer/mixer_test.go @@ -57,19 +57,11 @@ const ( mixerMetricsPort = uint16(42422) productPagePort = uint16(10000) - srcLabel = "source_service" - srcPodLabel = "source_pod" - srcWorkloadLabel = "source_workload" - srcOwnerLabel = "source_owner" - srcUIDLabel = "source_workload_uid" - destLabel = "destination_service" - destPodLabel = "destination_pod" - destWorkloadLabel = "destination_workload" - destOwnerLabel = "destination_owner" - destUIDLabel = "destination_workload_uid" - destContainerLabel = "destination_container" - responseCodeLabel = "response_code" - reporterLabel = "reporter" + srcLabel = "source_service" + srcWorkloadLabel = "source_workload" + destLabel = "destination_service" + responseCodeLabel = "response_code" + reporterLabel = "reporter" // This namespace is used by default in all mixer config documents. // It will be replaced with the test namespace. 
@@ -305,7 +297,8 @@ func dumpK8Env() { _, _ = util.Shell("kubectl --namespace %s get pods -o wide", tc.Kube.Namespace) podLogs("istio="+ingressName, ingressName) - podLogs("istio=mixer", "mixer") + podLogs("istio=mixer,istio-mixer-type=policy", "mixer") + podLogs("istio=mixer,istio-mixer-type=telemetry", "mixer") podLogs("istio=pilot", "discovery") podLogs("app=productpage", "istio-proxy") @@ -322,24 +315,6 @@ func podID(labelSelector string) (pod string, err error) { return } -func deployment(labelSelector string) (name, owner, uid string, err error) { - name, err = util.Shell("kubectl -n %s get deployment -l %s -o jsonpath='{.items[0].metadata.name}'", tc.Kube.Namespace, labelSelector) - if err != nil { - log.Warnf("could not get %s deployment: %v", labelSelector, err) - return - } - log.Infof("%s deployment name: %s", labelSelector, name) - uid = fmt.Sprintf("istio://%s/workloads/%s", tc.Kube.Namespace, name) - selfLink, err := util.Shell("kubectl -n %s get deployment -l %s -o jsonpath='{.items[0].metadata.selfLink}'", tc.Kube.Namespace, labelSelector) - if err != nil { - log.Warnf("could not get deployment %s self link: %v", name, err) - return - } - log.Infof("deployment %s self link: %s", labelSelector, selfLink) - owner = fmt.Sprintf("kubernetes:/%s", selfLink) - return -} - func podLogs(labelSelector string, container string) { pod, err := podID(labelSelector) if err != nil { @@ -421,11 +396,9 @@ type redisDeployment struct { } func (r *redisDeployment) Setup() error { - // Deploy Tiller if not already running. 
- if err := util.CheckPodRunning("kube-system", "name=tiller", tc.Kube.KubeConfig); err != nil { - if errDeployTiller := tc.Kube.DeployTiller(); errDeployTiller != nil { - return fmt.Errorf("failed to deploy helm tiller: %v", errDeployTiller) - } + // Deploy Tiller if not already deployed + if errDeployTiller := tc.Kube.DeployTiller(); errDeployTiller != nil { + return fmt.Errorf("failed to deploy helm tiller: %v", errDeployTiller) } setValue := "--set usePassword=false,persistence.enabled=false" @@ -582,11 +555,11 @@ func TestTcpMetrics(t *testing.T) { if err != nil { fatalf(t, "Could not build prometheus API client: %v", err) } - query := fmt.Sprintf("istio_tcp_sent_bytes_total{destination_app=\"%s\"}", "mongodb") + query := fmt.Sprintf("sum(istio_tcp_sent_bytes_total{destination_app=\"%s\"})", "mongodb") want := float64(1) validateMetric(t, promAPI, query, "istio_tcp_sent_bytes_total", want) - query = fmt.Sprintf("istio_tcp_received_bytes_total{destination_app=\"%s\"}", "mongodb") + query = fmt.Sprintf("sum(istio_tcp_received_bytes_total{destination_app=\"%s\"})", "mongodb") validateMetric(t, promAPI, query, "istio_tcp_received_bytes_total", want) query = fmt.Sprintf("sum(istio_tcp_connections_opened_total{destination_app=\"%s\"})", "mongodb") @@ -600,16 +573,33 @@ func TestTcpMetrics(t *testing.T) { func validateMetric(t *testing.T, promAPI v1.API, query, metricName string, want float64) { t.Helper() t.Logf("prometheus query: %s", query) - value, err := promAPI.Query(context.Background(), query, time.Now()) - if err != nil { - fatalf(t, "Could not get metrics from prometheus: %v", err) + + var got float64 + + retry := util.Retrier{ + BaseDelay: 30 * time.Second, + Retries: 4, } - got, err := vectorValue(value, map[string]string{}) - if err != nil { + retryFn := func(_ context.Context, i int) error { + t.Helper() + t.Logf("Trying to find metrics via promql (attempt %d)...", i) + value, err := promAPI.Query(context.Background(), query, time.Now()) + if err != nil 
{ + errorf(t, "Could not get metrics from prometheus: %v", err) + return err + } + got, err = vectorValue(value, map[string]string{}) + t.Logf("vector value => got: %f, err: %v", got, err) + return err + } + + if _, err := retry.Retry(context.Background(), retryFn); err != nil { t.Logf("prometheus values for %s:\n%s", metricName, promDump(promAPI, metricName)) + dumpMixerMetrics() fatalf(t, "Could not find metric value: %v", err) } + t.Logf("%s: %f", metricName, got) if got < want { t.Logf("prometheus values for %s:\n%s", metricName, promDump(promAPI, metricName)) @@ -686,27 +676,9 @@ func TestKubeenvMetrics(t *testing.T) { if err != nil { fatalf(t, "Could not build prometheus API client: %v", err) } - productPagePod, err := podID("app=productpage") - if err != nil { - fatalf(t, "Could not get productpage pod ID: %v", err) - } - productPageWorkloadName, productPageOwner, productPageUID, err := deployment("app=productpage") - if err != nil { - fatalf(t, "Could not get productpage deployment metadata: %v", err) - } - ingressPod, err := podID(fmt.Sprintf("istio=%s", ingressName)) - if err != nil { - fatalf(t, "Could not get ingress pod ID: %v", err) - } - ingressWorkloadName, ingressOwner, ingressUID, err := deployment(fmt.Sprintf("istio=%s", ingressName)) - if err != nil { - fatalf(t, "Could not get ingress deployment metadata: %v", err) - } - query := fmt.Sprintf("istio_kube_request_count{%s=\"%s\",%s=\"%s\",%s=\"%s\",%s=\"%s\",%s=\"%s\",%s=\"%s\",%s=\"%s\",%s=\"%s\",%s=\"%s\",%s=\"200\"}", - srcPodLabel, ingressPod, srcWorkloadLabel, ingressWorkloadName, srcOwnerLabel, ingressOwner, srcUIDLabel, ingressUID, - destPodLabel, productPagePod, destWorkloadLabel, productPageWorkloadName, destOwnerLabel, productPageOwner, destUIDLabel, productPageUID, - destContainerLabel, "productpage", responseCodeLabel) + // instead of trying to find an exact match, we'll loop through all successful requests to ensure no values are "unknown" + query := 
fmt.Sprintf("istio_kube_request_count{%s=\"200\"}", responseCodeLabel) t.Logf("prometheus query: %s", query) value, err := promAPI.Query(context.Background(), query, time.Now()) if err != nil { @@ -714,14 +686,22 @@ func TestKubeenvMetrics(t *testing.T) { } log.Infof("promvalue := %s", value.String()) - got, err := vectorValue(value, map[string]string{}) - if err != nil { - t.Logf("prometheus values for istio_kube_request_count:\n%s", promDump(promAPI, "istio_kube_request_count")) - fatalf(t, "Error get metric value: %v", err) + if value.Type() != model.ValVector { + errorf(t, "Value not a model.Vector; was %s", value.Type().String()) } - want := float64(1) - if got < want { - errorf(t, "Bad metric value: got %f, want at least %f", got, want) + vec := value.(model.Vector) + + if got, want := len(vec), 1; got < want { + errorf(t, "Found %d istio_kube_request_count metrics, want at least %d", got, want) + } + + for _, sample := range vec { + metric := sample.Metric + for labelKey, labelVal := range metric { + if labelVal == "unknown" { + errorf(t, "Unexpected 'unknown' value for label '%s' in sample '%s'", labelKey, sample) + } + } } } @@ -869,30 +849,54 @@ func testCheckCache(t *testing.T, visit func() error, app string) { } // nolint: unparam -func fetchRequestCount(t *testing.T, promAPI v1.API, service string) (prior429s float64, prior200s float64, value model.Value) { +func fetchRequestCount(t *testing.T, promAPI v1.API, service, additionalLabels string, totalReqExpected float64) (prior429s float64, prior200s float64) { var err error t.Log("Establishing metrics baseline for test...") - query := fmt.Sprintf("istio_requests_total{%s=\"%s\"}", destLabel, fqdn(service)) - t.Logf("prometheus query: %s", query) - value, err = promAPI.Query(context.Background(), query, time.Now()) - if err != nil { + + retry := util.Retrier{ + BaseDelay: 30 * time.Second, + Retries: 4, + } + + retryFn := func(_ context.Context, i int) error { + t.Helper() + t.Logf("Trying to find 
metrics via promql (attempt %d)...", i) + query := fmt.Sprintf("sum(istio_requests_total{%s=\"%s\",%s=\"%s\",%s})", destLabel, fqdn(service), reporterLabel, "destination", additionalLabels) + t.Logf("prometheus query: %s", query) + totalReq, err := queryValue(promAPI, query) + if err != nil { + t.Logf("error getting total requests (msg: %v)", err) + return err + } + if totalReq < totalReqExpected { + return fmt.Errorf("total requests: %f less than expected: %f", totalReq, totalReqExpected) + } + return nil + } + + if _, err := retry.Retry(context.Background(), retryFn); err != nil { + dumpMixerMetrics() fatalf(t, "Could not get metrics from prometheus: %v", err) } - prior429s, err = vectorValue(value, map[string]string{responseCodeLabel: "429", reporterLabel: "destination"}) + query := fmt.Sprintf("sum(istio_requests_total{%s=\"%s\",%s=\"%s\",%s=\"%s\",%s})", destLabel, fqdn(service), + reporterLabel, "destination", responseCodeLabel, "429", additionalLabels) + prior429s, err = queryValue(promAPI, query) if err != nil { t.Logf("error getting prior 429s, using 0 as value (msg: %v)", err) prior429s = 0 } - prior200s, err = vectorValue(value, map[string]string{responseCodeLabel: "200", reporterLabel: "destination"}) + query = fmt.Sprintf("sum(istio_requests_total{%s=\"%s\",%s=\"%s\",%s=\"%s\",%s})", destLabel, fqdn(service), + reporterLabel, "destination", responseCodeLabel, "200", additionalLabels) + prior200s, err = queryValue(promAPI, query) if err != nil { t.Logf("error getting prior 200s, using 0 as value (msg: %v)", err) prior200s = 0 } t.Logf("Baseline established: prior200s = %f, prior429s = %f", prior200s, prior429s) - return prior429s, prior200s, value + return prior429s, prior200s } func sendTraffic(t *testing.T, msg string, calls int64) *fhttp.HTTPRunnerResults { @@ -952,11 +956,11 @@ func TestMetricsAndRateLimitAndRulesAndBookinfo(t *testing.T) { // establish baseline - initPrior429s, _, _ := fetchRequestCount(t, promAPI, "ratings") + initPrior429s, 
initPrior200s := fetchRequestCount(t, promAPI, "ratings", "", 0) _ = sendTraffic(t, "Warming traffic...", 150) allowPrometheusSync() - prior429s, prior200s, _ := fetchRequestCount(t, promAPI, "ratings") + prior429s, prior200s := fetchRequestCount(t, promAPI, "ratings", "", initPrior429s+initPrior200s+150) // check if at least one more prior429 was reported if prior429s-initPrior429s < 1 { fatalf(t, "no 429 is allotted time: prior429s:%v", prior429s) } @@ -993,55 +997,49 @@ func TestMetricsAndRateLimitAndRulesAndBookinfo(t *testing.T) { fatalf(t, "Not enough traffic generated to exercise rate limit: ratings_reqs=%f, want200s=%f", callsToRatings, want200s) } - _, _, value := fetchRequestCount(t, promAPI, "ratings") - log.Infof("promvalue := %s", value.String()) - - got, err := vectorValue(value, map[string]string{responseCodeLabel: "429", "destination_version": "v1"}) - if err != nil { + got429s, got200s := fetchRequestCount(t, promAPI, "ratings", "destination_version=\"v1\"", prior429s+prior200s+300) + if got429s == 0 { t.Logf("prometheus values for istio_requests_total:\n%s", promDump(promAPI, "istio_requests_total")) errorf(t, "Could not find 429s: %v", err) - got = 0 // want to see 200 rate even if no 429s were recorded } // Lenient calculation TODO: tighten/simplify - want := math.Floor(want429s * .25) + want429s = math.Floor(want429s * .25) - got = got - prior429s + got429s = got429s - prior429s - t.Logf("Actual 429s: %f (%f rps)", got, got/actualDuration) + t.Logf("Actual 429s: %f (%f rps)", got429s, got429s/actualDuration) // check resource exhausted - if got < want { + if got429s < want429s { t.Logf("prometheus values for istio_requests_total:\n%s", promDump(promAPI, "istio_requests_total")) - errorf(t, "Bad metric value for rate-limited requests (429s): got %f, want at least %f", got, want) + errorf(t, "Bad metric value for rate-limited requests (429s): got %f, want at least %f", got429s, want429s) } - got, err = vectorValue(value, 
map[string]string{responseCodeLabel: "200", "destination_version": "v1"}) - if err != nil { + if got200s == 0 { t.Logf("prometheus values for istio_requests_total:\n%s", promDump(promAPI, "istio_requests_total")) errorf(t, "Could not find successes value: %v", err) - got = 0 } - got = got - prior200s + got200s = got200s - prior200s - t.Logf("Actual 200s: %f (%f rps), expecting ~1 rps", got, got/actualDuration) + t.Logf("Actual 200s: %f (%f rps), expecting ~1 rps", got200s, got200s/actualDuration) // establish some baseline to protect against flakiness due to randomness in routing // and to allow for leniency in actual ceiling of enforcement (if 10 is the limit, but we allow slightly // less than 10, don't fail this test). - want = math.Floor(want200s * .25) + want := math.Floor(want200s * .25) // check successes - if got < want { + if got200s < want { t.Logf("prometheus values for istio_requests_total:\n%s", promDump(promAPI, "istio_requests_total")) - errorf(t, "Bad metric value for successful requests (200s): got %f, want at least %f", got, want) + errorf(t, "Bad metric value for successful requests (200s): got %f, want at least %f", got200s, want) } // TODO: until https://github.com/istio/istio/issues/3028 is fixed, use 25% - should be only 5% or so want200s = math.Ceil(want200s * 1.5) - if got > want200s { + if got200s > want200s { t.Logf("prometheus values for istio_requests_total:\n%s", promDump(promAPI, "istio_requests_total")) - errorf(t, "Bad metric value for successful requests (200s): got %f, want at most %f", got, want200s) + errorf(t, "Bad metric value for successful requests (200s): got %f, want at most %f", got200s, want200s) } } @@ -1073,14 +1071,16 @@ func testRedisQuota(t *testing.T, quotaRule string) { fatalf(t, "Could not build prometheus API client: %v", err) } + // This is the number of requests allowed to be missing from telemetry reporting, to keep the test stable. 
+ errorInRequestReportingAllowed := 5.0 // establish baseline _ = sendTraffic(t, "Warming traffic...", 150) allowPrometheusSync() - initPrior429s, _, _ := fetchRequestCount(t, promAPI, "ratings") + initPrior429s, initPrior200s := fetchRequestCount(t, promAPI, "ratings", "", 0) _ = sendTraffic(t, "Warming traffic...", 150) allowPrometheusSync() - prior429s, prior200s, _ := fetchRequestCount(t, promAPI, "ratings") + prior429s, prior200s := fetchRequestCount(t, promAPI, "ratings", "", initPrior429s+initPrior200s+150-errorInRequestReportingAllowed) // check if at least one more prior429 was reported if prior429s-initPrior429s < 1 { fatalf(t, "no 429 in allotted time: prior429s:%v", prior429s) @@ -1123,64 +1123,59 @@ func testRedisQuota(t *testing.T, quotaRule string) { fatalf(t, "Not enough traffic generated to exercise rate limit: ratings_reqs=%f, want200s=%f", callsToRatings, want200s) } - _, _, value := fetchRequestCount(t, promAPI, "ratings") - log.Infof("promvalue := %s", value.String()) + got429s, got200s := fetchRequestCount(t, promAPI, "ratings", "", prior429s+prior200s+300-errorInRequestReportingAllowed) - got, err := vectorValue(value, map[string]string{responseCodeLabel: "429", reporterLabel: "destination"}) - if err != nil { + if got429s == 0 { attributes := []string{fmt.Sprintf("%s=\"%s\"", destLabel, fqdn("ratings")), fmt.Sprintf("%s=\"%d\"", responseCodeLabel, 429), fmt.Sprintf("%s=\"%s\"", reporterLabel, "destination")} t.Logf("prometheus values for istio_requests_total for 429's:\n%s", promDumpWithAttributes(promAPI, "istio_requests_total", attributes)) errorf(t, "Could not find 429s: %v", err) - got = 0 // want to see 200 rate even if no 429s were recorded } - want := math.Floor(want429s * 0.70) + want429s = math.Floor(want429s * 0.70) - got = got - prior429s + got429s = got429s - prior429s - t.Logf("Actual 429s: %f (%f rps)", got, got/actualDuration) + t.Logf("Actual 429s: %f (%f rps)", got429s, got429s/actualDuration) // check resource exhausted - 
if got < want { + if got429s < want429s { attributes := []string{fmt.Sprintf("%s=\"%s\"", destLabel, fqdn("ratings")), fmt.Sprintf("%s=\"%d\"", responseCodeLabel, 429), fmt.Sprintf("%s=\"%s\"", reporterLabel, "destination")} t.Logf("prometheus values for istio_requests_total for 429's:\n%s", promDumpWithAttributes(promAPI, "istio_requests_total", attributes)) - errorf(t, "Bad metric value for rate-limited requests (429s): got %f, want at least %f", got, want) + errorf(t, "Bad metric value for rate-limited requests (429s): got %f, want at least %f", got429s, want429s) } - got, err = vectorValue(value, map[string]string{responseCodeLabel: "200", reporterLabel: "destination"}) - if err != nil { + if got200s == 0 { attributes := []string{fmt.Sprintf("%s=\"%s\"", destLabel, fqdn("ratings")), fmt.Sprintf("%s=\"%d\"", responseCodeLabel, 200), fmt.Sprintf("%s=\"%s\"", reporterLabel, "destination")} t.Logf("prometheus values for istio_requests_total for 200's:\n%s", promDumpWithAttributes(promAPI, "istio_requests_total", attributes)) errorf(t, "Could not find successes value: %v", err) - got = 0 } - got = got - prior200s + got200s = got200s - prior200s - t.Logf("Actual 200s: %f (%f rps), expecting ~1.666rps", got, got/actualDuration) + t.Logf("Actual 200s: %f (%f rps), expecting ~1.666rps", got200s, got200s/actualDuration) // establish some baseline to protect against flakiness due to randomness in routing // and to allow for leniency in actual ceiling of enforcement (if 10 is the limit, but we allow slightly // less than 10, don't fail this test). 
- want = math.Floor(want200s * 0.70) + want := math.Floor(want200s * 0.70) // check successes - if got < want { + if got200s < want { attributes := []string{fmt.Sprintf("%s=\"%s\"", destLabel, fqdn("ratings")), fmt.Sprintf("%s=\"%d\"", responseCodeLabel, 200), fmt.Sprintf("%s=\"%s\"", reporterLabel, "destination")} t.Logf("prometheus values for istio_requests_total for 200's:\n%s", promDumpWithAttributes(promAPI, "istio_requests_total", attributes)) - errorf(t, "Bad metric value for successful requests (200s): got %f, want at least %f", got, want) + errorf(t, "Bad metric value for successful requests (200s): got %f, want at least %f", got200s, want) } // TODO: until https://github.com/istio/istio/issues/3028 is fixed, use 25% - should be only 5% or so want200s = math.Ceil(want200s * 1.25) - if got > want200s { + if got200s > want200s { attributes := []string{fmt.Sprintf("%s=\"%s\"", destLabel, fqdn("ratings")), fmt.Sprintf("%s=\"%d\"", responseCodeLabel, 200), fmt.Sprintf("%s=\"%s\"", reporterLabel, "destination")} t.Logf("prometheus values for istio_requests_total for 200's:\n%s", promDumpWithAttributes(promAPI, "istio_requests_total", attributes)) - errorf(t, "Bad metric value for successful requests (200s): got %f, want at most %f", got, want200s) + errorf(t, "Bad metric value for successful requests (200s): got %f, want at most %f", got200s, want200s) } } diff --git a/tests/e2e/tests/pilot/cloudfoundry/copilot_test.go b/tests/e2e/tests/pilot/cloudfoundry/copilot_test.go index 393b1227a387..dc2e121e1e8a 100644 --- a/tests/e2e/tests/pilot/cloudfoundry/copilot_test.go +++ b/tests/e2e/tests/pilot/cloudfoundry/copilot_test.go @@ -18,25 +18,23 @@ import ( "encoding/json" "fmt" "io/ioutil" - "log" "net/http" "net/url" "strings" "testing" "time" - "github.com/gogo/protobuf/proto" "github.com/gogo/protobuf/types" "github.com/onsi/gomega" - mcp "istio.io/api/mcp/v1alpha1" networking "istio.io/api/networking/v1alpha3" mixerEnv "istio.io/istio/mixer/test/client/env" 
"istio.io/istio/pilot/pkg/bootstrap" "istio.io/istio/pilot/pkg/model" - mcpserver "istio.io/istio/pkg/mcp/server" + "istio.io/istio/pkg/mcp/snapshot" + "istio.io/istio/pkg/mcp/source" + mcptesting "istio.io/istio/pkg/mcp/testing" "istio.io/istio/pkg/test/env" - mockmcp "istio.io/istio/tests/e2e/tests/pilot/mock/mcp" "istio.io/istio/tests/util" ) @@ -66,6 +64,7 @@ func pilotURL(path string) string { } var fakeCreateTime *types.Timestamp +var fakeCreateTime2 = time.Date(2018, time.January, 1, 2, 3, 4, 5, time.UTC) func TestWildcardHostEdgeRouterWithMockCopilot(t *testing.T) { g := gomega.NewGomegaWithT(t) @@ -81,10 +80,35 @@ func TestWildcardHostEdgeRouterWithMockCopilot(t *testing.T) { fakeCreateTime, err = types.TimestampProto(time.Date(2018, time.January, 1, 12, 15, 30, 5e8, time.UTC)) g.Expect(err).NotTo(gomega.HaveOccurred()) - copilotMCPServer, err := startMCPCopilot(mcpServerResponse) + copilotMCPServer, err := startMCPCopilot() g.Expect(err).NotTo(gomega.HaveOccurred()) defer copilotMCPServer.Close() + sn := snapshot.NewInMemoryBuilder() + + for _, m := range model.IstioConfigTypes { + sn.SetVersion(m.Collection, "v0") + } + + sn.SetEntry(model.Gateway.Collection, "cloudfoundry-ingress", "v1", fakeCreateTime2, nil, nil, gateway) + + sn.SetEntry(model.VirtualService.Collection, "vs-1", "v1", fakeCreateTime2, nil, nil, + virtualService(8060, "cloudfoundry-ingress", "/some/path", cfRouteOne, subsetOne)) + sn.SetEntry(model.VirtualService.Collection, "vs-2", "v1", fakeCreateTime2, nil, nil, + virtualService(8070, "cloudfoundry-ingress", "", cfRouteTwo, subsetTwo)) + + sn.SetEntry(model.DestinationRule.Collection, "dr-1", "v1", fakeCreateTime2, nil, nil, + destinationRule(cfRouteOne, subsetOne)) + sn.SetEntry(model.DestinationRule.Collection, "dr-2", "v1", fakeCreateTime2, nil, nil, + destinationRule(cfRouteTwo, subsetTwo)) + + sn.SetEntry(model.ServiceEntry.Collection, "se-1", "v1", fakeCreateTime2, nil, nil, + serviceEntry(8060, app1ListenPort, nil, cfRouteOne, 
subsetOne)) + sn.SetEntry(model.ServiceEntry.Collection, "se-2", "v1", fakeCreateTime2, nil, nil, + serviceEntry(8070, app2ListenPort, nil, cfRouteTwo, subsetTwo)) + + copilotMCPServer.Cache.SetSnapshot(snapshot.DefaultGroup, sn.Build()) + tearDown := initLocalPilotTestEnv(t, copilotMCPServer.Port, pilotGrpcPort, pilotDebugPort) defer tearDown() @@ -162,10 +186,18 @@ func TestWildcardHostSidecarRouterWithMockCopilot(t *testing.T) { runFakeApp(app3ListenPort) t.Logf("internal backend is running on port %d", app3ListenPort) - copilotMCPServer, err := startMCPCopilot(mcpSidecarServerResponse) + copilotMCPServer, err := startMCPCopilot() g.Expect(err).NotTo(gomega.HaveOccurred()) defer copilotMCPServer.Close() + sn := snapshot.NewInMemoryBuilder() + for _, m := range model.IstioConfigTypes { + sn.SetVersion(m.Collection, "v0") + } + sn.SetEntry(model.ServiceEntry.Collection, "se-1", "v1", fakeCreateTime2, nil, nil, + serviceEntry(sidecarServicePort, app3ListenPort, []string{"127.1.1.1"}, cfInternalRoute, subsetOne)) + copilotMCPServer.Cache.SetSnapshot(snapshot.DefaultGroup, sn.Build()) + tearDown := initLocalPilotTestEnv(t, copilotMCPServer.Port, pilotGrpcPort, pilotDebugPort) defer tearDown() @@ -205,13 +237,13 @@ func TestWildcardHostSidecarRouterWithMockCopilot(t *testing.T) { }, "300s", "1s").Should(gomega.Succeed()) } -func startMCPCopilot(serverResponse func(req *mcp.MeshConfigRequest) (*mcpserver.WatchResponse, mcpserver.CancelWatchFunc)) (*mockmcp.Server, error) { - supportedTypes := make([]string, len(model.IstioConfigTypes)) +func startMCPCopilot() (*mcptesting.Server, error) { + collections := make([]string, len(model.IstioConfigTypes)) for i, m := range model.IstioConfigTypes { - supportedTypes[i] = fmt.Sprintf("type.googleapis.com/%s", m.MessageName) + collections[i] = m.Collection } - server, err := mockmcp.NewServer(supportedTypes, serverResponse) + server, err := mcptesting.NewServer(0, source.CollectionOptionsFromSlice(collections)) if err != nil { 
return nil, err } @@ -319,94 +351,6 @@ func curlApp(endpoint, hostRoute url.URL) (string, error) { return string(respBytes), nil } -var gatewayTestAllConfig = map[string]map[string]proto.Message{ - fmt.Sprintf("type.googleapis.com/%s", model.Gateway.MessageName): map[string]proto.Message{ - "cloudfoundry-ingress": gateway, - }, - - fmt.Sprintf("type.googleapis.com/%s", model.VirtualService.MessageName): map[string]proto.Message{ - "vs-1": virtualService(8060, "cloudfoundry-ingress", "/some/path", cfRouteOne, subsetOne), - "vs-2": virtualService(8070, "cloudfoundry-ingress", "", cfRouteTwo, subsetTwo), - }, - - fmt.Sprintf("type.googleapis.com/%s", model.DestinationRule.MessageName): map[string]proto.Message{ - "dr-1": destinationRule(cfRouteOne, subsetOne), - "dr-2": destinationRule(cfRouteTwo, subsetTwo), - }, - - fmt.Sprintf("type.googleapis.com/%s", model.ServiceEntry.MessageName): map[string]proto.Message{ - "se-1": serviceEntry(8060, app1ListenPort, nil, cfRouteOne, subsetOne), - "se-2": serviceEntry(8070, app2ListenPort, nil, cfRouteTwo, subsetTwo), - }, -} - -var sidecarTestAllConfig = map[string]map[string]proto.Message{ - fmt.Sprintf("type.googleapis.com/%s", model.ServiceEntry.MessageName): map[string]proto.Message{ - "se-1": serviceEntry(sidecarServicePort, app3ListenPort, []string{"127.1.1.1"}, cfInternalRoute, subsetOne), - }, -} - -func mcpSidecarServerResponse(req *mcp.MeshConfigRequest) (*mcpserver.WatchResponse, mcpserver.CancelWatchFunc) { - var cancelFunc mcpserver.CancelWatchFunc - cancelFunc = func() { - log.Printf("watch canceled for %s\n", req.GetTypeUrl()) - } - - namedMsgs, ok := sidecarTestAllConfig[req.GetTypeUrl()] - if ok { - return buildWatchResp(req, namedMsgs), cancelFunc - } - - return &mcpserver.WatchResponse{ - Version: req.GetVersionInfo(), - TypeURL: req.GetTypeUrl(), - Envelopes: []*mcp.Envelope{}, - }, cancelFunc -} - -func mcpServerResponse(req *mcp.MeshConfigRequest) (*mcpserver.WatchResponse, mcpserver.CancelWatchFunc) { - 
var cancelFunc mcpserver.CancelWatchFunc - cancelFunc = func() { - log.Printf("watch canceled for %s\n", req.GetTypeUrl()) - } - - namedMsgs, ok := gatewayTestAllConfig[req.GetTypeUrl()] - if ok { - return buildWatchResp(req, namedMsgs), cancelFunc - } - - return &mcpserver.WatchResponse{ - Version: req.GetVersionInfo(), - TypeURL: req.GetTypeUrl(), - Envelopes: []*mcp.Envelope{}, - }, cancelFunc -} - -func buildWatchResp(req *mcp.MeshConfigRequest, namedMsgs map[string]proto.Message) *mcpserver.WatchResponse { - envelopes := []*mcp.Envelope{} - for name, msg := range namedMsgs { - marshaledMsg, err := proto.Marshal(msg) - if err != nil { - log.Fatalf("marshaling %s: %s\n", name, err) - } - envelopes = append(envelopes, &mcp.Envelope{ - Metadata: &mcp.Metadata{ - Name: name, - CreateTime: fakeCreateTime, - }, - Resource: &types.Any{ - TypeUrl: req.GetTypeUrl(), - Value: marshaledMsg, - }, - }) - } - return &mcpserver.WatchResponse{ - Version: req.GetVersionInfo(), - TypeURL: req.GetTypeUrl(), - Envelopes: envelopes, - } -} - var gateway = &networking.Gateway{ Servers: []*networking.Server{ &networking.Server{ diff --git a/tests/e2e/tests/pilot/mcp_test.go b/tests/e2e/tests/pilot/mcp_test.go index 1d1b70489a72..39e30551822f 100644 --- a/tests/e2e/tests/pilot/mcp_test.go +++ b/tests/e2e/tests/pilot/mcp_test.go @@ -17,27 +17,25 @@ package pilot import ( "fmt" "io/ioutil" - "log" "net" "net/http" "testing" "time" - "github.com/gogo/protobuf/proto" "github.com/gogo/protobuf/types" "github.com/onsi/gomega" - mcp "istio.io/api/mcp/v1alpha1" networking "istio.io/api/networking/v1alpha3" mixerEnv "istio.io/istio/mixer/test/client/env" "istio.io/istio/pilot/pkg/bootstrap" "istio.io/istio/pilot/pkg/model" - mcpserver "istio.io/istio/pkg/mcp/server" - mockmcp "istio.io/istio/tests/e2e/tests/pilot/mock/mcp" + "istio.io/istio/pkg/mcp/source" "istio.io/istio/tests/util" // Import the resource package to pull in all proto types. 
_ "istio.io/istio/galley/pkg/metadata" + "istio.io/istio/pkg/mcp/snapshot" + mcptesting "istio.io/istio/pkg/mcp/testing" "istio.io/istio/pkg/test/env" ) @@ -47,6 +45,7 @@ const ( ) var fakeCreateTime *types.Timestamp +var fakeCreateTime2 = time.Date(2018, time.January, 1, 2, 3, 4, 5, time.UTC) func TestPilotMCPClient(t *testing.T) { g := gomega.NewGomegaWithT(t) @@ -60,6 +59,16 @@ func TestPilotMCPClient(t *testing.T) { g.Expect(err).NotTo(gomega.HaveOccurred()) defer mcpServer.Close() + sn := snapshot.NewInMemoryBuilder() + for _, m := range model.IstioConfigTypes { + sn.SetVersion(m.Collection, "v0") + } + + sn.SetEntry(model.Gateway.Collection, "some-name", "v1", fakeCreateTime2, nil, nil, firstGateway) + sn.SetEntry(model.Gateway.Collection, "some-other-name", "v1", fakeCreateTime2, nil, nil, secondGateway) + + mcpServer.Cache.SetSnapshot(snapshot.DefaultGroup, sn.Build()) + tearDown := initLocalPilotTestEnv(t, mcpServer.Port, pilotGrpcPort, pilotDebugPort) defer tearDown() @@ -79,67 +88,12 @@ func TestPilotMCPClient(t *testing.T) { }, "180s", "1s").Should(gomega.Succeed()) } -func mcpServerResponse(req *mcp.MeshConfigRequest) (*mcpserver.WatchResponse, mcpserver.CancelWatchFunc) { - var cancelFunc mcpserver.CancelWatchFunc - cancelFunc = func() { - log.Printf("watch canceled for %s\n", req.GetTypeUrl()) - } - if req.GetTypeUrl() == fmt.Sprintf("type.googleapis.com/%s", model.Gateway.MessageName) { - marshaledFirstGateway, err := proto.Marshal(firstGateway) - if err != nil { - log.Fatalf("marshaling gateway %s\n", err) - } - marshaledSecondGateway, err := proto.Marshal(secondGateway) - if err != nil { - log.Fatalf("marshaling gateway %s\n", err) - } - - return &mcpserver.WatchResponse{ - Version: req.GetVersionInfo(), - TypeURL: req.GetTypeUrl(), - Envelopes: []*mcp.Envelope{ - { - Metadata: &mcp.Metadata{ - Name: "some-name", - CreateTime: fakeCreateTime, - }, - Resource: &types.Any{ - TypeUrl: req.GetTypeUrl(), - Value: marshaledFirstGateway, - }, - }, - { - 
Metadata: &mcp.Metadata{ - Name: "some-other-name", - CreateTime: fakeCreateTime, - }, - Resource: &types.Any{ - TypeUrl: req.GetTypeUrl(), - Value: marshaledSecondGateway, - }, - }, - }, - }, cancelFunc - } - return &mcpserver.WatchResponse{ - Version: req.GetVersionInfo(), - TypeURL: req.GetTypeUrl(), - Envelopes: []*mcp.Envelope{}, - }, cancelFunc -} - -func runMcpServer() (*mockmcp.Server, error) { - supportedTypes := make([]string, len(model.IstioConfigTypes)) +func runMcpServer() (*mcptesting.Server, error) { + collections := make([]string, len(model.IstioConfigTypes)) for i, m := range model.IstioConfigTypes { - supportedTypes[i] = fmt.Sprintf("type.googleapis.com/%s", m.MessageName) - } - - server, err := mockmcp.NewServer(supportedTypes, mcpServerResponse) - if err != nil { - return nil, err + collections[i] = m.Collection } - - return server, nil + return mcptesting.NewServer(0, source.CollectionOptionsFromSlice(collections)) } func runEnvoy(t *testing.T, grpcPort, debugPort uint16) *mixerEnv.TestSetup { diff --git a/tests/e2e/tests/pilot/mesh_config_verify_test.go b/tests/e2e/tests/pilot/mesh_config_verify_test.go index e6af0aeb4d5a..932c895bad9f 100644 --- a/tests/e2e/tests/pilot/mesh_config_verify_test.go +++ b/tests/e2e/tests/pilot/mesh_config_verify_test.go @@ -68,7 +68,7 @@ func deleteRemoteCluster() error { func deployApp(cluster string, deploymentName, serviceName string, port1, port2, port3, port4, port5, port6 int, version string, injectProxy bool, headless bool, serviceAccount bool, createService bool) (*framework.App, error) { - tmpApp := getApp(deploymentName, serviceName, port1, port2, port3, port4, port5, port6, version, injectProxy, headless, serviceAccount, createService) + tmpApp := getApp(deploymentName, serviceName, 1, port1, port2, port3, port4, port5, port6, version, injectProxy, headless, serviceAccount, createService) var appMgr *framework.AppManager if cluster == primaryCluster { diff --git 
a/tests/e2e/tests/pilot/mock/mcp/server.go b/tests/e2e/tests/pilot/mock/mcp/server.go index 50e979c0826c..adc840d697fd 100644 --- a/tests/e2e/tests/pilot/mock/mcp/server.go +++ b/tests/e2e/tests/pilot/mock/mcp/server.go @@ -22,17 +22,18 @@ import ( "google.golang.org/grpc" mcp "istio.io/api/mcp/v1alpha1" - mcpserver "istio.io/istio/pkg/mcp/server" - mcptestmon "istio.io/istio/pkg/mcp/testing/monitoring" + "istio.io/istio/pkg/mcp/server" + "istio.io/istio/pkg/mcp/source" + "istio.io/istio/pkg/mcp/testing/monitoring" ) -type WatchResponse func(req *mcp.MeshConfigRequest) (*mcpserver.WatchResponse, mcpserver.CancelWatchFunc) +type WatchResponse func(req *source.Request) (*source.WatchResponse, source.CancelWatchFunc) type mockWatcher struct { response WatchResponse } -func (m mockWatcher) Watch(req *mcp.MeshConfigRequest, pushResponse mcpserver.PushResponseFunc) mcpserver.CancelWatchFunc { +func (m mockWatcher) Watch(req *source.Request, pushResponse source.PushResponseFunc) source.CancelWatchFunc { response, cancel := m.response(req) pushResponse(response) return cancel @@ -42,8 +43,8 @@ type Server struct { // The internal snapshot.Cache that the server is using. Watcher *mockWatcher - // TypeURLs that were originally passed in. - TypeURLs []string + // Collections that were originally passed in. + Collections []string // Port that the service is listening on. 
Port int @@ -55,11 +56,17 @@ type Server struct { l net.Listener } -func NewServer(typeUrls []string, watchResponseFunc WatchResponse) (*Server, error) { +func NewServer(collections []string, watchResponseFunc WatchResponse) (*Server, error) { watcher := mockWatcher{ response: watchResponseFunc, } - s := mcpserver.New(watcher, typeUrls, mcpserver.NewAllowAllChecker(), mcptestmon.NewInMemoryServerStatsContext()) + + options := &source.Options{ + Watcher: watcher, + Reporter: monitoring.NewInMemoryStatsContext(), + CollectionsOptions: source.CollectionOptionsFromSlice(collections), + } + s := server.New(options, server.NewAllowAllChecker()) l, err := net.Listen("tcp", "localhost:") if err != nil { @@ -76,17 +83,18 @@ func NewServer(typeUrls []string, watchResponseFunc WatchResponse) (*Server, err gs := grpc.NewServer() - mcp.RegisterAggregatedMeshConfigServiceServer(gs, s) + // mcp.RegisterAggregatedMeshConfigServiceServer(gs, s) + mcp.RegisterResourceSourceServer(gs, s) go func() { _ = gs.Serve(l) }() log.Printf("MCP mock server listening on localhost:%d", p) return &Server{ - Watcher: &watcher, - TypeURLs: typeUrls, - Port: p, - URL: u, - gs: gs, - l: l, + Watcher: &watcher, + Collections: collections, + Port: p, + URL: u, + gs: gs, + l: l, }, nil } @@ -98,7 +106,7 @@ func (t *Server) Close() (err error) { t.l = nil // gRPC stack will close this t.Watcher = nil - t.TypeURLs = nil + t.Collections = nil t.Port = 0 return diff --git a/tests/e2e/tests/pilot/performance/serviceentry_test.go b/tests/e2e/tests/pilot/performance/serviceentry_test.go index 54f5ddfd352f..8724e81f8e3d 100644 --- a/tests/e2e/tests/pilot/performance/serviceentry_test.go +++ b/tests/e2e/tests/pilot/performance/serviceentry_test.go @@ -1,3 +1,17 @@ +// Copyright 2019 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + package performance import ( @@ -13,7 +27,6 @@ import ( "testing" "time" - "github.com/gogo/protobuf/proto" "github.com/gogo/protobuf/types" "github.com/google/uuid" "github.com/onsi/gomega" @@ -25,6 +38,7 @@ import ( "istio.io/istio/pilot/pkg/model" "istio.io/istio/pkg/adsc" "istio.io/istio/pkg/mcp/snapshot" + "istio.io/istio/pkg/mcp/source" mcptest "istio.io/istio/pkg/mcp/testing" "istio.io/istio/tests/util" ) @@ -114,15 +128,15 @@ func runSnapshot(mcpServer *mcptest.Server, quit chan struct{}, t *testing.T) { version := strconv.Itoa(v) for _, m := range model.IstioConfigTypes { if m.MessageName == model.ServiceEntry.MessageName { - b.Set("type.googleapis.com/istio.networking.v1alpha3.ServiceEntry", version, generateServiceEntries(t)) + b.Set(model.ServiceEntry.Collection, version, generateServiceEntries(t)) } else if m.MessageName == model.Gateway.MessageName { gw, err := generateGateway() if err != nil { t.Fatal(err) } - b.Set("type.googleapis.com/istio.networking.v1alpha3.Gateway", version, gw) + b.Set(model.Gateway.Collection, version, gw) } else { - b.Set(fmt.Sprintf("type.googleapis.com/%s", m.MessageName), version, []*mcp.Envelope{}) + b.Set(m.Collection, version, []*mcp.Resource{}) } } @@ -176,11 +190,11 @@ func adsConnectAndWait(n int, pilotAddr string, t *testing.T) (adscs []*adsc.ADS } func runMcpServer() (*mcptest.Server, error) { - supportedTypes := make([]string, len(model.IstioConfigTypes)) + collections := make([]string, len(model.IstioConfigTypes)) for i, m := range model.IstioConfigTypes { - supportedTypes[i] = 
fmt.Sprintf("type.googleapis.com/%s", m.MessageName) + collections[i] = m.Collection } - return mcptest.NewServer(0, supportedTypes) + return mcptest.NewServer(0, source.CollectionOptionsFromSlice(collections)) } func initLocalPilotTestEnv(t *testing.T, mcpPort int, grpcAddr, debugAddr string) util.TearDownFunc { @@ -222,7 +236,7 @@ func registeredServiceEntries(apiEndpoint string) (int, error) { return strings.Count(debug, serviceDomain), nil } -func generateServiceEntries(t *testing.T) (envelopes []*mcp.Envelope) { +func generateServiceEntries(t *testing.T) (resources []*mcp.Resource) { port := 1 createTime, err := createTime() @@ -234,27 +248,24 @@ func generateServiceEntries(t *testing.T) (envelopes []*mcp.Envelope) { servicePort := port + 1 backendPort := servicePort + 1 host := fmt.Sprintf("%s.%s", randName(), serviceDomain) - se, err := marshaledServiceEntry(servicePort, backendPort, host) + body, err := marshaledServiceEntry(servicePort, backendPort, host) if err != nil { t.Fatal(err) } - envelopes = append(envelopes, &mcp.Envelope{ + resources = append(resources, &mcp.Resource{ Metadata: &mcp.Metadata{ Name: fmt.Sprintf("serviceEntry-%s", randName()), CreateTime: createTime, }, - Resource: &types.Any{ - TypeUrl: fmt.Sprintf("type.googleapis.com/%s", model.ServiceEntry.MessageName), - Value: se, - }, + Body: body, }) port = backendPort } - return envelopes + return resources } -func marshaledServiceEntry(servicePort, backendPort int, host string) ([]byte, error) { +func marshaledServiceEntry(servicePort, backendPort int, host string) (*types.Any, error) { serviceEntry := &networking.ServiceEntry{ Hosts: []string{host}, Ports: []*networking.Port{ @@ -277,10 +288,10 @@ func marshaledServiceEntry(servicePort, backendPort int, host string) ([]byte, e }, } - return proto.Marshal(serviceEntry) + return types.MarshalAny(serviceEntry) } -func generateGateway() ([]*mcp.Envelope, error) { +func generateGateway() ([]*mcp.Resource, error) { gateway := 
&networking.Gateway{ Servers: []*networking.Server{ &networking.Server{ @@ -294,7 +305,7 @@ func generateGateway() ([]*mcp.Envelope, error) { }, } - gw, err := proto.Marshal(gateway) + body, err := types.MarshalAny(gateway) if err != nil { return nil, err } @@ -304,16 +315,13 @@ func generateGateway() ([]*mcp.Envelope, error) { return nil, err } - return []*mcp.Envelope{ + return []*mcp.Resource{ { Metadata: &mcp.Metadata{ Name: fmt.Sprintf("gateway-%s", randName()), CreateTime: createTime, }, - Resource: &types.Any{ - TypeUrl: "type.googleapis.com/istio.networking.v1alpha3.Gateway", - Value: gw, - }, + Body: body, }, }, nil } diff --git a/tests/e2e/tests/pilot/pilot_test.go b/tests/e2e/tests/pilot/pilot_test.go index a37dd0814135..554560a8b049 100644 --- a/tests/e2e/tests/pilot/pilot_test.go +++ b/tests/e2e/tests/pilot/pilot_test.go @@ -355,18 +355,18 @@ func getApps() []framework.App { appsWithSidecar = []string{"a-", "b-", "c-", "d-", "headless-"} return []framework.App{ // deploy a healthy mix of apps, with and without proxy - getApp("t", "t", 8080, 80, 9090, 90, 7070, 70, "unversioned", false, false, false, true), - getApp("a", "a", 8080, 80, 9090, 90, 7070, 70, "v1", true, false, true, true), - getApp("b", "b", 80, 8080, 90, 9090, 70, 7070, "unversioned", true, false, true, true), - getApp("c-v1", "c", 80, 8080, 90, 9090, 70, 7070, "v1", true, false, true, true), - getApp("c-v2", "c", 80, 8080, 90, 9090, 70, 7070, "v2", true, false, true, false), - getApp("d", "d", 80, 8080, 90, 9090, 70, 7070, "per-svc-auth", true, false, true, true), - getApp("headless", "headless", 80, 8080, 10090, 19090, 70, 7070, "unversioned", true, true, true, true), + getApp("t", "t", 1, 8080, 80, 9090, 90, 7070, 70, "unversioned", false, false, false, true), + getApp("a", "a", 1, 8080, 80, 9090, 90, 7070, 70, "v1", true, false, true, true), + getApp("b", "b", 1, 80, 8080, 90, 9090, 70, 7070, "unversioned", true, false, true, true), + getApp("c-v1", "c", 1, 80, 8080, 90, 9090, 70, 
7070, "v1", true, false, true, true), + getApp("c-v2", "c", 1, 80, 8080, 90, 9090, 70, 7070, "v2", true, false, true, false), + getApp("d", "d", 1, 80, 8080, 90, 9090, 70, 7070, "per-svc-auth", true, false, true, true), + getApp("headless", "headless", 1, 80, 8080, 10090, 19090, 70, 7070, "unversioned", true, true, true, true), getStatefulSet("statefulset", 19090, true), } } -func getApp(deploymentName, serviceName string, port1, port2, port3, port4, port5, port6 int, +func getApp(deploymentName, serviceName string, replicas, port1, port2, port3, port4, port5, port6 int, version string, injectProxy bool, headless bool, serviceAccount bool, createService bool) framework.App { // TODO(nmittler): Consul does not support management ports ... should we support other registries? healthPort := "true" @@ -379,6 +379,7 @@ func getApp(deploymentName, serviceName string, port1, port2, port3, port4, port "Tag": tc.Kube.PilotTag(), "service": serviceName, "deployment": deploymentName, + "replicas": strconv.Itoa(replicas), "port1": strconv.Itoa(port1), "port2": strconv.Itoa(port2), "port3": strconv.Itoa(port3), diff --git a/tests/e2e/tests/pilot/pod_churn_test.go b/tests/e2e/tests/pilot/pod_churn_test.go new file mode 100644 index 000000000000..92ea7042043e --- /dev/null +++ b/tests/e2e/tests/pilot/pod_churn_test.go @@ -0,0 +1,384 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package pilot + +import ( + "fmt" + "sort" + "strings" + "sync" + "testing" + "time" + + multierror "github.com/hashicorp/go-multierror" + + "istio.io/istio/pkg/log" + "istio.io/istio/pkg/test/application/echo" + "istio.io/istio/pkg/test/application/echo/proto" + "istio.io/istio/pkg/test/kube" + "istio.io/istio/tests/e2e/framework" +) + +const ( + churnAppName = "churn" + churnAppSelector = "app=" + churnAppName + churnAppURL = "http://" + churnAppName + churnAppReplicas = 5 + churnAppMinReady = 2 + churnTrafficDuration = 1 * time.Minute + churnTrafficQPS = 1000 + + // The number of traffic threads that we should run on each source App pod. + // We purposely assign multiple threads to the same pod due to the fact that + // Envoy's default circuit breaker parameter "max_retries" is set to 3, + // which means that only 3 retry attempts may be on-going concurrently per + // Envoy. If exceeded, the circuit breaker itself will cause 503s. Here we + // purposely use > 3 sending threads to each pod (i.e. Envoy) to ensure + // that these 503s do not occur (i.e. Envoy is properly configured). + churnTrafficThreadsPerPod = 10 +) + +// TestPodChurn creates a replicated app and periodically brings down one pod. Traffic is sent during the pod churn +// to ensure that no 503s are received by the application. This verifies that Envoy is properly retrying disconnected +// endpoints. +func TestPodChurn(t *testing.T) { + // Deploy the churn app + app := newChurnApp(t) + defer app.stop() + + // Start a pod churner on the app. The pod churner will periodically bring down one of the pods for the app. + podChurner := newPodChurner(tc.Kube.Namespace, churnAppSelector, churnAppMinReady, tc.Kube.KubeAccessor) + defer podChurner.stop() + + // Start a bunch of threads to send traffic to the churn app from the given source apps. 
+ srcApps := []string{"a", "b"} + numTgs := len(srcApps) * churnTrafficThreadsPerPod + tgs := make([]trafficGenerator, 0, numTgs) + threadQPS := churnTrafficQPS / numTgs + for _, srcApp := range srcApps { + for i := 0; i < churnTrafficThreadsPerPod; i++ { + tg := newTrafficGenerator(t, srcApp, uint16(threadQPS)) + tgs = append(tgs, tg) + + // Start sending traffic. + tg.start() + defer tg.close() + } + } + + // Merge the results from all traffic generators. + aggregateResults := make(map[string]int) + numBadResponses := 0 + totalResponses := 0 + for _, tg := range tgs { + results, err := tg.awaitResults() + if err != nil { + t.Fatal(err) + } + for code, count := range results { + totalResponses += count + if code != httpOK { + numBadResponses += count + } + aggregateResults[code] = aggregateResults[code] + count + } + } + + // TODO(https://github.com/istio/istio/issues/9818): Connection_terminated events will not be retried by Envoy. + // Until we implement graceful shutdown of Envoy (with lameducking), we will not be able to completely + // eliminate 503s in pod churning cases. + maxBadPercentage := 0.5 + badPercentage := (float64(numBadResponses) / float64(totalResponses)) * 100.0 + if badPercentage > maxBadPercentage { + t.Fatalf("received too many bad response codes: %v", aggregateResults) + } else { + t.Logf("successfully survived pod churning! Results: %v", aggregateResults) + } +} + +// trafficGenerator generates traffic from a source app to the churn app +type trafficGenerator interface { + // start generating traffic + start() + + // awaitResults waits for the traffic generation to end and returns the results or error. + awaitResults() (map[string]int, error) + + // close any resources + close() +} + +// newTrafficGenerator creates a trafficGenerator that sends traffic from the given source app to the churn app.
+func newTrafficGenerator(t *testing.T, srcApp string, qps uint16) trafficGenerator { + wg := &sync.WaitGroup{} + wg.Add(1) + + return &trafficGeneratorImpl{ + srcApp: srcApp, + qps: qps, + forwarder: forwardToGrpcPort(t, srcApp), + wg: wg, + results: make(map[string]int), + } +} + +func forwardToGrpcPort(t *testing.T, app string) kube.PortForwarder { + // Find the gRPC port for this service. + svc, err := tc.Kube.KubeAccessor.GetService(tc.Kube.Namespace, app) + if err != nil { + t.Fatal(err) + } + grpcPort := 0 + for _, port := range svc.Spec.Ports { + if port.Name == "grpc" { + grpcPort = int(port.TargetPort.IntVal) + } + } + if grpcPort == 0 { + t.Fatalf("unable to locate gRPC port for service %s", app) + } + + // Get the pods for the source app. + pods := tc.Kube.GetAppPods(primaryCluster)[app] + if len(pods) == 0 { + t.Fatalf("missing pod names for app %q from %s cluster", app, primaryCluster) + } + + // Create a port forwarder so that we can send commands telling app "a" to talk to the churn app. + forwarder, err := tc.Kube.KubeAccessor.NewPortForwarder(&kube.PodSelectOptions{ + PodNamespace: tc.Kube.Namespace, + PodName: pods[0], + }, 0, uint16(grpcPort)) + if err != nil { + t.Fatal(err) + } + + if err := forwarder.Start(); err != nil { + t.Fatal(err) + } + return forwarder +} + +// trafficGeneratorImpl is an implementation of the trafficGenerator interface. +type trafficGeneratorImpl struct { + srcApp string + qps uint16 + forwarder kube.PortForwarder + wg *sync.WaitGroup + + results map[string]int + err error +} + +// close implements the trafficGenerator interface. +func (g *trafficGeneratorImpl) close() { + if g.forwarder != nil { + _ = g.forwarder.Close() + g.forwarder = nil + } +} + +// awaitResults implements the trafficGenerator interface. +func (g *trafficGeneratorImpl) awaitResults() (map[string]int, error) { + g.wg.Wait() + + if g.err != nil { + return nil, g.err + } + + return g.results, nil +} + +// start implements the trafficGenerator interface.
+func (g *trafficGeneratorImpl) start() { + // Command to send traffic for 1s. + request := proto.ForwardEchoRequest{ + Url: churnAppURL, + Count: int32(g.qps), + Qps: int32(g.qps), + } + + // Create the results for this thread. + go func() { + defer g.wg.Done() + + // Create a gRPC client to the source pod. + client, err := echo.NewClient(g.forwarder.Address()) + if err != nil { + g.err = multierror.Append(g.err, err) + return + } + defer func() { + _ = client.Close() + }() + + // Send traffic from the source pod to the churned pods for a period of time. + timer := time.NewTimer(churnTrafficDuration) + + responseCount := 0 + for { + select { + case <-timer.C: + if responseCount == 0 { + g.err = fmt.Errorf("service %s unable to communicate to churn app", g.srcApp) + } + return + default: + responses, err := client.ForwardEcho(&request) + if err != nil { + // Retry on RPC errors. + log.Infof("retrying failed control RPC to app: %s. Error: %v", g.srcApp, err) + continue + } + responseCount += len(responses) + for _, resp := range responses { + g.results[resp.Code] = g.results[resp.Code] + 1 + } + } + } + }() +} + +// churnApp is a multi-replica app, the pods for which will be periodically deleted. +type churnApp struct { + app *framework.App +} + +// newChurnApp creates and deploys the churnApp +func newChurnApp(t *testing.T) *churnApp { + a := getApp(churnAppName, + churnAppName, + churnAppReplicas, + 8080, + 80, + 9090, + 90, + 7070, + 70, + "v1", + true, + false, + true, + true) + app := &churnApp{ + app: &a, + } + if err := tc.Kube.AppManager.DeployApp(app.app); err != nil { + t.Fatal(err) + return nil + } + + fetchFn := tc.Kube.KubeAccessor.NewPodFetch(tc.Kube.Namespace, churnAppSelector) + if err := tc.Kube.KubeAccessor.WaitUntilPodsAreReady(fetchFn); err != nil { + app.stop() + t.Fatal(err) + return nil + } + return app +} + +// stop the churn app (i.e. 
undeploy) +func (a *churnApp) stop() { + if a.app != nil { + _ = tc.Kube.AppManager.UndeployApp(a.app) + a.app = nil + } +} + +// podChurner periodically deletes pods in the churn app. +type podChurner struct { + accessor *kube.Accessor + namespace string + labelSelector string + minReplicas int + stopCh chan struct{} +} + +// newPodChurner creates and starts the podChurner. +func newPodChurner(namespace, labelSelector string, minReplicas int, accessor *kube.Accessor) *podChurner { + churner := &podChurner{ + accessor: accessor, + namespace: namespace, + labelSelector: labelSelector, + minReplicas: minReplicas, + stopCh: make(chan struct{}), + } + + // Start churning. + go func() { + // Periodically check to see if we need to kill a pod. + ticker := time.NewTicker(500 * time.Millisecond) + + for { + select { + case <-churner.stopCh: + ticker.Stop() + return + case <-ticker.C: + readyPods := churner.getReadyPods() + if len(readyPods) <= churner.minReplicas { + // Wait until we have at least minReplicas before taking one down. + continue + } + + // Delete the oldest ready pod. + _ = churner.accessor.DeletePod(churner.namespace, readyPods[0]) + } + } + }() + + return churner +} + +// stop the pod churner +func (c *podChurner) stop() { + c.stopCh <- struct{}{} +} + +func (c *podChurner) getReadyPods() []string { + pods, err := c.accessor.GetPods(c.namespace, c.labelSelector) + if err != nil { + log.Warnf("failed getting pods for ns=%s, label selector=%s, %v", c.namespace, c.labelSelector, err) + return make([]string, 0) + } + + // Sort the pods oldest first. + sort.Slice(pods, func(i, j int) bool { + p1 := pods[i] + p2 := pods[j] + + t1 := &p1.CreationTimestamp + t2 := &p2.CreationTimestamp + + if t1.Before(t2) { + return true + } + if t2.Before(t1) { + return false + } + + // Use name as the tie-breaker + return strings.Compare(p1.Name, p2.Name) < 0 + }) + + // Return the names of all pods that are ready. 
+ out := make([]string, 0, len(pods)) + for _, p := range pods { + if e := kube.CheckPodReady(&p); e == nil { + out = append(out, p.Name) + } + } + return out +} diff --git a/tests/e2e/tests/pilot/routing_test.go b/tests/e2e/tests/pilot/routing_test.go index 2a7adb6c604f..af55a888ef55 100644 --- a/tests/e2e/tests/pilot/routing_test.go +++ b/tests/e2e/tests/pilot/routing_test.go @@ -791,3 +791,133 @@ func TestDestinationRuleConfigScope(t *testing.T) { } }) } + +func TestSidecarScope(t *testing.T) { + var cfgs []*deployableConfig + applyRuleFunc := func(t *testing.T, ruleYamls map[string][]string) { + // Delete the previous rule if there was one. No delay on the teardown, since we're going to apply + // a delay when we push the new config. + for _, cfg := range cfgs { + if cfg != nil { + if err := cfg.TeardownNoDelay(); err != nil { + t.Fatal(err) + } + cfg = nil + } + } + + cfgs = make([]*deployableConfig, 0) + for ns, rules := range ruleYamls { + // Apply the new rules in the namespace + cfg := &deployableConfig{ + Namespace: ns, + YamlFiles: rules, + kubeconfig: tc.Kube.KubeConfig, + } + if err := cfg.Setup(); err != nil { + t.Fatal(err) + } + cfgs = append(cfgs, cfg) + } + } + // Upon function exit, delete the active rule. 
+ defer func() { + for _, cfg := range cfgs { + if cfg != nil { + _ = cfg.Teardown() + } + } + }() + + rules := make(map[string][]string) + rules[tc.Kube.Namespace] = []string{"testdata/networking/v1alpha3/sidecar-scope-ns1-ns2.yaml"} + rules["ns1"] = []string{ + "testdata/networking/v1alpha3/service-entry-http-scope-public.yaml", + "testdata/networking/v1alpha3/service-entry-http-scope-private.yaml", + "testdata/networking/v1alpha3/virtualservice-http-scope-public.yaml", + "testdata/networking/v1alpha3/virtualservice-http-scope-private.yaml", + } + rules["ns2"] = []string{"testdata/networking/v1alpha3/service-entry-tcp-scope-public.yaml"} + + // Create the namespaces and install the rules in each namespace + for _, ns := range []string{"ns1", "ns2"} { + if err := util.CreateNamespace(ns, tc.Kube.KubeConfig); err != nil { + t.Errorf("Unable to create namespace %s: %v", ns, err) + } + defer func(ns string) { + if err := util.DeleteNamespace(ns, tc.Kube.KubeConfig); err != nil { + t.Errorf("Failed to delete namespace %s", ns) + } + }(ns) + } + applyRuleFunc(t, rules) + // Wait a few seconds so that the older proxy listeners get overwritten + time.Sleep(10 * time.Second) + + cases := []struct { + testName string + reqURL string + host string + expectedHdr *regexp.Regexp + reachable bool + onFailure func() + }{ + { + testName: "cannot reach services if not imported", + reqURL: "http://c/a", + host: "c", + reachable: false, + }, + { + testName: "ns1: http://bookinfo.com:9999 reachable", + reqURL: "http://127.255.0.1:9999/a", + host: "bookinfo.com:9999", + expectedHdr: regexp.MustCompile("(?i) scope=public"), + reachable: true, + }, + { + testName: "ns1: http://private.com:9999 not reachable", + reqURL: "http://127.255.0.1:9999/a", + host: "private.com:9999", + reachable: false, + }, + { + testName: "ns2: service.tcp.com:8888 reachable", + reqURL: "http://127.255.255.11:8888/a", + host: "tcp.com", + reachable: true, + }, + { + testName: "ns1: bookinfo.com:9999 reachable 
via egress TCP listener 7.7.7.7:23145", + reqURL: "http://7.7.7.7:23145/a", + host: "bookinfo.com:9999", + reachable: true, + }, + } + + t.Run("v1alpha3", func(t *testing.T) { + for _, c := range cases { + for cluster := range tc.Kube.Clusters { + testName := fmt.Sprintf("%s from %s cluster", c.testName, cluster) + runRetriableTest(t, testName, 5, func() error { + resp := ClientRequest(cluster, "a", c.reqURL, 1, fmt.Sprintf("-key Host -val %s", c.host)) + if c.reachable && !resp.IsHTTPOk() { + return fmt.Errorf("cannot reach %s, %v", c.reqURL, resp.Code) + } + + if !c.reachable && resp.IsHTTPOk() { + return fmt.Errorf("expected request %s to fail, but got success", c.reqURL) + } + + if c.reachable && c.expectedHdr != nil { + found := len(c.expectedHdr.FindAllStringSubmatch(resp.Body, -1)) + if found != 1 { + return fmt.Errorf("public virtualService for %s is not in effect", c.reqURL) + } + } + return nil + }, c.onFailure) + } + } + }) +} diff --git a/tests/e2e/tests/pilot/testdata/app.yaml.tmpl b/tests/e2e/tests/pilot/testdata/app.yaml.tmpl index 9211d2aed20a..8760569da050 100644 --- a/tests/e2e/tests/pilot/testdata/app.yaml.tmpl +++ b/tests/e2e/tests/pilot/testdata/app.yaml.tmpl @@ -50,17 +50,18 @@ metadata: name: {{.deployment}} spec: - replicas: 1 + replicas: {{.replicas}} template: metadata: labels: app: {{.service}} version: {{.version}} name: {{.deployment}} -{{if eq .injectProxy "false"}} annotations: +{{if eq .injectProxy "false"}} sidecar.istio.io/inject: "false" {{end}} + sidecar.istio.io/userVolumeMount: '{"uds":{"mountPath":"/var/run/uds"}}' spec: {{if eq .serviceAccount "true"}} serviceAccountName: {{.service}} @@ -69,6 +70,9 @@ spec: - name: app image: {{.Hub}}/app:{{.Tag}} imagePullPolicy: {{.ImagePullPolicy}} + volumeMounts: + - mountPath: /var/run/uds + name: uds args: - --port - "{{.port1}}" @@ -82,6 +86,8 @@ spec: - "{{.port5}}" - --grpc - "{{.port6}}" + - --uds + - "/var/run/uds/serverSock" {{if eq .healthPort "true"}} - --port - "3333" @@ 
-112,4 +118,8 @@ spec: periodSeconds: 10 failureThreshold: 10 {{end}} + volumes: + - emptyDir: + medium: Memory + name: uds --- diff --git a/tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-http-scope-private.yaml b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-http-scope-private.yaml new file mode 100644 index 000000000000..4ae21f583528 --- /dev/null +++ b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-http-scope-private.yaml @@ -0,0 +1,19 @@ +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: service-entry-http-scope-private +spec: + hosts: + - "private.com" + ports: + - number: 9999 + name: http + protocol: HTTP + resolution: DNS + endpoints: + # Rather than relying on an external host that might become unreachable (causing test failures) + # we can mock the external endpoint using service t which has no sidecar. + - address: t.istio-system.svc.cluster.local # TODO: this is brittle + ports: + http: 8080 + configScope: PRIVATE diff --git a/tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-http-scope-public.yaml b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-http-scope-public.yaml new file mode 100644 index 000000000000..293d4e07f377 --- /dev/null +++ b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-http-scope-public.yaml @@ -0,0 +1,19 @@ +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: service-entry-http-scope-public +spec: + hosts: + - "bookinfo.com" + ports: + - number: 9999 + name: http + protocol: HTTP + resolution: DNS + endpoints: + # Rather than relying on an external host that might become unreachable (causing test failures) + # we can mock the external endpoint using service t which has no sidecar. 
+ - address: t.istio-system.svc.cluster.local # TODO: this is brittle + ports: + http: 8080 + configScope: PUBLIC diff --git a/tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-tcp-scope-public.yaml b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-tcp-scope-public.yaml new file mode 100644 index 000000000000..93599489d1eb --- /dev/null +++ b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/service-entry-tcp-scope-public.yaml @@ -0,0 +1,21 @@ +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: service-entry-tcp-scope-public +spec: + hosts: + - "service.tcp.com" + addresses: + - "127.255.255.11" + ports: + - number: 8888 + name: tcp + protocol: TCP + resolution: DNS + endpoints: + # Rather than relying on an external host that might become unreachable (causing test failures) + # we can mock the external endpoint using service t which has no sidecar. + - address: t.istio-system.svc.cluster.local # TODO: this is brittle + ports: + tcp: 8080 + configScope: PUBLIC diff --git a/tests/e2e/tests/pilot/testdata/networking/v1alpha3/sidecar-scope-ns1-ns2.yaml b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/sidecar-scope-ns1-ns2.yaml new file mode 100644 index 000000000000..f0f000605566 --- /dev/null +++ b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/sidecar-scope-ns1-ns2.yaml @@ -0,0 +1,24 @@ +apiVersion: networking.istio.io/v1alpha3 +kind: Sidecar +metadata: + name: sidecar-scope-ns1-ns2 +spec: +# ingress: +# - port: +# number: 12345 +# protocol: HTTP +# name: inbound-http +# defaultEndpoint: ":80" + egress: + - port: + number: 23145 + protocol: TCP + name: outbound-tcp + bind: 7.7.7.7 + hosts: + - "*/bookinfo.com" + # bookinfo.com is on port 9999. + - hosts: + - "ns1/*" + - "*/*.tcp.com" + # Notice missing istio-system. 
This means a can't reach b diff --git a/tests/e2e/tests/pilot/testdata/networking/v1alpha3/virtualservice-http-scope-private.yaml b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/virtualservice-http-scope-private.yaml new file mode 100644 index 000000000000..52f099944a65 --- /dev/null +++ b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/virtualservice-http-scope-private.yaml @@ -0,0 +1,16 @@ +apiVersion: networking.istio.io/v1alpha3 +kind: VirtualService +metadata: + name: virtual-service-scope-private +spec: + configScope: PRIVATE + hosts: + - "bookinfo.com" + http: + - route: + - destination: + host: "bookinfo.com" + headers: + request: + add: + scope: private diff --git a/tests/e2e/tests/pilot/testdata/networking/v1alpha3/virtualservice-http-scope-public.yaml b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/virtualservice-http-scope-public.yaml new file mode 100644 index 000000000000..9ae127ba4b61 --- /dev/null +++ b/tests/e2e/tests/pilot/testdata/networking/v1alpha3/virtualservice-http-scope-public.yaml @@ -0,0 +1,20 @@ +apiVersion: networking.istio.io/v1alpha3 +kind: VirtualService +metadata: + name: virtual-service-scope-public +spec: + configScope: PUBLIC + hosts: + - "bookinfo.com" + http: + - route: + - destination: + host: "bookinfo.com" + headers: + request: + add: + scope: public + tcp: + - route: + - destination: + host: "bookinfo.com" diff --git a/tests/integration2/README.md b/tests/integration2/README.md index ec481a43ba28..0cde6e415756 100644 --- a/tests/integration2/README.md +++ b/tests/integration2/README.md @@ -36,7 +36,7 @@ The test framework currently supports two environments: When running tests, only one environment can be specified: ```console -$ go test ./... -istio.test.env local +$ go test ./... -istio.test.env native $ go test ./...
-istio.test.env kubernetes ``` diff --git a/tests/integration2/citadel/citadel_test_util.go b/tests/integration2/citadel/citadel_test_util.go index f29d1036c3cd..dd63155a705a 100644 --- a/tests/integration2/citadel/citadel_test_util.go +++ b/tests/integration2/citadel/citadel_test_util.go @@ -18,6 +18,8 @@ import ( "crypto/x509" "fmt" + "istio.io/istio/pkg/spiffe" + v1 "k8s.io/api/core/v1" "istio.io/istio/security/pkg/k8s/controller" @@ -39,7 +41,7 @@ func ExamineSecret(secret *v1.Secret) error { } } - expectedID, err := util.GenSanURI(secret.GetNamespace(), "default") + expectedID, err := spiffe.GenSpiffeURI(secret.GetNamespace(), "default") if err != nil { return err } diff --git a/tests/integration2/citadel/secret_creation_test.go b/tests/integration2/citadel/secret_creation_test.go index c8ac1d01ed69..74743fa02e61 100644 --- a/tests/integration2/citadel/secret_creation_test.go +++ b/tests/integration2/citadel/secret_creation_test.go @@ -15,6 +15,7 @@ package citadel import ( + "os" "testing" "istio.io/istio/pkg/test/framework" @@ -35,7 +36,7 @@ func TestSecretCreationKubernetes(t *testing.T) { // Test the existence of istio.default secret. 
s, err := c.WaitForSecretToExist() if err != nil { - t.Error(err) + t.Fatal(err) } t.Log(`checking secret "istio.default" is correctly created`) @@ -61,5 +62,6 @@ func TestSecretCreationKubernetes(t *testing.T) { } func TestMain(m *testing.M) { - framework.Run("citadel_test", m) + rt, _ := framework.Run("citadel_test", m) + os.Exit(rt) } diff --git a/tests/integration2/examples/basic/basic_test.go b/tests/integration2/examples/basic/basic_test.go index 34894d88e893..23f917e4f908 100644 --- a/tests/integration2/examples/basic/basic_test.go +++ b/tests/integration2/examples/basic/basic_test.go @@ -16,6 +16,9 @@ package basic import ( + "os" + "testing" + "istio.io/istio/pkg/test/framework" "istio.io/istio/pkg/test/framework/api/components" "istio.io/istio/pkg/test/framework/api/ids" @@ -24,7 +27,8 @@ import ( // To opt-in to the test framework, implement a TestMain, and call test.Run. func TestMain(m *testing.M) { - framework.Run("basic_test", m) + rt, _ := framework.Run("basic_test", m) + os.Exit(rt) } func TestBasic(t *testing.T) { diff --git a/tests/integration2/galley/conversion/conversion_test.go b/tests/integration2/galley/conversion/conversion_test.go index 10679b4a1fca..219972b729c9 100644 --- a/tests/integration2/galley/conversion/conversion_test.go +++ b/tests/integration2/galley/conversion/conversion_test.go @@ -45,9 +45,9 @@ func TestConversion(t *testing.T) { // TODO: Limit to Native environment until the Kubernetes environment is supported in the Galley // component - ctx.Require(lifecycle.Suite, &descriptors.NativeEnvironment) + ctx.RequireOrSkip(t, lifecycle.Suite, &descriptors.NativeEnvironment) - ctx.Require(lifecycle.Test, &ids.Galley) + ctx.RequireOrFail(t, lifecycle.Test, &ids.Galley) gal := components.GetGalley(ctx, t) @@ -94,9 +94,9 @@ func runTest(t *testing.T, fset *testdata.FileSet, gal components.Galley) { t.Fatalf("unable to apply config to Galley: %v", err) } - for typeURL, e := range expected { - if err = gal.WaitForSnapshot(typeURL, 
e...); err != nil { - t.Errorf("Error waiting for %s:\n%v\n", typeURL, err) + for collection, e := range expected { + if err = gal.WaitForSnapshot(collection, e...); err != nil { + t.Errorf("Error waiting for %s:\n%v\n", collection, err) } } } diff --git a/tests/integration2/galley/conversion/main_test.go b/tests/integration2/galley/conversion/main_test.go index 58e99e9676a4..6012b1f60d3c 100644 --- a/tests/integration2/galley/conversion/main_test.go +++ b/tests/integration2/galley/conversion/main_test.go @@ -15,11 +15,13 @@ package conversion import ( + "os" "testing" "istio.io/istio/pkg/test/framework" ) func TestMain(m *testing.M) { - framework.Run("galley_conversion_test", m) + rt, _ := framework.Run("galley_conversion_test", m) + os.Exit(rt) } diff --git a/tests/integration2/galley/validation/main_test.go b/tests/integration2/galley/validation/main_test.go index c278b62ab522..e0c2c1be213b 100644 --- a/tests/integration2/galley/validation/main_test.go +++ b/tests/integration2/galley/validation/main_test.go @@ -15,11 +15,13 @@ package validation import ( + "os" "testing" "istio.io/istio/pkg/test/framework" ) func TestMain(m *testing.M) { - framework.Run("galley_validation_test", m) + rt, _ := framework.Run("galley_validation_test", m) + os.Exit(rt) } diff --git a/tests/integration2/mixer/check_test.go b/tests/integration2/mixer/check_test.go index b18b1da1cfed..289dd1eb9cf7 100644 --- a/tests/integration2/mixer/check_test.go +++ b/tests/integration2/mixer/check_test.go @@ -95,12 +95,14 @@ apiVersion: "config.istio.io/v1alpha2" kind: checknothing metadata: name: checknothing1 + namespace: {{.TestNamespace}} spec: --- apiVersion: "config.istio.io/v1alpha2" kind: rule metadata: name: rule1 + namespace: {{.TestNamespace}} spec: actions: - handler: handler1.bypass diff --git a/tests/integration2/mixer/main_test.go b/tests/integration2/mixer/main_test.go index bfced2c4cede..8b6b2bc14e34 100644 --- a/tests/integration2/mixer/main_test.go +++ 
b/tests/integration2/mixer/main_test.go @@ -15,11 +15,13 @@ package mixer import ( + "os" "testing" "istio.io/istio/pkg/test/framework" ) func TestMain(m *testing.M) { - framework.Run("mixer_test", m) + rt, _ := framework.Run("mixer_test", m) + os.Exit(rt) } diff --git a/tests/integration2/mixer/scenarios_test.go b/tests/integration2/mixer/scenarios_test.go index 3831e25d1e28..ba9aaac543d9 100644 --- a/tests/integration2/mixer/scenarios_test.go +++ b/tests/integration2/mixer/scenarios_test.go @@ -15,7 +15,9 @@ package mixer import ( + "fmt" "testing" + "time" "istio.io/istio/pkg/test" "istio.io/istio/pkg/test/framework" @@ -57,7 +59,7 @@ func testMetric(t *testing.T, ctx component.Repository, label string, labelValue lifecycle.Test, test.JoinConfigs( bookinfo.NetworkingBookinfoGateway.LoadOrFail(t), - bookinfo.NetworkingDestinationRuleAll.LoadOrFail(t), + bookinfo.GetDestinationRuleConfigFile(t, ctx).LoadOrFail(t), bookinfo.NetworkingVirtualServiceAllV1.LoadOrFail(t), )) @@ -65,10 +67,7 @@ func testMetric(t *testing.T, ctx component.Repository, label string, labelValue ingress := components.GetIngress(ctx, t) // Warm up - _, err := ingress.Call("/productpage") - if err != nil { - t.Fatal(err) - } + visitProductPage(ingress, 30*time.Second, 200, t) // Wait for some data to arrive. 
initial, err := prometheus.WaitForQuiesce(`istio_requests_total{%s=%q,response_code="200"}`, label, labelValue) @@ -77,10 +76,7 @@ func testMetric(t *testing.T, ctx component.Repository, label string, labelValue } t.Logf("Baseline established: initial = %v", initial) - _, err = ingress.Call("/productpage") - if err != nil { - t.Fatal(err) - } + visitProductPage(ingress, 30*time.Second, 200, t) final, err := prometheus.WaitForQuiesce(`istio_requests_total{%s=%q,response_code="200"}`, label, labelValue) if err != nil { @@ -88,13 +84,16 @@ func testMetric(t *testing.T, ctx component.Repository, label string, labelValue } t.Logf("Quiesced to: final = %v", final) + metricName := "istio_requests_total" i, err := prometheus.Sum(initial, nil) if err != nil { + t.Logf("prometheus values for %s:\n%s", metricName, promDump(prometheus, metricName)) t.Fatal(err) } f, err := prometheus.Sum(final, nil) if err != nil { + t.Logf("prometheus values for %s:\n%s", metricName, promDump(prometheus, metricName)) t.Fatal(err) } @@ -102,3 +101,101 @@ func testMetric(t *testing.T, ctx component.Repository, label string, labelValue t.Errorf("Bad metric value: got %f, want at least 1", f-i) } } + +// Port of TestTcpMetric +func TestTcpMetric(t *testing.T) { + ctx := framework.GetContext(t) + scope := lifecycle.Test + ctx.RequireOrSkip(t, scope, &descriptors.KubernetesEnvironment, &ids.Mixer, &ids.Prometheus, &ids.BookInfo, &ids.Ingress) + + bookInfo := components.GetBookinfo(ctx, t) + err := bookInfo.DeployRatingsV2(ctx, scope) + if err != nil { + t.Fatalf("Could not deploy ratings v2: %v", err) + } + err = bookInfo.DeployMongoDb(ctx, scope) + if err != nil { + t.Fatalf("Could not deploy mongodb: %v", err) + } + + mxr := components.GetMixer(ctx, t) + mxr.Configure(t, + lifecycle.Test, + test.JoinConfigs( + bookinfo.NetworkingBookinfoGateway.LoadOrFail(t), + bookinfo.GetDestinationRuleConfigFile(t, ctx).LoadOrFail(t), + bookinfo.NetworkingVirtualServiceAllV1.LoadOrFail(t), + 
bookinfo.NetworkingTCPDbRule.LoadOrFail(t), + )) + + prometheus := components.GetPrometheus(ctx, t) + ingress := components.GetIngress(ctx, t) + + visitProductPage(ingress, 30*time.Second, 200, t) + + query := fmt.Sprintf("sum(istio_tcp_sent_bytes_total{destination_app=\"%s\"})", "mongodb") + validateMetric(t, prometheus, query, "istio_tcp_sent_bytes_total") + + query = fmt.Sprintf("sum(istio_tcp_received_bytes_total{destination_app=\"%s\"})", "mongodb") + validateMetric(t, prometheus, query, "istio_tcp_received_bytes_total") + + query = fmt.Sprintf("sum(istio_tcp_connections_opened_total{destination_app=\"%s\"})", "mongodb") + validateMetric(t, prometheus, query, "istio_tcp_connections_opened_total") + + query = fmt.Sprintf("sum(istio_tcp_connections_closed_total{destination_app=\"%s\"})", "mongodb") + validateMetric(t, prometheus, query, "istio_tcp_connections_closed_total") +} + +func validateMetric(t *testing.T, prometheus components.Prometheus, query, metricName string) { + t.Helper() + want := float64(1) + + t.Logf("prometheus query: %s", query) + value, err := prometheus.WaitForQuiesce(query) + if err != nil { + t.Fatalf("Could not get metrics from prometheus: %v", err) + } + + got, err := prometheus.Sum(value, nil) + if err != nil { + t.Logf("value: %s", value.String()) + t.Logf("prometheus values for %s:\n%s", metricName, promDump(prometheus, metricName)) + t.Fatalf("Could not find metric value: %v", err) + } + t.Logf("%s: %f", metricName, got) + if got < want { + t.Logf("prometheus values for %s:\n%s", metricName, promDump(prometheus, metricName)) + t.Errorf("Bad metric value: got %f, want at least %f", got, want) + } +} + +func visitProductPage(ingress components.Ingress, timeout time.Duration, wantStatus int, t *testing.T) error { + start := time.Now() + for { + response, err := ingress.Call("/productpage") + if err != nil { + t.Logf("Unable to connect to product page: %v", err) + } + + status := response.Code + if status == wantStatus { + t.Logf("Got 
%d response from product page!", wantStatus) + return nil + } + + if time.Since(start) > timeout { + return fmt.Errorf("could not retrieve product page in %v: Last status: %v", timeout, status) + } + + time.Sleep(3 * time.Second) + } +} + +// promDump gets all of the recorded values for a metric by name and generates a report of the values. +// Used for debugging failures, to provide a comprehensive view of the traffic experienced. +func promDump(prometheus components.Prometheus, metric string) string { + if value, err := prometheus.WaitForQuiesce(fmt.Sprintf("%s{}", metric)); err == nil { + return value.String() + } + return "" +} diff --git a/tests/integration2/pilot/security/authn_permissive_test.go b/tests/integration2/pilot/security/authn_permissive_test.go new file mode 100644 index 000000000000..4a5021a6fc08 --- /dev/null +++ b/tests/integration2/pilot/security/authn_permissive_test.go @@ -0,0 +1,154 @@ +// Copyright 2018 Istio Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package security contains tests for Istio authentication policy.
+package security + +import ( + "os" + "reflect" + "testing" + + xdsapi "github.com/envoyproxy/go-control-plane/envoy/api/v2" + lis "github.com/envoyproxy/go-control-plane/envoy/api/v2/listener" + proto "github.com/gogo/protobuf/types" + + authnv1alpha "istio.io/api/authentication/v1alpha1" + "istio.io/istio/pilot/pkg/model" + authnplugin "istio.io/istio/pilot/pkg/networking/plugin/authn" + appst "istio.io/istio/pkg/test/framework/runtime/components/apps" + "istio.io/istio/pkg/test/framework/runtime/components/environment/native" + + "istio.io/istio/pkg/test/framework" + "istio.io/istio/pkg/test/framework/api/components" + "istio.io/istio/pkg/test/framework/api/descriptors" + "istio.io/istio/pkg/test/framework/api/ids" + "istio.io/istio/pkg/test/framework/api/lifecycle" +) + +// To opt in to the test framework, implement a TestMain and call framework.Run. +func TestMain(m *testing.M) { + rt, _ := framework.Run("authn_permissive_test", m) + os.Exit(rt) +} + +func verifyListener(listener *xdsapi.Listener, t *testing.T) bool { + t.Helper() + if listener == nil { + return false + } + if len(listener.ListenerFilters) == 0 { + return false + } + // We expect the tls_inspector filter to exist. + inspector := false + for _, lf := range listener.ListenerFilters { + if lf.Name == authnplugin.EnvoyTLSInspectorFilterName { + inspector = true + break + } + } + if !inspector { + return false + } + // Check filter chain match. + if len(listener.FilterChains) != 2 { + return false + } + mtlsChain := listener.FilterChains[0] + if !reflect.DeepEqual(mtlsChain.FilterChainMatch.ApplicationProtocols, []string{"istio"}) { + return false + } + if mtlsChain.TlsContext == nil { + return false + } + // The second (default) filter chain should have an empty filter chain match and no TLS context.
+ defaultChain := listener.FilterChains[1] + if !reflect.DeepEqual(defaultChain.FilterChainMatch, &lis.FilterChainMatch{}) { + return false + } + if defaultChain.TlsContext != nil { + return false + } + return true +} + +// TestAuthnPermissive checks that when the authentication policy is permissive, Pilot generates the expected +// listener configuration. +func TestAuthnPermissive(t *testing.T) { + ctx := framework.GetContext(t) + // TODO(incfly): make test able to run both on k8s and native when galley is ready. + ctx.RequireOrSkip(t, lifecycle.Test, &descriptors.NativeEnvironment, &ids.Apps) + env := native.GetEnvironmentOrFail(ctx, t) + _, err := env.ServiceManager.ConfigStore.Create( + model.Config{ + ConfigMeta: model.ConfigMeta{ + Type: model.AuthenticationPolicy.Type, + Name: "default", + Namespace: "istio-system", + }, + Spec: &authnv1alpha.Policy{ + // TODO: make policy work just applied to service a. + // Targets: []*authn.TargetSelector{ + // { + // Name: "a.istio-system.svc.local", + // }, + // }, + Peers: []*authnv1alpha.PeerAuthenticationMethod{{ + Params: &authnv1alpha.PeerAuthenticationMethod_Mtls{ + Mtls: &authnv1alpha.MutualTls{ + Mode: authnv1alpha.MutualTls_PERMISSIVE, + }, + }, + }}, + }, + }, + ) + if err != nil { + t.Error(err) + } + apps := components.GetApps(ctx, t) + a := apps.GetAppOrFail("a", t) + pilot := components.GetPilot(ctx, t) + req := appst.ConstructDiscoveryRequest(a, "type.googleapis.com/envoy.api.v2.Listener") + resp, err := pilot.CallDiscovery(req) + if err != nil { + t.Errorf("failed to call discovery %v", err) + } + for _, r := range resp.Resources { + foo := &xdsapi.Listener{} + if err := proto.UnmarshalAny(&r, foo); err != nil { + t.Errorf("failed to unmarshal %v", err) + } + if verifyListener(foo, t) { + return + } + } + t.Errorf("failed to find any listeners having multiplexing filter chain") +} + +// TestAuthenticationPermissiveE2E covers these cases end to end: +// app A to app B using plaintext (mTLS), +// app A to app B
using HTTPS (mTLS), +// app A to app B using plaintext (legacy), +// app A to app B using HTTPS (legacy). +// explained: app-to-app-protocol(sidecar-to-sidecar-protocol). "legacy" means +// no client sidecar, unable to send the "istio" ALPN indicator. +// TODO(incfly): implement this +// func TestAuthenticationPermissiveE2E(t *testing.T) { +// Steps: +// Configure authn policy. +// Wait for config propagation. +// Send HTTP requests between apps. +// } diff --git a/tests/integration2/qualification/main_test.go b/tests/integration2/qualification/main_test.go index 7077bd996b68..86414f929fba 100644 --- a/tests/integration2/qualification/main_test.go +++ b/tests/integration2/qualification/main_test.go @@ -15,11 +15,13 @@ package qualification import ( + "os" "testing" "istio.io/istio/pkg/test/framework" ) func TestMain(m *testing.M) { - framework.Run("qualification", m) + rt, _ := framework.Run("qualification", m) + os.Exit(rt) } diff --git a/tests/integration2/tests.mk b/tests/integration2/tests.mk index 661b5ca9bea2..6b71435d96a0 100644 --- a/tests/integration2/tests.mk +++ b/tests/integration2/tests.mk @@ -38,7 +38,7 @@ test.integration.all: test.integration test.integration.kube # Generate integration test targets for kubernetes environment. test.integration.%.kube: - $(GO) test -p 1 ${T} ./tests/integration2/$*/... ${_INTEGRATION_TEST_WORKDIR_FLAG} ${_INTEGRATION_TEST_LOGGING_FLAG} \ + $(GO) test -p 1 ${T} ./tests/integration2/$*/... ${_INTEGRATION_TEST_WORKDIR_FLAG} ${_INTEGRATION_TEST_LOGGING_FLAG} -timeout 30m \ --istio.test.env kubernetes \ --istio.test.kube.config ${INTEGRATION_TEST_KUBECONFIG} \ --istio.test.kube.deploy \ @@ -47,7 +47,7 @@ test.integration.%.kube: # Generate integration test targets for local environment. test.integration.%: - $(GO) test -p 1 ${T} ./tests/integration2/$*/... --istio.test.env local + $(GO) test -p 1 ${T} ./tests/integration2/$*/...
--istio.test.env native JUNIT_UNIT_TEST_XML ?= $(ISTIO_OUT)/junit_unit-tests.xml JUNIT_REPORT = $(shell which go-junit-report 2> /dev/null || echo "${ISTIO_BIN}/go-junit-report") @@ -60,7 +60,7 @@ TEST_PACKAGES = $(shell go list ./tests/integration2/... | grep -v /qualificatio test.integration.local: | $(JUNIT_REPORT) mkdir -p $(dir $(JUNIT_UNIT_TEST_XML)) set -o pipefail; \ - $(GO) test -p 1 ${T} ${TEST_PACKAGES} --istio.test.env local \ + $(GO) test -p 1 ${T} ${TEST_PACKAGES} --istio.test.env native \ 2>&1 | tee >($(JUNIT_REPORT) > $(JUNIT_UNIT_TEST_XML)) # All integration tests targeting Kubernetes environment. @@ -68,7 +68,7 @@ test.integration.local: | $(JUNIT_REPORT) test.integration.kube: | $(JUNIT_REPORT) mkdir -p $(dir $(JUNIT_UNIT_TEST_XML)) set -o pipefail; \ - $(GO) test -p 1 ${T} ${TEST_PACKAGES} ${_INTEGRATION_TEST_WORKDIR_FLAG} ${_INTEGRATION_TEST_LOGGING_FLAG} \ + $(GO) test -p 1 ${T} ${TEST_PACKAGES} ${_INTEGRATION_TEST_WORKDIR_FLAG} ${_INTEGRATION_TEST_LOGGING_FLAG} -timeout 30m \ --istio.test.env kubernetes \ --istio.test.kube.config ${INTEGRATION_TEST_KUBECONFIG} \ --istio.test.kube.deploy \ diff --git a/tests/istio.mk b/tests/istio.mk index 6da38e81f18e..b877fe3e4bdc 100644 --- a/tests/istio.mk +++ b/tests/istio.mk @@ -139,6 +139,8 @@ e2e_pilotv2_v1alpha3: | istioctl test/local/noauth/e2e_pilotv2 e2e_bookinfo_envoyv2_v1alpha3: | istioctl test/local/auth/e2e_bookinfo_envoyv2 +e2e_pilotv2_auth_sds: | istioctl test/local/auth/e2e_sds_pilotv2 + # This is used to keep a record of the test results. 
CAPTURE_LOG=| tee -a ${OUT_DIR}/tests/build-log.txt @@ -187,6 +189,14 @@ test/local/auth/e2e_pilotv2: out_dir generate_yaml_coredump # Run the pilot controller tests set -o pipefail; go test -v -timeout ${E2E_TIMEOUT}m ./tests/e2e/tests/controller ${CAPTURE_LOG} +# test with MTLS using key/cert distributed through SDS +test/local/auth/e2e_sds_pilotv2: out_dir generate_e2e_test_yaml + set -o pipefail; go test -v -timeout ${E2E_TIMEOUT}m ./tests/e2e/tests/pilot \ + --auth_enable=true --auth_sds_enable=true --ingress=false --rbac_enable=true --cluster_wide \ + ${E2E_ARGS} ${T} ${EXTRA_E2E_ARGS} ${CAPTURE_LOG} + # Run the pilot controller tests + set -o pipefail; go test -v -timeout ${E2E_TIMEOUT}m ./tests/e2e/tests/controller ${CAPTURE_LOG} + test/local/cloudfoundry/e2e_pilotv2: out_dir sudo apt update sudo apt install -y iptables @@ -248,8 +258,6 @@ helm/upgrade: istio-system install/kubernetes/helm/istio # Delete istio installed with helm -# Note that for Helm 2.10, the CRDs are not cleared helm/delete: ${HELM} delete --purge istio-system for i in install/kubernetes/helm/istio-init/files/crd-*; do kubectl delete -f $i; done - kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml diff --git a/tests/testdata/bootstrap_tmpl.json b/tests/testdata/bootstrap_tmpl.json index aa80f451d887..5b576526a52f 100644 --- a/tests/testdata/bootstrap_tmpl.json +++ b/tests/testdata/bootstrap_tmpl.json @@ -6,7 +6,7 @@ "zone": "testzone" }, "metadata": { - "foo": "bar" + {{ .EnvoyConfigOpt.meta_json_str }} } }, "stats_config": { diff --git a/tests/testdata/config/byon.yaml b/tests/testdata/config/byon.yaml index edd9068030c6..aed768a89ffb 100644 --- a/tests/testdata/config/byon.yaml +++ b/tests/testdata/config/byon.yaml @@ -3,6 +3,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: byon + namespace: testns spec: hosts: - byon.test.istio.io @@ -20,6 +21,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: byon + 
namespace: testns spec: hosts: - mybyon.test.istio.io @@ -34,6 +36,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: byon-docker + namespace: testns spec: hosts: - byon-docker.test.istio.io @@ -51,6 +54,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: wikipedia-range + namespace: testns spec: hosts: - www.wikipedia.org diff --git a/tests/testdata/config/destination-rule-all.yaml b/tests/testdata/config/destination-rule-all.yaml index 9a692bd640c0..b7c4bfa803aa 100644 --- a/tests/testdata/config/destination-rule-all.yaml +++ b/tests/testdata/config/destination-rule-all.yaml @@ -3,6 +3,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: destall + namespace: testns spec: hosts: - destall.default.svc.cluster.local @@ -20,6 +21,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: destall + namespace: testns spec: # DNS name, prefix wildcard, short name relative to context # IP or CIDR only for services in gateways diff --git a/tests/testdata/config/destination-rule-fqdn.yaml b/tests/testdata/config/destination-rule-fqdn.yaml index 4209a6dbbc2b..899d339a0df5 100644 --- a/tests/testdata/config/destination-rule-fqdn.yaml +++ b/tests/testdata/config/destination-rule-fqdn.yaml @@ -3,6 +3,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: fqdn + namespace: testns spec: host: www.webinf.info trafficPolicy: diff --git a/tests/testdata/config/destination-rule-passthrough.yaml b/tests/testdata/config/destination-rule-passthrough.yaml index 3613ef09c133..75fe3ba06245 100644 --- a/tests/testdata/config/destination-rule-passthrough.yaml +++ b/tests/testdata/config/destination-rule-passthrough.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: drpassthrough + namespace: testns spec: host: "*.foo.com" trafficPolicy: diff --git a/tests/testdata/config/destination-rule-ssl.yaml 
b/tests/testdata/config/destination-rule-ssl.yaml index dfdfe85ca76a..2c6e2654baca 100644 --- a/tests/testdata/config/destination-rule-ssl.yaml +++ b/tests/testdata/config/destination-rule-ssl.yaml @@ -3,6 +3,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: ssl-simple + namespace: testns spec: host: ssl1.webinf.info trafficPolicy: @@ -17,17 +18,9 @@ apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: drprefix + namespace: testns spec: - host: "ssl.webinf.info" + host: "random.webinf.info" trafficPolicy: loadBalancer: simple: RANDOM - tls: - mode: MUTUAL - clientCertificate: myCertFile.pem - privateKey: myPrivateKey.pem - # If omitted, no verification !!! ( can still be verified by receiver ??) - caCertificates: myCA.pem - sni: my.sni.com - subjectAltNames: - - foo.alt.name diff --git a/tests/testdata/config/egressgateway.yaml b/tests/testdata/config/egressgateway.yaml index 69eb848afddf..f97fd13bc12d 100644 --- a/tests/testdata/config/egressgateway.yaml +++ b/tests/testdata/config/egressgateway.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-egressgateway + namespace: testns spec: selector: # DO NOT CHANGE THESE LABELS diff --git a/tests/testdata/config/external_services.yaml b/tests/testdata/config/external_services.yaml index bcdee6d6053d..6e98a5deab09 100644 --- a/tests/testdata/config/external_services.yaml +++ b/tests/testdata/config/external_services.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: external-svc-extsvc + namespace: testns spec: hosts: - external.extsvc.com @@ -17,6 +18,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: external-service-1 + namespace: testns spec: host: external.extsvc.com # BUG: crash envoy @@ -30,6 +32,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: external-svc-ports + namespace: testns spec: hosts: - 
ports.extsvc.com @@ -50,6 +53,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: external-svc-dst + namespace: testns spec: hosts: - dst.extsvc.com @@ -67,6 +71,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: external-svc-ep + namespace: testns spec: hosts: - ep.extsvc.com diff --git a/tests/testdata/config/gateway-all.yaml b/tests/testdata/config/gateway-all.yaml index 9ae4ff5dd40c..c4c1da9112d9 100644 --- a/tests/testdata/config/gateway-all.yaml +++ b/tests/testdata/config/gateway-all.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: my-gateway + namespace: testns spec: selector: app: my-gateway-controller diff --git a/tests/testdata/config/gateway-tcp-a.yaml b/tests/testdata/config/gateway-tcp-a.yaml index 47b663397ec6..1f906a43d03f 100644 --- a/tests/testdata/config/gateway-tcp-a.yaml +++ b/tests/testdata/config/gateway-tcp-a.yaml @@ -3,6 +3,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: gateway-a + namespace: testns spec: selector: # DO NOT CHANGE THESE LABELS diff --git a/tests/testdata/config/ingress.yaml b/tests/testdata/config/ingress.yaml index 5c402b36e45b..11c2b9119f82 100644 --- a/tests/testdata/config/ingress.yaml +++ b/tests/testdata/config/ingress.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-ingress + namespace: testns spec: selector: istio: ingress @@ -27,6 +28,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: ingress + namespace: testns spec: # K8S Ingress rules are converted on the fly to a VirtualService. 
# The local tests may run without k8s - so for ingress we test with the diff --git a/tests/testdata/config/ingressgateway.yaml b/tests/testdata/config/ingressgateway.yaml index 82ff4b2d561e..7ce137257a97 100644 --- a/tests/testdata/config/ingressgateway.yaml +++ b/tests/testdata/config/ingressgateway.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-ingressgateway + namespace: testns spec: selector: # DO NOT CHANGE THESE LABELS diff --git a/tests/testdata/config/none.yaml b/tests/testdata/config/none.yaml new file mode 100644 index 000000000000..76ada59c42a2 --- /dev/null +++ b/tests/testdata/config/none.yaml @@ -0,0 +1,279 @@ +# All configs for 'none' namespace, used to test interception without iptables. +# In this mode the namespace isolation is required - the tests will also verify isolation +# It is important to update the tests in ../envoy/v2 which verify the number of generated listeners. + +# This is the first test using the new isolated model, you can use it as a template to create more +# isolated tests. It should be possible to also apply it to real k8s. + +# TODO: the IP addresses are not namespaced yet, so must be unique on the mesh (flat namespace) including in +# ServiceEntry tests. Removing deps on ip in progress. + +--- +# "None" mode depends on unique ports for each defined service or service entry. +# Not supported/require iptables: +# - TCP with 'addresses' field - needs iptables +# - resolution:NONE - 'original DST' - external services (for example https, ServiceEntry+address), stateful sets +# - TCP with resolution:DNS - same issue +# - + +# Local ServiceEntry (meshex, test) - the tests will use the IPs defined in the service when connecting. +# This works on local mode where K8S Service controller doesn't exist, and can be used for testing in k8s by a test +# pretending to have this address. 
+apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: s1tcp + namespace: none +spec: + hosts: + - s1tcp.none + + ports: + - number: 2000 + name: tcplocal + protocol: TCP + + location: MESH_INTERNAL + resolution: STATIC + + endpoints: + - address: 10.11.0.1 + ports: + tcplocal: 7070 + labels: + app: s1tcp +--- +# Another inbound service, http type. Should generate a http listener on :7071 +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: s1http + namespace: none +spec: + hosts: + - s1http.none + + ports: + - number: 2001 + name: httplocal + protocol: HTTP + + location: MESH_INTERNAL + resolution: STATIC + + endpoints: + - address: 10.11.0.1 + ports: + httplocal: 7071 + +--- + +## Sidecar selecting the s1tcp service, which is used in the test. +#apiVersion: networking.istio.io/v1alpha3 +#kind: Sidecar +#metadata: +# name: s1http +# namespace: none +#spec: +# workload_selector: +# labels: +# app: s1tcp +# +# ingress: +# - port: +# number: 7071 +# protocol: TCP +# name: tcplocal +# default_endpoint: 127.0.0.1:17071 +# - port: +# number: 7070 +# protocol: TCP +# name: tcplocal +# default_endpoint: 127.0.0.1:17070 + +--- + +# Default sidecar +apiVersion: networking.istio.io/v1alpha3 +kind: Sidecar +metadata: + name: default + namespace: none +spec: + egress: + - hosts: + - none/* + - default/test.default # TODO: without namespace it fails validation ! + # TODO: if we include the namespace, why do we need full name ? Importing regular services should work. + # Label selection seems to confuse the new code. 
+ ingress: + - port: + number: 7071 + protocol: HTTP + name: httplocal + default_endpoint: 127.0.0.1:17071 + - port: + number: 7070 + protocol: TCP + name: tcplocal + default_endpoint: 127.0.0.1:17070 + +--- + +# Regular TCP outbound cluster (Default MeshExternal = true, Resolution ClientSideLB) +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: s2 + namespace: none +spec: + hosts: + - s2.external.test.istio.io + + ports: + - number: 2005 + name: http-remote # To verify port name doesn't confuse pilot - protocol is TCP + protocol: TCP + resolution: STATIC + endpoints: + - address: 10.11.0.2 + ports: + http-remote: 7071 + - address: 10.11.0.3 + ports: + http-remote: 7072 + +--- +# Another TCP outbound cluster, resolution DNS (Default MeshExternal = true) +# Not supported, bind=false +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: s2dns + namespace: none +spec: + hosts: + - s2dns.external.test.istio.io + + ports: + - number: 2006 + protocol: TCP + name: tcp1 # TODO: is it optional ? Why not ? + resolution: DNS + +--- +# Outbound TCP cluster, resolution DNS - for a '.svc' (in cluster) service. +# As an optimization, this can be converted to EDS +# The new Sidecar is the recommended way to declare deps to mesh services - however +# DNS resolution is supposed to continue to work. +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: tcpmeshdns + namespace: none +spec: + hosts: + - tcpmeshdns.seexamples.svc + ports: + - number: 2007 + protocol: TCP + name: tcp1 + resolution: DNS + + +--- + +# Outbound TCP cluster, resolution STATIC - for a '.svc' (in cluster) service. +# This binds on each endpoint address ! 
+apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: tcpmeshstatic + namespace: none +spec: + hosts: + - tcpmeshstatic.seexamples.svc + ports: + - number: 2008 + protocol: TCP + name: tcp1 + resolution: STATIC + endpoints: + - address: 10.11.0.8 + ports: + tcp1: 7070 +--- +# Outbound TCP cluster, resolution STATIC - for a '.svc' (in cluster) service. +# This generates EDS +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: tcpmeshstaticint + namespace: none +spec: + hosts: + - tcpmeshstaticint.seexamples.svc + ports: + - number: 2009 + protocol: TCP + name: tcp1 + location: MESH_INTERNAL + resolution: STATIC + endpoints: + # NEEDED FOR VALIDATION - LIKELY BUG + - address: 10.11.0.9 + ports: + tcp1: 7070 +--- + +# TODO: in progress, should bind to 127.0.0.1 +# will resolve using SNI +# DNS or etc/hosts or code must override the address, but pass proper SNI +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: https + namespace: none +spec: + hosts: + # TODO: Bug: without isolation (in the main test) it causes 'duplicated cluster', envoy rejects config + # This will happen if this is defined in multiple namespaces in 1.0 + - www1.googleapis.com + - api1.facebook.com + location: MESH_EXTERNAL + ports: + - number: 443 + name: https + protocol: TLS + resolution: DNS +--- +# TODO: this should be auto-generated from ServiceEntry/protocol=TLS, it's just boilerplate +apiVersion: networking.istio.io/v1alpha3 +kind: VirtualService +metadata: + name: tls-routing + namespace: none +spec: + hosts: + - www1.googleapis.com + - api1.facebook.com + tls: + - match: + - port: 443 + sniHosts: + - www1.googleapis.com + route: + - destination: + host: www1.googleapis.com + - match: + - port: 443 + sniHosts: + - api1.facebook.com + route: + - destination: + host: api1.facebook.com +--- +# DestinationRules attach to services, have no impact on 'none' interception + +# VirtualService for HTTP affects routes,
no impact on none interception + diff --git a/tests/testdata/config/rule-content-route.yaml b/tests/testdata/config/rule-content-route.yaml index 15abbb3cd375..cbbaccd1d354 100644 --- a/tests/testdata/config/rule-content-route.yaml +++ b/tests/testdata/config/rule-content-route.yaml @@ -4,6 +4,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: headers-route + namespace: testns spec: hosts: - headers.test.istio.io diff --git a/tests/testdata/config/rule-default-route-append-headers.yaml b/tests/testdata/config/rule-default-route-append-headers.yaml index 38dc5a7f44ba..bf8c97cc49a1 100644 --- a/tests/testdata/config/rule-default-route-append-headers.yaml +++ b/tests/testdata/config/rule-default-route-append-headers.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: appendh + namespace: testns spec: hosts: - appendh.test.istio.io @@ -19,6 +20,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: appendh-route + namespace: testns spec: hosts: - appendh.test.istio.io diff --git a/tests/testdata/config/rule-default-route-cors-policy.yaml b/tests/testdata/config/rule-default-route-cors-policy.yaml index 0376eb891659..5f5fa11b84aa 100644 --- a/tests/testdata/config/rule-default-route-cors-policy.yaml +++ b/tests/testdata/config/rule-default-route-cors-policy.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: cors + namespace: testns spec: hosts: - cors.test.istio.io @@ -19,6 +20,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: cors + namespace: testns spec: hosts: - cors.test.istio.io diff --git a/tests/testdata/config/rule-default-route.yaml b/tests/testdata/config/rule-default-route.yaml index 4a9ef505dd73..3baba81d5d8a 100644 --- a/tests/testdata/config/rule-default-route.yaml +++ b/tests/testdata/config/rule-default-route.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 
kind: ServiceEntry metadata: name: c + namespace: testns spec: hosts: - c.foo @@ -19,6 +20,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: default-route-1 + namespace: testns spec: hosts: - c.foo @@ -38,6 +40,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: default-route-2 + namespace: testns spec: hosts: - c.foo diff --git a/tests/testdata/config/rule-fault-injection.yaml b/tests/testdata/config/rule-fault-injection.yaml index e59d65838637..43e8d5b9a37f 100644 --- a/tests/testdata/config/rule-fault-injection.yaml +++ b/tests/testdata/config/rule-fault-injection.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: fault + namespace: testns spec: hosts: - fault.test.istio.io @@ -26,6 +27,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: fault + namespace: testns spec: host: c-weighted.extsvc.com subsets: @@ -41,6 +43,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: fault + namespace: testns spec: hosts: - fault.test.istio.io diff --git a/tests/testdata/config/rule-ingressgateway.yaml b/tests/testdata/config/rule-ingressgateway.yaml index 44dd2d21e35e..82fd6b765c00 100644 --- a/tests/testdata/config/rule-ingressgateway.yaml +++ b/tests/testdata/config/rule-ingressgateway.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: gateway-bound-route + namespace: testns spec: hosts: - uk.bookinfo.com diff --git a/tests/testdata/config/rule-redirect-injection.yaml b/tests/testdata/config/rule-redirect-injection.yaml index f928722a344c..887ed8b8ae97 100644 --- a/tests/testdata/config/rule-redirect-injection.yaml +++ b/tests/testdata/config/rule-redirect-injection.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: redirect + namespace: testns spec: hosts: - redirect.test.istio.io @@ -19,6 +20,7 @@ apiVersion: 
networking.istio.io/v1alpha3 kind: VirtualService metadata: name: redirect + namespace: testns spec: hosts: - redirect.test.istio.io diff --git a/tests/testdata/config/rule-regex-route.yaml b/tests/testdata/config/rule-regex-route.yaml index a83ab6863a4d..778c595e4079 100644 --- a/tests/testdata/config/rule-regex-route.yaml +++ b/tests/testdata/config/rule-regex-route.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: regex-extsvc + namespace: testns spec: hosts: - regex.extsvc.com @@ -26,6 +27,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: regex + namespace: testns spec: host: regex.extsvc.com subsets: @@ -40,6 +42,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: regex-route + namespace: testns spec: hosts: - regex.extsvc.com diff --git a/tests/testdata/config/rule-route-via-egressgateway.yaml b/tests/testdata/config/rule-route-via-egressgateway.yaml index 6e4df7d1be3d..0df8fbff97a2 100644 --- a/tests/testdata/config/rule-route-via-egressgateway.yaml +++ b/tests/testdata/config/rule-route-via-egressgateway.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: route-via-egressgateway + namespace: testns spec: hosts: - egressgateway.bookinfo.com diff --git a/tests/testdata/config/rule-websocket-route.yaml b/tests/testdata/config/rule-websocket-route.yaml index 0599b090abb2..4b9d6f12e521 100644 --- a/tests/testdata/config/rule-websocket-route.yaml +++ b/tests/testdata/config/rule-websocket-route.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: websocket-extsvc + namespace: testns spec: hosts: - websocket.test.istio.io @@ -22,6 +23,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: websocket + namespace: testns spec: host: websocket.test.istio.io subsets: @@ -36,6 +38,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: 
VirtualService metadata: name: websocket-route + namespace: testns spec: hosts: - websocket.test.istio.io @@ -58,6 +61,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: websocket-route2 + namespace: testns spec: hosts: - websocket2.extsvc.com diff --git a/tests/testdata/config/rule-weighted-route.yaml b/tests/testdata/config/rule-weighted-route.yaml index 292728a972c6..78365148fcaf 100644 --- a/tests/testdata/config/rule-weighted-route.yaml +++ b/tests/testdata/config/rule-weighted-route.yaml @@ -4,6 +4,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: ServiceEntry metadata: name: weighted-extsvc + namespace: testns spec: hosts: - c-weighted.extsvc.com @@ -28,6 +29,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: DestinationRule metadata: name: c-weighted + namespace: testns spec: host: c-weighted.extsvc.com subsets: @@ -42,6 +44,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: c-weighted + namespace: testns spec: hosts: - c-weighted.extsvc.com diff --git a/tests/testdata/config/se-example-gw.yaml b/tests/testdata/config/se-example-gw.yaml new file mode 100644 index 000000000000..82e5979072a1 --- /dev/null +++ b/tests/testdata/config/se-example-gw.yaml @@ -0,0 +1,101 @@ +#The following example demonstrates the use of a dedicated egress gateway +#through which all external service traffic is forwarded. + + +# Sidecar - no imports defined, isolated namespace. 
+apiVersion: networking.istio.io/v1alpha3 +kind: Sidecar +metadata: + name: default + namespace: exampleegressgw +spec: + egress: + - hosts: + - exampleegressgw/* +--- +# Test workload entry +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: workload + namespace: exampleegressgw +spec: + hosts: + - test.exampleegressgw + + ports: + - number: 1300 + name: tcplocal + protocol: TCP + + location: MESH_INTERNAL + resolution: STATIC + + endpoints: + - address: 10.13.0.1 + ports: + tcplocal: 31200 +--- + +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: external-svc-httpbin + namespace: exampleegressgw +spec: + hosts: + - httpbin.com + location: MESH_EXTERNAL + ports: + - number: 80 + name: http + protocol: HTTP + resolution: DNS + +--- +apiVersion: networking.istio.io/v1alpha3 +kind: Gateway +metadata: + name: istio-egressgateway + namespace: exampleegressgw +spec: + selector: + istio: egressgateway + servers: + - port: + number: 80 + name: http + protocol: HTTP + hosts: + - "*" +--- +#And the associated VirtualService to route from the sidecar to the +#gateway service (istio-egressgateway.istio-system.svc.cluster.local), as +#well as route from the gateway to the external service. 
+apiVersion: networking.istio.io/v1alpha3 +kind: VirtualService +metadata: + name: gateway-routing + namespace: exampleegressgw +spec: + hosts: + - httpbin.com + gateways: + - mesh + - istio-egressgateway + http: + - match: + - port: 80 + gateways: + - mesh + route: + - destination: + host: istio-egressgateway.istio-system.svc.cluster.local + - match: + - port: 80 + gateways: + - istio-egressgateway + route: + - destination: + host: httpbin.com +--- diff --git a/tests/testdata/config/se-example.yaml b/tests/testdata/config/se-example.yaml new file mode 100644 index 000000000000..ce655debc0b5 --- /dev/null +++ b/tests/testdata/config/se-example.yaml @@ -0,0 +1,230 @@ +# Examples from the doc and site, in namespace examples +# The 'egress' example conflicts, it's in separate namespace +# +# Ports: +# - 27018 (mongo) - with VIP +# - 443 - SNI routing +# - 80 - *.bar.com resolution:NONE example +# +# - 8000 - virtual entry backed by multiple DNS-based services +# - 8001 - unix domain socket +# +# - 1200 - the inbound service and +# - 21200 - the inbound container +# +apiVersion: networking.istio.io/v1alpha3 +kind: Sidecar +metadata: + name: default + namespace: seexamples +spec: + egress: + - hosts: + - seexamples/* # Doesn't work without this - should be default + +--- +# Test workload entry +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: workload + namespace: seexamples +spec: + hosts: + - test.seexamples + + ports: + - number: 1200 + name: tcplocal + protocol: TCP + + location: MESH_INTERNAL + resolution: STATIC + + endpoints: + - address: 10.12.0.1 + ports: + tcplocal: 21200 +--- + +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: external-svc-mongocluster + namespace: seexamples +spec: + hosts: + - mymongodb.somedomain # not used + + addresses: + - 192.192.192.192/24 # VIPs + + ports: + - number: 27018 + name: mongodb + protocol: MONGO + location: MESH_INTERNAL + resolution: STATIC + endpoints: + - 
address: 2.2.2.2 + - address: 3.3.3.3 + +--- +apiVersion: networking.istio.io/v1alpha3 +kind: DestinationRule +metadata: + name: mtls-mongocluster + namespace: seexamples +spec: + host: mymongodb.somedomain + trafficPolicy: + tls: + mode: MUTUAL + # Envoy test runs in pilot/pkg/proxy/envoy/v2 directory, but envoy process base dir is set to IstioSrc + clientCertificate: tests/testdata/certs/default/cert-chain.pem + privateKey: tests/testdata/certs/default/key.pem + caCertificates: tests/testdata/certs/default/root-cert.pem + # Not included in the example, added for testing + sni: v1.mymongodb.somedomain + subjectAltNames: + - service.mongodb.somedomain + +--- +#The following example uses a combination of service entry and TLS +#routing in virtual service to demonstrate the use of SNI routing to +#forward unterminated TLS traffic from the application to external +#services via the sidecar. The sidecar inspects the SNI value in the +#ClientHello message to route to the appropriate external service. + +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: external-svc-https + namespace: seexamples +spec: + hosts: + - api.dropboxapi.com + - www.googleapis.com + - api.facebook.com + location: MESH_EXTERNAL + ports: + - number: 443 + name: https + protocol: TLS + resolution: DNS + +--- + +apiVersion: networking.istio.io/v1alpha3 +kind: VirtualService +metadata: + name: tls-routing + namespace: seexamples +spec: + hosts: + - api.dropboxapi.com + - www.googleapis.com + - api.facebook.com + tls: + - match: + - port: 443 + sniHosts: + - api.dropboxapi.com + route: + - destination: + host: api.dropboxapi.com + - match: + - port: 443 + sniHosts: + - www.googleapis.com + route: + - destination: + host: www.googleapis.com + - match: + - port: 443 + sniHosts: + - api.facebook.com + route: + - destination: + host: api.facebook.com +--- +#The following example demonstrates the use of wildcards in the hosts for +#external services. 
If the connection has to be routed to the IP address +#requested by the application (i.e. application resolves DNS and attempts +#to connect to a specific IP), the discovery mode must be set to `NONE`. +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: external-svc-wildcard-example + namespace: seexamples +spec: + hosts: + - "*.bar.com" + location: MESH_EXTERNAL + ports: + - number: 80 + name: http + protocol: HTTP + resolution: NONE + +--- +# The following example demonstrates a service that is available via a +# Unix Domain Socket on the host of the client. The resolution must be +# set to STATIC to use unix address endpoints. + +# Modified to use port 8001 +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: unix-domain-socket-example + namespace: seexamples +spec: + hosts: + - "example.unix.local" + location: MESH_EXTERNAL + ports: + - number: 8001 + name: http + protocol: HTTP + resolution: STATIC + endpoints: + - address: unix:///var/run/example/socket + +--- + +# For HTTP based services, it is possible to create a VirtualService +# backed by multiple DNS addressable endpoints. In such a scenario, the +# application can use the HTTP_PROXY environment variable to transparently +# reroute API calls for the VirtualService to a chosen backend. 
For +# example, the following configuration creates a non-existent external +# service called foo.bar.com backed by three domains: us.foo.bar.com:8080, +# uk.foo.bar.com:9080, and in.foo.bar.com:7080 + +# Modified to use port 8000 +apiVersion: networking.istio.io/v1alpha3 +kind: ServiceEntry +metadata: + name: external-svc-dns + namespace: seexamples +spec: + hosts: + - foo.bar.com + location: MESH_EXTERNAL + ports: + - number: 8000 + name: http + protocol: HTTP + resolution: DNS + endpoints: + - address: us.foo.bar.com + ports: + # TODO: example uses 'https', which is rejected currently + http: 8080 + - address: uk.foo.bar.com + ports: + http: 9080 + - address: in.foo.bar.com + ports: + http: 7080 + +--- diff --git a/tests/testdata/config/virtual-service-all.yaml b/tests/testdata/config/virtual-service-all.yaml index 8f657ed9f314..561f513da007 100644 --- a/tests/testdata/config/virtual-service-all.yaml +++ b/tests/testdata/config/virtual-service-all.yaml @@ -2,6 +2,7 @@ apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: all + namespace: testns spec: hosts: - service3.default.svc.cluster.local diff --git a/tests/testdata/local/etc/certs/cert-chain.pem b/tests/testdata/local/etc/certs/cert-chain.pem new file mode 100644 index 000000000000..a6edf50fc40c --- /dev/null +++ b/tests/testdata/local/etc/certs/cert-chain.pem @@ -0,0 +1,19 @@ +-----BEGIN CERTIFICATE----- +MIIDDzCCAfegAwIBAgIRAK/nSuMMcG8W4IBMYkR9SuYwDQYJKoZIhvcNAQELBQAw +EzERMA8GA1UEChMISnVqdSBvcmcwHhcNMTgwNDExMDA0NDU4WhcNMTkwNDExMDA0 +NDU4WjATMREwDwYDVQQKEwhKdWp1IG9yZzCCASIwDQYJKoZIhvcNAQEBBQADggEP +ADCCAQoCggEBANq3dWID1Qv6sOrEAMpEUX3tZXAN56G8FOkg11u1rQUWW7ULz0zS +zXmE6/ZkwAbhA1pJLh25k/qa5lu1VpLsdG5zr0XiExRDU6VvbYk81uTzSyfFWHtZ +q6jNJZx4EzXAGGRfy6iHenMxDDS9l7vBWrqmAv3r35aollZG/ZRPlLKBUIXgF/yE +dhEZ20ULzyruVvj64zDIJG36Ae0DAiuHAqgE/pcFCziM0LTv3A0HQyQNGmC4cxmq +Q/geE71XB3o5BPAJcBC11m/g0gefiRQaxd7YexOgWKj4Iywnb8o2Ae3ToUPwpRT8 
+sCJkijqycnTSCz0r4APuroMX37W5S27ZFNsCAwEAAaNeMFwwDgYDVR0PAQH/BAQD +AgWgMAwGA1UdEwEB/wQCMAAwPAYDVR0RBDUwM4Yxc3BpZmZlOi8vY2x1c3Rlci5s +b2NhbC9ucy9pc3Rpby1zeXN0ZW0vc2EvZGVmYXVsdDANBgkqhkiG9w0BAQsFAAOC +AQEAthg/X6cx5uX4tIBrvuEjpEtKtoGiP2Hv/c+wICnGMneFcjTzY9Q0uZeAYNUh +4nb7ypEV0+igIreIDmjSbswh4plqqSsBmsiG5NhnpHjxpfAYALaibLetd6zO+phl +y02MWyzd+bDVDxVhwE5uAjKBwXweUsFo5QlbDjLJOgjDcobAk/neLHl6KtHAzeiK +8Ufqsiy1QuyjP0QWKaUTxxnDeVbPQ7Re7irKz5uMu4iFydfU3AD3OE2nPA1p8T6T +jyxDlacHnu8zjw44jXOpBdxis6oZoBLTeHdCs7axxdBIOr10d41O3SftPDXr6FMF +7P0moMRpvz2mPjcHeihmTiC/zA== +-----END CERTIFICATE----- diff --git a/tests/testdata/local/etc/certs/key.pem b/tests/testdata/local/etc/certs/key.pem new file mode 100644 index 000000000000..477a0e058380 --- /dev/null +++ b/tests/testdata/local/etc/certs/key.pem @@ -0,0 +1,27 @@ +-----BEGIN RSA PRIVATE KEY----- +MIIEpAIBAAKCAQEA2rd1YgPVC/qw6sQAykRRfe1lcA3nobwU6SDXW7WtBRZbtQvP +TNLNeYTr9mTABuEDWkkuHbmT+prmW7VWkux0bnOvReITFENTpW9tiTzW5PNLJ8VY +e1mrqM0lnHgTNcAYZF/LqId6czEMNL2Xu8FauqYC/evflqiWVkb9lE+UsoFQheAX +/IR2ERnbRQvPKu5W+PrjMMgkbfoB7QMCK4cCqAT+lwULOIzQtO/cDQdDJA0aYLhz +GapD+B4TvVcHejkE8AlwELXWb+DSB5+JFBrF3th7E6BYqPgjLCdvyjYB7dOhQ/Cl +FPywImSKOrJydNILPSvgA+6ugxfftblLbtkU2wIDAQABAoIBAQChBlSbluqxaR37 +mdZjFC1TIwZ9mx8gChLzGES1tmntxuo5vroee0zf3KbEvqRJ7DvFqv8Sz2BNLuHO +PxHAFeoar30pXCpjzrW0pPbmBS7JXP3GCBr+paQmIPNB4X1zIzxSGd0c9LGIQWIV +Kkid6Nrdc//b5l600uXsG1PybyywxftTZQdqd7hVb6RPJ0YPYH35E8QlctYns8rB +1d+SokRa9HEmy2n1vng+roZSuoeV1iyTBA833hgXFVoywZqPX4/W8IVwoMs3QXyu +ucAEDWUWA1cfTb5yO19bmf8sJbsSKX3eMpPTKaqrHhnPmPw2Mb9t3QDyNOmJAPsU +sTd5tWrRAoGBAPWE6MQX9Z/5OmRgiqxrWZ/mu0mxhFMQByMi+PABhYRKrj/VFK60 +rHE8q3nvc7XV9tL0pcQSBTm5eMo3mLDfS2UUqRoSBftP2+2iq76yDvc7Z6jnF6IA +0yZAJaZ1vWex/H8GrC/gwLK54qFNPCTgX/71SvFkEz4hSMNqn3TWr4WnAoGBAOQN +pRXGvpgNiZvj7Es3f1Yrlv/LkPb9/Xf3fA/OP/qYrG5/O490TqhCBH9Pu1/4n8TK +HlK7eBchnaaa5KC7MmP2qrr88geLS++RKDexKveBjmZI0cY5TzPJzJvOaZojiYsE +lu3z2Nk1Zh0a/c0PF56f6JQidolcm1DadZgZVIWtAoGAL70bIV3jRarJPtFCxYnO 
+EHhlvEuHBjucl6lqLAivtGxs+z5sWhgJW69HTFtR7W5gIt6tCLXUTEgTA4bR6rpQ +R6Q/yTDt9tKtWcSCjn7CyDHF0yK0Bq0QYWShrX9BR9Nk3DIo8tpJvbbFKUYCRs1V +/RYm707dKvx/0Hd/79D6qgsCgYByHKXDXLxX50Y5G/ZLuMxduNgIzLqP7I2dLtgE +LKb05No7PCz4XjFRnh8T+TiAEC8Z0C0WJrozkN2K1SybhK/1NyM9B36v6bKogFDI +dT1TtZ8kbUGSV3DbMBnSyJksyKV1S2meTYrvPPoIjE39ApVGCSvem9QGbbFF5to6 +rkoNzQKBgQCfTvIlxJubhKo4MVy+Nydf5s+l0hpy+4XQnd5UgFptjn5lnCrwpdlo +f4/pjb+XhpF292UxMtnScu1Q+K3QyWmjApZ7pDhM7AVQ5ShVhGHJtBo6R3kFn+N2 +BvH3QHjEAmWB+yhGPaZ8orOE4kGhSf6FckClN6I9K9j+GeSf7nB56g== +-----END RSA PRIVATE KEY----- diff --git a/tests/testdata/local/etc/certs/root-cert.pem b/tests/testdata/local/etc/certs/root-cert.pem new file mode 100644 index 000000000000..e129762aa483 --- /dev/null +++ b/tests/testdata/local/etc/certs/root-cert.pem @@ -0,0 +1,18 @@ +-----BEGIN CERTIFICATE----- +MIIC1DCCAbygAwIBAgIRAKt68S5UBmmm8B9xAZFQjf4wDQYJKoZIhvcNAQELBQAw +EzERMA8GA1UEChMISnVqdSBvcmcwHhcNMTgwNDExMDA0MzI2WhcNMTkwNDExMDA0 +MzI2WjATMREwDwYDVQQKEwhKdWp1IG9yZzCCASIwDQYJKoZIhvcNAQEBBQADggEP +ADCCAQoCggEBAMlicKoXsVhaWxSHRxK92G5d/YqFGme0RYdII1/JLs09+B6zDjli +FkmB0JIev+AxLXC+uAb0M32vFGBUeoIhX8d4TKz23xWMVafpx7oqk80g18RApxcb +OmgmY35sm8fO5E1gXQ4t87gdKrv2WrPQsi0UD44CUsyfTrrEXIdDL3ctG5QEwMlo +rui7n4VIsneCNdz5n6EwRMXJMpnaWmPbIiNsDgTJ64Gx8Z7RSWiO/tyHzoTqUFJw +uFjGbWY9r7N7iQT2/ricS6CNWoiEG9IwV0d3ablJ+kwGAGPbBSVSIYC4hb7+IXup +WmiiQALZzNLL6xuqCR0vivgCBJtvxaUJMQcCAwEAAaMjMCEwDgYDVR0PAQH/BAQD +AgIEMA8GA1UdEwEB/wQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBAMeGE/D5RVho +2I3x+4fRrT62VMBtgrrYVB3P44QpDbfh9mPp7RPjS3xLQvAXbLWnU9lBzy9gv31n +cK1bpjM0A9kBxBwCvdiAYrHm4jojHv12RDb7l8F5xaWh6emGAHaZVc1KWo7EzF9w +IlNlwfNWLS0QSoF1ZNWN3gj3zqVhQdyDF7JWKYUOjuJQOaIvPDvhSbRSEHNJeQ8j +AAG8mEasH0l09zPrNTthAraCnmeAfwfop41LYMLsjWEFnLOy4cR1EWv8Qk005MT4 +BKaiERfeOyB1P5JAFg/T6twNBgvc9MHP0YJQuHfPheecNONlfE3jp9k0dqDBbvYp +S9I82u5SyDI= +-----END CERTIFICATE----- diff --git a/tests/util/helm_utils.go b/tests/util/helm_utils.go index 3c2059650ed6..1d8e856caf77 100644 --- a/tests/util/helm_utils.go +++ 
b/tests/util/helm_utils.go @@ -14,6 +14,13 @@ package util +import ( + "context" + "time" + + "istio.io/istio/pkg/log" +) + // HelmInit init helm with a service account func HelmInit(serviceAccount string) error { _, err := Shell("helm init --upgrade --service-account %s", serviceAccount) @@ -74,3 +81,31 @@ func HelmParams(chartDir, chartName, valueFile, namespace, setValue string) stri return helmCmd } + +// Obtain the version of Helm client and server with a timeout of 10s or return an error +func helmVersion() (string, error) { + version, err := Shell("helm version") + return version, err +} + +// HelmTillerRunning will block for up to 120 seconds waiting for Tiller readiness +func HelmTillerRunning() error { + retry := Retrier{ + BaseDelay: 10 * time.Second, + MaxDelay: 10 * time.Second, + Retries: 12, + } + + retryFn := func(_ context.Context, i int) error { + _, err := helmVersion() + return err + } + ctx := context.Background() + _, err := retry.Retry(ctx, retryFn) + if err != nil { + log.Errorf("Tiller failed to start") + return err + } + log.Infof("Tiller is running") + return nil +} diff --git a/tests/util/pilot_server.go b/tests/util/pilot_server.go index 70fca70c5ed6..f08ce361c983 100644 --- a/tests/util/pilot_server.go +++ b/tests/util/pilot_server.go @@ -105,6 +105,8 @@ func setup(additionalArgs ...func(*bootstrap.PilotArgs)) (*bootstrap.Server, Tea MCPMaxMessageSize: bootstrap.DefaultMCPMaxMsgSize, KeepaliveOptions: keepalive.DefaultOption(), ForceStop: true, + // TODO: add the plugins, so local tests are closer to reality and test full generation + // Plugins: bootstrap.DefaultPlugins, } // Static testdata, should include all configs we want to test. 
args.Config.FileDir = env.IstioSrc + "/tests/testdata/config" diff --git a/tools/deb/envoy_bootstrap_v2.json b/tools/deb/envoy_bootstrap_v2.json index f544e8f85852..d9ff498bfa64 100644 --- a/tools/deb/envoy_bootstrap_v2.json +++ b/tools/deb/envoy_bootstrap_v2.json @@ -279,7 +279,8 @@ "name": "envoy.zipkin", "config": { "collector_cluster": "zipkin", - "collector_endpoint": "/api/v1/spans" + "collector_endpoint": "/api/v1/spans", + "trace_id_128bit": "true" } } } diff --git a/tools/istio-docker.mk b/tools/istio-docker.mk index bdc104381f58..b4d8959141e6 100644 --- a/tools/istio-docker.mk +++ b/tools/istio-docker.mk @@ -262,6 +262,18 @@ docker.push: $(DOCKER_PUSH_TARGETS) docker.basedebug: docker build -t istionightly/base_debug -f docker/Dockerfile.xenial_debug docker/ +# Run this target to generate images based on Bionic Ubuntu +# This must be run as a first step, before the 'docker' step. +docker.basedebug_bionic: + docker build -t istionightly/base_debug_bionic -f docker/Dockerfile.bionic_debug docker/ + docker tag istionightly/base_debug_bionic istionightly/base_debug + +# Run this target to generate images based on Debian Slim +# This must be run as a first step, before the 'docker' step. +docker.basedebug_deb: + docker build -t istionightly/base_debug_deb -f docker/Dockerfile.deb_debug docker/ + docker tag istionightly/base_debug_deb istionightly/base_debug + # Job run from the nightly cron to publish an up-to-date xenial with the debug tools. docker.push.basedebug: docker.basedebug docker push istionightly/base_debug:latest diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go new file mode 100644 index 000000000000..ec765ba257ed --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/build.go @@ -0,0 +1,286 @@ +// Package jsonutil provides JSON serialization of AWS requests and responses. 
+package jsonutil + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "fmt" + "math" + "reflect" + "sort" + "strconv" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/private/protocol" +) + +var timeType = reflect.ValueOf(time.Time{}).Type() +var byteSliceType = reflect.ValueOf([]byte{}).Type() + +// BuildJSON builds a JSON string for a given object v. +func BuildJSON(v interface{}) ([]byte, error) { + var buf bytes.Buffer + + err := buildAny(reflect.ValueOf(v), &buf, "") + return buf.Bytes(), err +} + +func buildAny(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + origVal := value + value = reflect.Indirect(value) + if !value.IsValid() { + return nil + } + + vtype := value.Type() + + t := tag.Get("type") + if t == "" { + switch vtype.Kind() { + case reflect.Struct: + // also it can't be a time object + if value.Type() != timeType { + t = "structure" + } + case reflect.Slice: + // also it can't be a byte slice + if _, ok := value.Interface().([]byte); !ok { + t = "list" + } + case reflect.Map: + // cannot be a JSONValue map + if _, ok := value.Interface().(aws.JSONValue); !ok { + t = "map" + } + } + } + + switch t { + case "structure": + if field, ok := vtype.FieldByName("_"); ok { + tag = field.Tag + } + return buildStruct(value, buf, tag) + case "list": + return buildList(value, buf, tag) + case "map": + return buildMap(value, buf, tag) + default: + return buildScalar(origVal, buf, tag) + } +} + +func buildStruct(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + if !value.IsValid() { + return nil + } + + // unwrap payloads + if payload := tag.Get("payload"); payload != "" { + field, _ := value.Type().FieldByName(payload) + tag = field.Tag + value = elemOf(value.FieldByName(payload)) + + if !value.IsValid() { + return nil + } + } + + buf.WriteByte('{') + + t := value.Type() + first := true + for i := 0; i < t.NumField(); i++ { + member := value.Field(i) + + // This allocates the most 
memory. + // Additionally, we cannot skip nil fields due to + // idempotency auto filling. + field := t.Field(i) + + if field.PkgPath != "" { + continue // ignore unexported fields + } + if field.Tag.Get("json") == "-" { + continue + } + if field.Tag.Get("location") != "" { + continue // ignore non-body elements + } + if field.Tag.Get("ignore") != "" { + continue + } + + if protocol.CanSetIdempotencyToken(member, field) { + token := protocol.GetIdempotencyToken() + member = reflect.ValueOf(&token) + } + + if (member.Kind() == reflect.Ptr || member.Kind() == reflect.Slice || member.Kind() == reflect.Map) && member.IsNil() { + continue // ignore unset fields + } + + if first { + first = false + } else { + buf.WriteByte(',') + } + + // figure out what this field is called + name := field.Name + if locName := field.Tag.Get("locationName"); locName != "" { + name = locName + } + + writeString(name, buf) + buf.WriteString(`:`) + + err := buildAny(member, buf, field.Tag) + if err != nil { + return err + } + + } + + buf.WriteString("}") + + return nil +} + +func buildList(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + buf.WriteString("[") + + for i := 0; i < value.Len(); i++ { + buildAny(value.Index(i), buf, "") + + if i < value.Len()-1 { + buf.WriteString(",") + } + } + + buf.WriteString("]") + + return nil +} + +type sortedValues []reflect.Value + +func (sv sortedValues) Len() int { return len(sv) } +func (sv sortedValues) Swap(i, j int) { sv[i], sv[j] = sv[j], sv[i] } +func (sv sortedValues) Less(i, j int) bool { return sv[i].String() < sv[j].String() } + +func buildMap(value reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + buf.WriteString("{") + + sv := sortedValues(value.MapKeys()) + sort.Sort(sv) + + for i, k := range sv { + if i > 0 { + buf.WriteByte(',') + } + + writeString(k.String(), buf) + buf.WriteString(`:`) + + buildAny(value.MapIndex(k), buf, "") + } + + buf.WriteString("}") + + return nil +} + +func buildScalar(v 
reflect.Value, buf *bytes.Buffer, tag reflect.StructTag) error { + // prevents allocation on the heap. + scratch := [64]byte{} + switch value := reflect.Indirect(v); value.Kind() { + case reflect.String: + writeString(value.String(), buf) + case reflect.Bool: + if value.Bool() { + buf.WriteString("true") + } else { + buf.WriteString("false") + } + case reflect.Int64: + buf.Write(strconv.AppendInt(scratch[:0], value.Int(), 10)) + case reflect.Float64: + f := value.Float() + if math.IsInf(f, 0) || math.IsNaN(f) { + return &json.UnsupportedValueError{Value: v, Str: strconv.FormatFloat(f, 'f', -1, 64)} + } + buf.Write(strconv.AppendFloat(scratch[:0], f, 'f', -1, 64)) + default: + switch converted := value.Interface().(type) { + case time.Time: + buf.Write(strconv.AppendInt(scratch[:0], converted.UTC().Unix(), 10)) + case []byte: + if !value.IsNil() { + buf.WriteByte('"') + if len(converted) < 1024 { + // for small buffers, using Encode directly is much faster. + dst := make([]byte, base64.StdEncoding.EncodedLen(len(converted))) + base64.StdEncoding.Encode(dst, converted) + buf.Write(dst) + } else { + // for large buffers, avoid unnecessary extra temporary + // buffer space. 
+ enc := base64.NewEncoder(base64.StdEncoding, buf) + enc.Write(converted) + enc.Close() + } + buf.WriteByte('"') + } + case aws.JSONValue: + str, err := protocol.EncodeJSONValue(converted, protocol.QuotedEscape) + if err != nil { + return fmt.Errorf("unable to encode JSONValue, %v", err) + } + buf.WriteString(str) + default: + return fmt.Errorf("unsupported JSON value %v (%s)", value.Interface(), value.Type()) + } + } + return nil +} + +var hex = "0123456789abcdef" + +func writeString(s string, buf *bytes.Buffer) { + buf.WriteByte('"') + for i := 0; i < len(s); i++ { + if s[i] == '"' { + buf.WriteString(`\"`) + } else if s[i] == '\\' { + buf.WriteString(`\\`) + } else if s[i] == '\b' { + buf.WriteString(`\b`) + } else if s[i] == '\f' { + buf.WriteString(`\f`) + } else if s[i] == '\r' { + buf.WriteString(`\r`) + } else if s[i] == '\t' { + buf.WriteString(`\t`) + } else if s[i] == '\n' { + buf.WriteString(`\n`) + } else if s[i] < 32 { + buf.WriteString("\\u00") + buf.WriteByte(hex[s[i]>>4]) + buf.WriteByte(hex[s[i]&0xF]) + } else { + buf.WriteByte(s[i]) + } + } + buf.WriteByte('"') +} + +// Returns the reflection element of a value, if it is a pointer. +func elemOf(value reflect.Value) reflect.Value { + for value.Kind() == reflect.Ptr { + value = value.Elem() + } + return value +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go new file mode 100644 index 000000000000..037e1e7be78d --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/json/jsonutil/unmarshal.go @@ -0,0 +1,226 @@ +package jsonutil + +import ( + "encoding/base64" + "encoding/json" + "fmt" + "io" + "io/ioutil" + "reflect" + "time" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/private/protocol" +) + +// UnmarshalJSON reads a stream and unmarshals the results in object v. 
+func UnmarshalJSON(v interface{}, stream io.Reader) error { + var out interface{} + + b, err := ioutil.ReadAll(stream) + if err != nil { + return err + } + + if len(b) == 0 { + return nil + } + + if err := json.Unmarshal(b, &out); err != nil { + return err + } + + return unmarshalAny(reflect.ValueOf(v), out, "") +} + +func unmarshalAny(value reflect.Value, data interface{}, tag reflect.StructTag) error { + vtype := value.Type() + if vtype.Kind() == reflect.Ptr { + vtype = vtype.Elem() // check kind of actual element type + } + + t := tag.Get("type") + if t == "" { + switch vtype.Kind() { + case reflect.Struct: + // also it can't be a time object + if _, ok := value.Interface().(*time.Time); !ok { + t = "structure" + } + case reflect.Slice: + // also it can't be a byte slice + if _, ok := value.Interface().([]byte); !ok { + t = "list" + } + case reflect.Map: + // cannot be a JSONValue map + if _, ok := value.Interface().(aws.JSONValue); !ok { + t = "map" + } + } + } + + switch t { + case "structure": + if field, ok := vtype.FieldByName("_"); ok { + tag = field.Tag + } + return unmarshalStruct(value, data, tag) + case "list": + return unmarshalList(value, data, tag) + case "map": + return unmarshalMap(value, data, tag) + default: + return unmarshalScalar(value, data, tag) + } +} + +func unmarshalStruct(value reflect.Value, data interface{}, tag reflect.StructTag) error { + if data == nil { + return nil + } + mapData, ok := data.(map[string]interface{}) + if !ok { + return fmt.Errorf("JSON value is not a structure (%#v)", data) + } + + t := value.Type() + if value.Kind() == reflect.Ptr { + if value.IsNil() { // create the structure if it's nil + s := reflect.New(value.Type().Elem()) + value.Set(s) + value = s + } + + value = value.Elem() + t = t.Elem() + } + + // unwrap any payloads + if payload := tag.Get("payload"); payload != "" { + field, _ := t.FieldByName(payload) + return unmarshalAny(value.FieldByName(payload), data, field.Tag) + } + + for i := 0; i < 
t.NumField(); i++ { + field := t.Field(i) + if field.PkgPath != "" { + continue // ignore unexported fields + } + + // figure out what this field is called + name := field.Name + if locName := field.Tag.Get("locationName"); locName != "" { + name = locName + } + + member := value.FieldByIndex(field.Index) + err := unmarshalAny(member, mapData[name], field.Tag) + if err != nil { + return err + } + } + return nil +} + +func unmarshalList(value reflect.Value, data interface{}, tag reflect.StructTag) error { + if data == nil { + return nil + } + listData, ok := data.([]interface{}) + if !ok { + return fmt.Errorf("JSON value is not a list (%#v)", data) + } + + if value.IsNil() { + l := len(listData) + value.Set(reflect.MakeSlice(value.Type(), l, l)) + } + + for i, c := range listData { + err := unmarshalAny(value.Index(i), c, "") + if err != nil { + return err + } + } + + return nil +} + +func unmarshalMap(value reflect.Value, data interface{}, tag reflect.StructTag) error { + if data == nil { + return nil + } + mapData, ok := data.(map[string]interface{}) + if !ok { + return fmt.Errorf("JSON value is not a map (%#v)", data) + } + + if value.IsNil() { + value.Set(reflect.MakeMap(value.Type())) + } + + for k, v := range mapData { + kvalue := reflect.ValueOf(k) + vvalue := reflect.New(value.Type().Elem()).Elem() + + unmarshalAny(vvalue, v, "") + value.SetMapIndex(kvalue, vvalue) + } + + return nil +} + +func unmarshalScalar(value reflect.Value, data interface{}, tag reflect.StructTag) error { + errf := func() error { + return fmt.Errorf("unsupported value: %v (%s)", value.Interface(), value.Type()) + } + + switch d := data.(type) { + case nil: + return nil // nothing to do here + case string: + switch value.Interface().(type) { + case *string: + value.Set(reflect.ValueOf(&d)) + case []byte: + b, err := base64.StdEncoding.DecodeString(d) + if err != nil { + return err + } + value.Set(reflect.ValueOf(b)) + case aws.JSONValue: + // No need to use escaping as the value is a 
non-quoted string. + v, err := protocol.DecodeJSONValue(d, protocol.NoEscape) + if err != nil { + return err + } + value.Set(reflect.ValueOf(v)) + default: + return errf() + } + case float64: + switch value.Interface().(type) { + case *int64: + di := int64(d) + value.Set(reflect.ValueOf(&di)) + case *float64: + value.Set(reflect.ValueOf(&d)) + case *time.Time: + t := time.Unix(int64(d), 0).UTC() + value.Set(reflect.ValueOf(&t)) + default: + return errf() + } + case bool: + switch value.Interface().(type) { + case *bool: + value.Set(reflect.ValueOf(&d)) + default: + return errf() + } + default: + return fmt.Errorf("unsupported JSON value (%v)", data) + } + return nil +} diff --git a/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go b/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go new file mode 100644 index 000000000000..56af4dc4426c --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/private/protocol/jsonrpc/jsonrpc.go @@ -0,0 +1,111 @@ +// Package jsonrpc provides JSON RPC utilities for serialization of AWS +// requests and responses. 
+package jsonrpc + +//go:generate go run -tags codegen ../../../models/protocol_tests/generate.go ../../../models/protocol_tests/input/json.json build_test.go +//go:generate go run -tags codegen ../../../models/protocol_tests/generate.go ../../../models/protocol_tests/output/json.json unmarshal_test.go + +import ( + "encoding/json" + "io/ioutil" + "strings" + + "github.com/aws/aws-sdk-go/aws/awserr" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol/json/jsonutil" + "github.com/aws/aws-sdk-go/private/protocol/rest" +) + +var emptyJSON = []byte("{}") + +// BuildHandler is a named request handler for building jsonrpc protocol requests +var BuildHandler = request.NamedHandler{Name: "awssdk.jsonrpc.Build", Fn: Build} + +// UnmarshalHandler is a named request handler for unmarshaling jsonrpc protocol requests +var UnmarshalHandler = request.NamedHandler{Name: "awssdk.jsonrpc.Unmarshal", Fn: Unmarshal} + +// UnmarshalMetaHandler is a named request handler for unmarshaling jsonrpc protocol request metadata +var UnmarshalMetaHandler = request.NamedHandler{Name: "awssdk.jsonrpc.UnmarshalMeta", Fn: UnmarshalMeta} + +// UnmarshalErrorHandler is a named request handler for unmarshaling jsonrpc protocol request errors +var UnmarshalErrorHandler = request.NamedHandler{Name: "awssdk.jsonrpc.UnmarshalError", Fn: UnmarshalError} + +// Build builds a JSON payload for a JSON RPC request. +func Build(req *request.Request) { + var buf []byte + var err error + if req.ParamsFilled() { + buf, err = jsonutil.BuildJSON(req.Params) + if err != nil { + req.Error = awserr.New("SerializationError", "failed encoding JSON RPC request", err) + return + } + } else { + buf = emptyJSON + } + + if req.ClientInfo.TargetPrefix != "" || string(buf) != "{}" { + req.SetBufferBody(buf) + } + + if req.ClientInfo.TargetPrefix != "" { + target := req.ClientInfo.TargetPrefix + "." 
+ req.Operation.Name + req.HTTPRequest.Header.Add("X-Amz-Target", target) + } + if req.ClientInfo.JSONVersion != "" { + jsonVersion := req.ClientInfo.JSONVersion + req.HTTPRequest.Header.Add("Content-Type", "application/x-amz-json-"+jsonVersion) + } +} + +// Unmarshal unmarshals a response for a JSON RPC service. +func Unmarshal(req *request.Request) { + defer req.HTTPResponse.Body.Close() + if req.DataFilled() { + err := jsonutil.UnmarshalJSON(req.Data, req.HTTPResponse.Body) + if err != nil { + req.Error = awserr.New("SerializationError", "failed decoding JSON RPC response", err) + } + } + return +} + +// UnmarshalMeta unmarshals headers from a response for a JSON RPC service. +func UnmarshalMeta(req *request.Request) { + rest.UnmarshalMeta(req) +} + +// UnmarshalError unmarshals an error response for a JSON RPC service. +func UnmarshalError(req *request.Request) { + defer req.HTTPResponse.Body.Close() + bodyBytes, err := ioutil.ReadAll(req.HTTPResponse.Body) + if err != nil { + req.Error = awserr.New("SerializationError", "failed reading JSON RPC error response", err) + return + } + if len(bodyBytes) == 0 { + req.Error = awserr.NewRequestFailure( + awserr.New("SerializationError", req.HTTPResponse.Status, nil), + req.HTTPResponse.StatusCode, + "", + ) + return + } + var jsonErr jsonErrorResponse + if err := json.Unmarshal(bodyBytes, &jsonErr); err != nil { + req.Error = awserr.New("SerializationError", "failed decoding JSON RPC error response", err) + return + } + + codes := strings.SplitN(jsonErr.Code, "#", 2) + req.Error = awserr.NewRequestFailure( + awserr.New(codes[len(codes)-1], jsonErr.Message, nil), + req.HTTPResponse.StatusCode, + req.RequestID, + ) +} + +type jsonErrorResponse struct { + Code string `json:"__type"` + Message string `json:"message"` +} diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/api.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/api.go new file mode 100644 index 000000000000..aaebe84d35dc --- 
/dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/api.go @@ -0,0 +1,7456 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package cloudwatchlogs + +import ( + "fmt" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/awsutil" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/private/protocol" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +const opAssociateKmsKey = "AssociateKmsKey" + +// AssociateKmsKeyRequest generates a "aws/request.Request" representing the +// client's request for the AssociateKmsKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See AssociateKmsKey for more information on using the AssociateKmsKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the AssociateKmsKeyRequest method.
+// req, resp := client.AssociateKmsKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/AssociateKmsKey +func (c *CloudWatchLogs) AssociateKmsKeyRequest(input *AssociateKmsKeyInput) (req *request.Request, output *AssociateKmsKeyOutput) { + op := &request.Operation{ + Name: opAssociateKmsKey, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &AssociateKmsKeyInput{} + } + + output = &AssociateKmsKeyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// AssociateKmsKey API operation for Amazon CloudWatch Logs. +// +// Associates the specified AWS Key Management Service (AWS KMS) customer master +// key (CMK) with the specified log group. +// +// Associating an AWS KMS CMK with a log group overrides any existing associations +// between the log group and a CMK. After a CMK is associated with a log group, +// all newly ingested data for the log group is encrypted using the CMK. This +// association is stored as long as the data encrypted with the CMK is still +// within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt +// this data whenever it is requested. +// +// Note that it can take up to 5 minutes for this operation to take effect. +// +// If you attempt to associate a CMK with a log group but the CMK does not exist +// or the CMK is disabled, you will receive an InvalidParameterException error. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation AssociateKmsKey for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/AssociateKmsKey +func (c *CloudWatchLogs) AssociateKmsKey(input *AssociateKmsKeyInput) (*AssociateKmsKeyOutput, error) { + req, out := c.AssociateKmsKeyRequest(input) + return out, req.Send() +} + +// AssociateKmsKeyWithContext is the same as AssociateKmsKey with the addition of +// the ability to pass a context and additional request options. +// +// See AssociateKmsKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) AssociateKmsKeyWithContext(ctx aws.Context, input *AssociateKmsKeyInput, opts ...request.Option) (*AssociateKmsKeyOutput, error) { + req, out := c.AssociateKmsKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCancelExportTask = "CancelExportTask" + +// CancelExportTaskRequest generates a "aws/request.Request" representing the +// client's request for the CancelExportTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service.
+// the "output" return value is not valid until after Send returns without error. +// +// See CancelExportTask for more information on using the CancelExportTask +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CancelExportTaskRequest method. +// req, resp := client.CancelExportTaskRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/CancelExportTask +func (c *CloudWatchLogs) CancelExportTaskRequest(input *CancelExportTaskInput) (req *request.Request, output *CancelExportTaskOutput) { + op := &request.Operation{ + Name: opCancelExportTask, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CancelExportTaskInput{} + } + + output = &CancelExportTaskOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// CancelExportTask API operation for Amazon CloudWatch Logs. +// +// Cancels the specified export task. +// +// The task must be in the PENDING or RUNNING state. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation CancelExportTask for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. 
+// +// * ErrCodeInvalidOperationException "InvalidOperationException" +// The operation is not valid on the specified resource. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/CancelExportTask +func (c *CloudWatchLogs) CancelExportTask(input *CancelExportTaskInput) (*CancelExportTaskOutput, error) { + req, out := c.CancelExportTaskRequest(input) + return out, req.Send() +} + +// CancelExportTaskWithContext is the same as CancelExportTask with the addition of +// the ability to pass a context and additional request options. +// +// See CancelExportTask for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) CancelExportTaskWithContext(ctx aws.Context, input *CancelExportTaskInput, opts ...request.Option) (*CancelExportTaskOutput, error) { + req, out := c.CancelExportTaskRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateExportTask = "CreateExportTask" + +// CreateExportTaskRequest generates a "aws/request.Request" representing the +// client's request for the CreateExportTask operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateExportTask for more information on using the CreateExportTask +// API call, and error handling.
+// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateExportTaskRequest method. +// req, resp := client.CreateExportTaskRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/CreateExportTask +func (c *CloudWatchLogs) CreateExportTaskRequest(input *CreateExportTaskInput) (req *request.Request, output *CreateExportTaskOutput) { + op := &request.Operation{ + Name: opCreateExportTask, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateExportTaskInput{} + } + + output = &CreateExportTaskOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateExportTask API operation for Amazon CloudWatch Logs. +// +// Creates an export task, which allows you to efficiently export data from +// a log group to an Amazon S3 bucket. +// +// This is an asynchronous call. If all the required information is provided, +// this operation initiates an export task and responds with the ID of the task. +// After the task has started, you can use DescribeExportTasks to get the status +// of the export task. Each account can only have one active (RUNNING or PENDING) +// export task at a time. To cancel an export task, use CancelExportTask. +// +// You can export logs from multiple log groups or multiple time ranges to the +// same S3 bucket. To separate out log data for each export task, you can specify +// a prefix to be used as the Amazon S3 key prefix for all exported objects. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation CreateExportTask for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// You have reached the maximum number of resources that can be created. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The specified resource already exists. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/CreateExportTask +func (c *CloudWatchLogs) CreateExportTask(input *CreateExportTaskInput) (*CreateExportTaskOutput, error) { + req, out := c.CreateExportTaskRequest(input) + return out, req.Send() +} + +// CreateExportTaskWithContext is the same as CreateExportTask with the addition of +// the ability to pass a context and additional request options. +// +// See CreateExportTask for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) CreateExportTaskWithContext(ctx aws.Context, input *CreateExportTaskInput, opts ...request.Option) (*CreateExportTaskOutput, error) { + req, out := c.CreateExportTaskRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opCreateLogGroup = "CreateLogGroup" + +// CreateLogGroupRequest generates a "aws/request.Request" representing the +// client's request for the CreateLogGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateLogGroup for more information on using the CreateLogGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateLogGroupRequest method. +// req, resp := client.CreateLogGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/CreateLogGroup +func (c *CloudWatchLogs) CreateLogGroupRequest(input *CreateLogGroupInput) (req *request.Request, output *CreateLogGroupOutput) { + op := &request.Operation{ + Name: opCreateLogGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateLogGroupInput{} + } + + output = &CreateLogGroupOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// CreateLogGroup API operation for Amazon CloudWatch Logs. +// +// Creates a log group with the specified name. +// +// You can create up to 5000 log groups per account. +// +// You must use the following guidelines when naming a log group: +// +// * Log group names must be unique within a region for an AWS account. +// +// * Log group names can be between 1 and 512 characters long.
+// +// * Log group names consist of the following characters: a-z, A-Z, 0-9, +// '_' (underscore), '-' (hyphen), '/' (forward slash), and '.' (period). +// +// If you associate an AWS Key Management Service (AWS KMS) customer master key +// (CMK) with the log group, ingested data is encrypted using the CMK. This +// association is stored as long as the data encrypted with the CMK is still +// within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt +// this data whenever it is requested. +// +// If you attempt to associate a CMK with the log group but the CMK does not +// exist or the CMK is disabled, you will receive an InvalidParameterException +// error. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation CreateLogGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The specified resource already exists. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// You have reached the maximum number of resources that can be created. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request.
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/CreateLogGroup +func (c *CloudWatchLogs) CreateLogGroup(input *CreateLogGroupInput) (*CreateLogGroupOutput, error) { + req, out := c.CreateLogGroupRequest(input) + return out, req.Send() +} + +// CreateLogGroupWithContext is the same as CreateLogGroup with the addition of +// the ability to pass a context and additional request options. +// +// See CreateLogGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) CreateLogGroupWithContext(ctx aws.Context, input *CreateLogGroupInput, opts ...request.Option) (*CreateLogGroupOutput, error) { + req, out := c.CreateLogGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateLogStream = "CreateLogStream" + +// CreateLogStreamRequest generates a "aws/request.Request" representing the +// client's request for the CreateLogStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateLogStream for more information on using the CreateLogStream +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateLogStreamRequest method.
+// req, resp := client.CreateLogStreamRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/CreateLogStream +func (c *CloudWatchLogs) CreateLogStreamRequest(input *CreateLogStreamInput) (req *request.Request, output *CreateLogStreamOutput) { + op := &request.Operation{ + Name: opCreateLogStream, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateLogStreamInput{} + } + + output = &CreateLogStreamOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// CreateLogStream API operation for Amazon CloudWatch Logs. +// +// Creates a log stream for the specified log group. +// +// There is no limit on the number of log streams that you can create for a +// log group. +// +// You must use the following guidelines when naming a log stream: +// +// * Log stream names must be unique within the log group. +// +// * Log stream names can be between 1 and 512 characters long. +// +// * The ':' (colon) and '*' (asterisk) characters are not allowed. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation CreateLogStream for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceAlreadyExistsException "ResourceAlreadyExistsException" +// The specified resource already exists. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. 
+// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/CreateLogStream +func (c *CloudWatchLogs) CreateLogStream(input *CreateLogStreamInput) (*CreateLogStreamOutput, error) { + req, out := c.CreateLogStreamRequest(input) + return out, req.Send() +} + +// CreateLogStreamWithContext is the same as CreateLogStream with the addition of +// the ability to pass a context and additional request options. +// +// See CreateLogStream for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) CreateLogStreamWithContext(ctx aws.Context, input *CreateLogStreamInput, opts ...request.Option) (*CreateLogStreamOutput, error) { + req, out := c.CreateLogStreamRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteDestination = "DeleteDestination" + +// DeleteDestinationRequest generates a "aws/request.Request" representing the +// client's request for the DeleteDestination operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteDestination for more information on using the DeleteDestination +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+// +// +// // Example sending a request using the DeleteDestinationRequest method. +// req, resp := client.DeleteDestinationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteDestination +func (c *CloudWatchLogs) DeleteDestinationRequest(input *DeleteDestinationInput) (req *request.Request, output *DeleteDestinationOutput) { + op := &request.Operation{ + Name: opDeleteDestination, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteDestinationInput{} + } + + output = &DeleteDestinationOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteDestination API operation for Amazon CloudWatch Logs. +// +// Deletes the specified destination, and eventually disables all the subscription +// filters that publish to it. This operation does not delete the physical resource +// encapsulated by the destination. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DeleteDestination for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteDestination +func (c *CloudWatchLogs) DeleteDestination(input *DeleteDestinationInput) (*DeleteDestinationOutput, error) { + req, out := c.DeleteDestinationRequest(input) + return out, req.Send() +} + +// DeleteDestinationWithContext is the same as DeleteDestination with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteDestination for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DeleteDestinationWithContext(ctx aws.Context, input *DeleteDestinationInput, opts ...request.Option) (*DeleteDestinationOutput, error) { + req, out := c.DeleteDestinationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteLogGroup = "DeleteLogGroup" + +// DeleteLogGroupRequest generates a "aws/request.Request" representing the +// client's request for the DeleteLogGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteLogGroup for more information on using the DeleteLogGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteLogGroupRequest method.
+// req, resp := client.DeleteLogGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteLogGroup +func (c *CloudWatchLogs) DeleteLogGroupRequest(input *DeleteLogGroupInput) (req *request.Request, output *DeleteLogGroupOutput) { + op := &request.Operation{ + Name: opDeleteLogGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteLogGroupInput{} + } + + output = &DeleteLogGroupOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteLogGroup API operation for Amazon CloudWatch Logs. +// +// Deletes the specified log group and permanently deletes all the archived +// log events associated with the log group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DeleteLogGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteLogGroup +func (c *CloudWatchLogs) DeleteLogGroup(input *DeleteLogGroupInput) (*DeleteLogGroupOutput, error) { + req, out := c.DeleteLogGroupRequest(input) + return out, req.Send() +} + +// DeleteLogGroupWithContext is the same as DeleteLogGroup with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLogGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DeleteLogGroupWithContext(ctx aws.Context, input *DeleteLogGroupInput, opts ...request.Option) (*DeleteLogGroupOutput, error) { + req, out := c.DeleteLogGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteLogStream = "DeleteLogStream" + +// DeleteLogStreamRequest generates a "aws/request.Request" representing the +// client's request for the DeleteLogStream operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteLogStream for more information on using the DeleteLogStream +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteLogStreamRequest method.
+// req, resp := client.DeleteLogStreamRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteLogStream +func (c *CloudWatchLogs) DeleteLogStreamRequest(input *DeleteLogStreamInput) (req *request.Request, output *DeleteLogStreamOutput) { + op := &request.Operation{ + Name: opDeleteLogStream, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteLogStreamInput{} + } + + output = &DeleteLogStreamOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteLogStream API operation for Amazon CloudWatch Logs. +// +// Deletes the specified log stream and permanently deletes all the archived +// log events associated with the log stream. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DeleteLogStream for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteLogStream +func (c *CloudWatchLogs) DeleteLogStream(input *DeleteLogStreamInput) (*DeleteLogStreamOutput, error) { + req, out := c.DeleteLogStreamRequest(input) + return out, req.Send() +} + +// DeleteLogStreamWithContext is the same as DeleteLogStream with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteLogStream for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DeleteLogStreamWithContext(ctx aws.Context, input *DeleteLogStreamInput, opts ...request.Option) (*DeleteLogStreamOutput, error) { + req, out := c.DeleteLogStreamRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteMetricFilter = "DeleteMetricFilter" + +// DeleteMetricFilterRequest generates a "aws/request.Request" representing the +// client's request for the DeleteMetricFilter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteMetricFilter for more information on using the DeleteMetricFilter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteMetricFilterRequest method.
+// req, resp := client.DeleteMetricFilterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteMetricFilter +func (c *CloudWatchLogs) DeleteMetricFilterRequest(input *DeleteMetricFilterInput) (req *request.Request, output *DeleteMetricFilterOutput) { + op := &request.Operation{ + Name: opDeleteMetricFilter, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteMetricFilterInput{} + } + + output = &DeleteMetricFilterOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteMetricFilter API operation for Amazon CloudWatch Logs. +// +// Deletes the specified metric filter. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DeleteMetricFilter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteMetricFilter +func (c *CloudWatchLogs) DeleteMetricFilter(input *DeleteMetricFilterInput) (*DeleteMetricFilterOutput, error) { + req, out := c.DeleteMetricFilterRequest(input) + return out, req.Send() +} + +// DeleteMetricFilterWithContext is the same as DeleteMetricFilter with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteMetricFilter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DeleteMetricFilterWithContext(ctx aws.Context, input *DeleteMetricFilterInput, opts ...request.Option) (*DeleteMetricFilterOutput, error) { + req, out := c.DeleteMetricFilterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteResourcePolicy = "DeleteResourcePolicy" + +// DeleteResourcePolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteResourcePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See DeleteResourcePolicy for more information on using the DeleteResourcePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the DeleteResourcePolicyRequest method.
+// req, resp := client.DeleteResourcePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteResourcePolicy +func (c *CloudWatchLogs) DeleteResourcePolicyRequest(input *DeleteResourcePolicyInput) (req *request.Request, output *DeleteResourcePolicyOutput) { + op := &request.Operation{ + Name: opDeleteResourcePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteResourcePolicyInput{} + } + + output = &DeleteResourcePolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteResourcePolicy API operation for Amazon CloudWatch Logs. +// +// Deletes a resource policy from this account. This revokes the access of the +// identities in that policy to put log events to this account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DeleteResourcePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteResourcePolicy +func (c *CloudWatchLogs) DeleteResourcePolicy(input *DeleteResourcePolicyInput) (*DeleteResourcePolicyOutput, error) { + req, out := c.DeleteResourcePolicyRequest(input) + return out, req.Send() +} + +// DeleteResourcePolicyWithContext is the same as DeleteResourcePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteResourcePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DeleteResourcePolicyWithContext(ctx aws.Context, input *DeleteResourcePolicyInput, opts ...request.Option) (*DeleteResourcePolicyOutput, error) { + req, out := c.DeleteResourcePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteRetentionPolicy = "DeleteRetentionPolicy" + +// DeleteRetentionPolicyRequest generates a "aws/request.Request" representing the +// client's request for the DeleteRetentionPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See DeleteRetentionPolicy for more information on using the DeleteRetentionPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the DeleteRetentionPolicyRequest method.
+// req, resp := client.DeleteRetentionPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteRetentionPolicy +func (c *CloudWatchLogs) DeleteRetentionPolicyRequest(input *DeleteRetentionPolicyInput) (req *request.Request, output *DeleteRetentionPolicyOutput) { + op := &request.Operation{ + Name: opDeleteRetentionPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteRetentionPolicyInput{} + } + + output = &DeleteRetentionPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteRetentionPolicy API operation for Amazon CloudWatch Logs. +// +// Deletes the specified retention policy. +// +// Log events do not expire if they belong to log groups without a retention +// policy. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DeleteRetentionPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteRetentionPolicy +func (c *CloudWatchLogs) DeleteRetentionPolicy(input *DeleteRetentionPolicyInput) (*DeleteRetentionPolicyOutput, error) { + req, out := c.DeleteRetentionPolicyRequest(input) + return out, req.Send() +} + +// DeleteRetentionPolicyWithContext is the same as DeleteRetentionPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteRetentionPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DeleteRetentionPolicyWithContext(ctx aws.Context, input *DeleteRetentionPolicyInput, opts ...request.Option) (*DeleteRetentionPolicyOutput, error) { + req, out := c.DeleteRetentionPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteSubscriptionFilter = "DeleteSubscriptionFilter" + +// DeleteSubscriptionFilterRequest generates a "aws/request.Request" representing the +// client's request for the DeleteSubscriptionFilter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See DeleteSubscriptionFilter for more information on using the DeleteSubscriptionFilter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic.
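Every generated operation in this file follows the same two-step contract the doc comments describe: an XxxRequest method returns a request plus a not-yet-valid output, callers may attach custom logic before sending, and the output becomes valid only after Send returns without error. As a stdlib-only illustration of that contract (all types and names below are hypothetical stand-ins, not the SDK's real aws/request package):

```go
package main

import "fmt"

// Output stands in for a generated *XxxOutput struct; it is only
// populated once Send completes successfully.
type Output struct{ Done bool }

// Request mirrors, at a very high level, the shape of a request object:
// it holds the output pointer and a chain of handler funcs run by Send.
type Request struct {
	output   *Output
	handlers []func(*Request)
	err      error
}

// Send runs the handler chain; the output is not valid until Send
// returns without error.
func (r *Request) Send() error {
	for _, h := range r.handlers {
		h(r)
		if r.err != nil {
			return r.err
		}
	}
	return nil
}

// NewRequest plays the role of a generated XxxRequest method: it returns
// the request and the (not-yet-valid) output so the caller can inject
// custom behavior before sending.
func NewRequest() (*Request, *Output) {
	out := &Output{}
	req := &Request{output: out}
	// Default handler that "unmarshals" the response into the output.
	req.handlers = append(req.handlers, func(r *Request) { r.output.Done = true })
	return req, out
}

func main() {
	req, out := NewRequest()
	// Custom handlers (e.g. headers, retry logic) could be appended here,
	// before Send, which is the point of the two-step API.
	if err := req.Send(); err == nil { // out is now filled
		fmt.Println(out.Done)
	}
}
```

The one-step convenience methods (e.g. DeleteLogStream) are just this pattern with the Send call folded in.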
+// +// +// // Example sending a request using the DeleteSubscriptionFilterRequest method. +// req, resp := client.DeleteSubscriptionFilterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteSubscriptionFilter +func (c *CloudWatchLogs) DeleteSubscriptionFilterRequest(input *DeleteSubscriptionFilterInput) (req *request.Request, output *DeleteSubscriptionFilterOutput) { + op := &request.Operation{ + Name: opDeleteSubscriptionFilter, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteSubscriptionFilterInput{} + } + + output = &DeleteSubscriptionFilterOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DeleteSubscriptionFilter API operation for Amazon CloudWatch Logs. +// +// Deletes the specified subscription filter. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DeleteSubscriptionFilter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DeleteSubscriptionFilter +func (c *CloudWatchLogs) DeleteSubscriptionFilter(input *DeleteSubscriptionFilterInput) (*DeleteSubscriptionFilterOutput, error) { + req, out := c.DeleteSubscriptionFilterRequest(input) + return out, req.Send() +} + +// DeleteSubscriptionFilterWithContext is the same as DeleteSubscriptionFilter with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteSubscriptionFilter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DeleteSubscriptionFilterWithContext(ctx aws.Context, input *DeleteSubscriptionFilterInput, opts ...request.Option) (*DeleteSubscriptionFilterOutput, error) { + req, out := c.DeleteSubscriptionFilterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeDestinations = "DescribeDestinations" + +// DescribeDestinationsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDestinations operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See DescribeDestinations for more information on using the DescribeDestinations +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic.
+// +// +// // Example sending a request using the DescribeDestinationsRequest method. +// req, resp := client.DescribeDestinationsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeDestinations +func (c *CloudWatchLogs) DescribeDestinationsRequest(input *DescribeDestinationsInput) (req *request.Request, output *DescribeDestinationsOutput) { + op := &request.Operation{ + Name: opDescribeDestinations, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeDestinationsInput{} + } + + output = &DescribeDestinationsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeDestinations API operation for Amazon CloudWatch Logs. +// +// Lists all your destinations. The results are ASCII-sorted by destination +// name. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DescribeDestinations for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeDestinations +func (c *CloudWatchLogs) DescribeDestinations(input *DescribeDestinationsInput) (*DescribeDestinationsOutput, error) { + req, out := c.DescribeDestinationsRequest(input) + return out, req.Send() +} + +// DescribeDestinationsWithContext is the same as DescribeDestinations with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeDestinations for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeDestinationsWithContext(ctx aws.Context, input *DescribeDestinationsInput, opts ...request.Option) (*DescribeDestinationsOutput, error) { + req, out := c.DescribeDestinationsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeDestinationsPages iterates over the pages of a DescribeDestinations operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeDestinations method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeDestinations operation. 
+// pageNum := 0 +// err := client.DescribeDestinationsPages(params, +// func(page *DescribeDestinationsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudWatchLogs) DescribeDestinationsPages(input *DescribeDestinationsInput, fn func(*DescribeDestinationsOutput, bool) bool) error { + return c.DescribeDestinationsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeDestinationsPagesWithContext same as DescribeDestinationsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeDestinationsPagesWithContext(ctx aws.Context, input *DescribeDestinationsInput, fn func(*DescribeDestinationsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeDestinationsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeDestinationsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeDestinationsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeExportTasks = "DescribeExportTasks" + +// DescribeExportTasksRequest generates a "aws/request.Request" representing the +// client's request for the DescribeExportTasks operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error.
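The DescribeDestinationsPages implementation above defines the paging contract used by every XxxPages method in this file: the callback is invoked once per page with a lastPage flag, and returning false stops iteration early. A self-contained sketch of just that callback contract (page and forEachPage are illustrative stand-ins, not SDK types):

```go
package main

import "fmt"

// page stands in for a generated *DescribeXxxOutput page of results.
type page struct{ items []string }

// forEachPage mimics the generated XxxPages contract: fn receives each
// page plus a lastPage flag, and iteration stops when fn returns false.
func forEachPage(pages []page, fn func(p page, lastPage bool) bool) {
	for i, p := range pages {
		if !fn(p, i == len(pages)-1) {
			return
		}
	}
}

func main() {
	pages := []page{{[]string{"a"}}, {[]string{"b"}}, {[]string{"c"}}}
	pageNum := 0
	forEachPage(pages, func(p page, lastPage bool) bool {
		pageNum++
		fmt.Println(p.items, lastPage)
		return pageNum <= 2 // cap the number of pages visited, as in the doc example
	})
}
```

In the real SDK the pages come from successive service requests driven by request.Pagination, not a slice, but the callback semantics are the same.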
+// +// See DescribeExportTasks for more information on using the DescribeExportTasks +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the DescribeExportTasksRequest method. +// req, resp := client.DescribeExportTasksRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeExportTasks +func (c *CloudWatchLogs) DescribeExportTasksRequest(input *DescribeExportTasksInput) (req *request.Request, output *DescribeExportTasksOutput) { + op := &request.Operation{ + Name: opDescribeExportTasks, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeExportTasksInput{} + } + + output = &DescribeExportTasksOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeExportTasks API operation for Amazon CloudWatch Logs. +// +// Lists the specified export tasks. You can list all your export tasks or filter +// the results based on task ID or task status. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DescribeExportTasks for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request.
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeExportTasks +func (c *CloudWatchLogs) DescribeExportTasks(input *DescribeExportTasksInput) (*DescribeExportTasksOutput, error) { + req, out := c.DescribeExportTasksRequest(input) + return out, req.Send() +} + +// DescribeExportTasksWithContext is the same as DescribeExportTasks with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeExportTasks for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeExportTasksWithContext(ctx aws.Context, input *DescribeExportTasksInput, opts ...request.Option) (*DescribeExportTasksOutput, error) { + req, out := c.DescribeExportTasksRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeLogGroups = "DescribeLogGroups" + +// DescribeLogGroupsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeLogGroups operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See DescribeLogGroups for more information on using the DescribeLogGroups +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the DescribeLogGroupsRequest method.
+// req, resp := client.DescribeLogGroupsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeLogGroups +func (c *CloudWatchLogs) DescribeLogGroupsRequest(input *DescribeLogGroupsInput) (req *request.Request, output *DescribeLogGroupsOutput) { + op := &request.Operation{ + Name: opDescribeLogGroups, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeLogGroupsInput{} + } + + output = &DescribeLogGroupsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeLogGroups API operation for Amazon CloudWatch Logs. +// +// Lists the specified log groups. You can list all your log groups or filter +// the results by prefix. The results are ASCII-sorted by log group name. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DescribeLogGroups for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeLogGroups +func (c *CloudWatchLogs) DescribeLogGroups(input *DescribeLogGroupsInput) (*DescribeLogGroupsOutput, error) { + req, out := c.DescribeLogGroupsRequest(input) + return out, req.Send() +} + +// DescribeLogGroupsWithContext is the same as DescribeLogGroups with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeLogGroups for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeLogGroupsWithContext(ctx aws.Context, input *DescribeLogGroupsInput, opts ...request.Option) (*DescribeLogGroupsOutput, error) { + req, out := c.DescribeLogGroupsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeLogGroupsPages iterates over the pages of a DescribeLogGroups operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeLogGroups method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeLogGroups operation. 
+// pageNum := 0 +// err := client.DescribeLogGroupsPages(params, +// func(page *DescribeLogGroupsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudWatchLogs) DescribeLogGroupsPages(input *DescribeLogGroupsInput, fn func(*DescribeLogGroupsOutput, bool) bool) error { + return c.DescribeLogGroupsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeLogGroupsPagesWithContext same as DescribeLogGroupsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeLogGroupsPagesWithContext(ctx aws.Context, input *DescribeLogGroupsInput, fn func(*DescribeLogGroupsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeLogGroupsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeLogGroupsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeLogGroupsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeLogStreams = "DescribeLogStreams" + +// DescribeLogStreamsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeLogStreams operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error.
+// +// See DescribeLogStreams for more information on using the DescribeLogStreams +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the DescribeLogStreamsRequest method. +// req, resp := client.DescribeLogStreamsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeLogStreams +func (c *CloudWatchLogs) DescribeLogStreamsRequest(input *DescribeLogStreamsInput) (req *request.Request, output *DescribeLogStreamsOutput) { + op := &request.Operation{ + Name: opDescribeLogStreams, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeLogStreamsInput{} + } + + output = &DescribeLogStreamsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeLogStreams API operation for Amazon CloudWatch Logs. +// +// Lists the log streams for the specified log group. You can list all the log +// streams or filter the results by prefix. You can also control how the results +// are ordered. +// +// This operation has a limit of five transactions per second, after which transactions +// are throttled. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DescribeLogStreams for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly.
+// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeLogStreams +func (c *CloudWatchLogs) DescribeLogStreams(input *DescribeLogStreamsInput) (*DescribeLogStreamsOutput, error) { + req, out := c.DescribeLogStreamsRequest(input) + return out, req.Send() +} + +// DescribeLogStreamsWithContext is the same as DescribeLogStreams with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeLogStreams for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeLogStreamsWithContext(ctx aws.Context, input *DescribeLogStreamsInput, opts ...request.Option) (*DescribeLogStreamsOutput, error) { + req, out := c.DescribeLogStreamsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeLogStreamsPages iterates over the pages of a DescribeLogStreams operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeLogStreams method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeLogStreams operation. 
+// pageNum := 0 +// err := client.DescribeLogStreamsPages(params, +// func(page *DescribeLogStreamsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudWatchLogs) DescribeLogStreamsPages(input *DescribeLogStreamsInput, fn func(*DescribeLogStreamsOutput, bool) bool) error { + return c.DescribeLogStreamsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeLogStreamsPagesWithContext same as DescribeLogStreamsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeLogStreamsPagesWithContext(ctx aws.Context, input *DescribeLogStreamsInput, fn func(*DescribeLogStreamsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeLogStreamsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeLogStreamsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeLogStreamsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeMetricFilters = "DescribeMetricFilters" + +// DescribeMetricFiltersRequest generates a "aws/request.Request" representing the +// client's request for the DescribeMetricFilters operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
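The `*Pages` helpers above share one callback contract: the SDK keeps fetching pages while `fn` returns true, and passes `lastPage == true` on the final page (visible in the `cont = fn(p.Page().(...), !p.HasNextPage())` loop). The sketch below illustrates that contract without the SDK; `fakePager` and `iterate` are hypothetical stand-ins for `request.Pagination` and the generated loop, not real SDK types.

```go
package main

import "fmt"

// page stands in for an output type such as DescribeLogStreamsOutput.
type page struct{ items []string }

// fakePager mimics the request.Pagination surface the loop above uses:
// Next advances, Page returns the current page, HasNextPage reports
// whether more pages remain.
type fakePager struct {
	pages []page
	idx   int
}

func (p *fakePager) Next() bool        { p.idx++; return p.idx <= len(p.pages) }
func (p *fakePager) Page() page        { return p.pages[p.idx-1] }
func (p *fakePager) HasNextPage() bool { return p.idx < len(p.pages) }

// iterate mirrors the generated loop: keep going while both the pager
// has pages and the callback returns true; lastPage is !HasNextPage().
func iterate(p *fakePager, fn func(pg page, lastPage bool) bool) {
	cont := true
	for p.Next() && cont {
		cont = fn(p.Page(), !p.HasNextPage())
	}
}

func main() {
	p := &fakePager{pages: []page{{[]string{"a"}}, {[]string{"b"}}, {[]string{"c"}}}}
	visited := 0
	iterate(p, func(pg page, lastPage bool) bool {
		visited++
		return visited < 2 // returning false stops iteration early
	})
	fmt.Println(visited) // the callback ran for exactly 2 pages
}
```

Returning false from the callback is the only way to stop early; returning true on every page simply drains the paginator.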
+// +// See DescribeMetricFilters for more information on using the DescribeMetricFilters +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeMetricFiltersRequest method. +// req, resp := client.DescribeMetricFiltersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeMetricFilters +func (c *CloudWatchLogs) DescribeMetricFiltersRequest(input *DescribeMetricFiltersInput) (req *request.Request, output *DescribeMetricFiltersOutput) { + op := &request.Operation{ + Name: opDescribeMetricFilters, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeMetricFiltersInput{} + } + + output = &DescribeMetricFiltersOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeMetricFilters API operation for Amazon CloudWatch Logs. +// +// Lists the specified metric filters. You can list all the metric filters or +// filter the results by log name, prefix, metric name, or metric namespace. +// The results are ASCII-sorted by filter name. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DescribeMetricFilters for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. 
+// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeMetricFilters +func (c *CloudWatchLogs) DescribeMetricFilters(input *DescribeMetricFiltersInput) (*DescribeMetricFiltersOutput, error) { + req, out := c.DescribeMetricFiltersRequest(input) + return out, req.Send() +} + +// DescribeMetricFiltersWithContext is the same as DescribeMetricFilters with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeMetricFilters for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeMetricFiltersWithContext(ctx aws.Context, input *DescribeMetricFiltersInput, opts ...request.Option) (*DescribeMetricFiltersOutput, error) { + req, out := c.DescribeMetricFiltersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeMetricFiltersPages iterates over the pages of a DescribeMetricFilters operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeMetricFilters method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeMetricFilters operation. 
+// pageNum := 0 +// err := client.DescribeMetricFiltersPages(params, +// func(page *DescribeMetricFiltersOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudWatchLogs) DescribeMetricFiltersPages(input *DescribeMetricFiltersInput, fn func(*DescribeMetricFiltersOutput, bool) bool) error { + return c.DescribeMetricFiltersPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeMetricFiltersPagesWithContext same as DescribeMetricFiltersPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeMetricFiltersPagesWithContext(ctx aws.Context, input *DescribeMetricFiltersInput, fn func(*DescribeMetricFiltersOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeMetricFiltersInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeMetricFiltersRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeMetricFiltersOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDescribeResourcePolicies = "DescribeResourcePolicies" + +// DescribeResourcePoliciesRequest generates a "aws/request.Request" representing the +// client's request for the DescribeResourcePolicies operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DescribeResourcePolicies for more information on using the DescribeResourcePolicies +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeResourcePoliciesRequest method. +// req, resp := client.DescribeResourcePoliciesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeResourcePolicies +func (c *CloudWatchLogs) DescribeResourcePoliciesRequest(input *DescribeResourcePoliciesInput) (req *request.Request, output *DescribeResourcePoliciesOutput) { + op := &request.Operation{ + Name: opDescribeResourcePolicies, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DescribeResourcePoliciesInput{} + } + + output = &DescribeResourcePoliciesOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeResourcePolicies API operation for Amazon CloudWatch Logs. +// +// Lists the resource policies in this account. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DescribeResourcePolicies for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
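Each operation's comments above say to "use runtime type assertions with awserr.Error's Code and Message methods" and then branch on the listed error codes. The stand-alone sketch below shows that pattern without importing the SDK; `apiError` and `svcError` are hypothetical stand-ins for `awserr.Error` and a concrete service error, and the code strings come from the "Returned Error Codes" lists above.

```go
package main

import "fmt"

// apiError mimics the awserr.Error surface referenced in the doc comments.
type apiError interface {
	error
	Code() string
	Message() string
}

// svcError is a stand-in for a concrete SDK service error.
type svcError struct{ code, msg string }

func (e svcError) Error() string   { return e.code + ": " + e.msg }
func (e svcError) Code() string    { return e.code }
func (e svcError) Message() string { return e.msg }

// classify shows the recommended shape: assert to the error interface,
// then switch on Code(), e.g. after a DescribeResourcePolicies call.
func classify(err error) string {
	ae, ok := err.(apiError)
	if !ok {
		return "not a service error"
	}
	switch ae.Code() {
	case "ResourceNotFoundException":
		return "missing resource: " + ae.Message()
	case "ServiceUnavailableException":
		return "retry later"
	default:
		return "unhandled: " + ae.Code()
	}
}

func main() {
	fmt.Println(classify(svcError{"ResourceNotFoundException", "no such log group"}))
}
```

With the real SDK the assertion target would be `awserr.Error`; everything else about the branching is the same.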
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeResourcePolicies +func (c *CloudWatchLogs) DescribeResourcePolicies(input *DescribeResourcePoliciesInput) (*DescribeResourcePoliciesOutput, error) { + req, out := c.DescribeResourcePoliciesRequest(input) + return out, req.Send() +} + +// DescribeResourcePoliciesWithContext is the same as DescribeResourcePolicies with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeResourcePolicies for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeResourcePoliciesWithContext(ctx aws.Context, input *DescribeResourcePoliciesInput, opts ...request.Option) (*DescribeResourcePoliciesOutput, error) { + req, out := c.DescribeResourcePoliciesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeSubscriptionFilters = "DescribeSubscriptionFilters" + +// DescribeSubscriptionFiltersRequest generates a "aws/request.Request" representing the +// client's request for the DescribeSubscriptionFilters operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeSubscriptionFilters for more information on using the DescribeSubscriptionFilters +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. 
+// +// +// // Example sending a request using the DescribeSubscriptionFiltersRequest method. +// req, resp := client.DescribeSubscriptionFiltersRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeSubscriptionFilters +func (c *CloudWatchLogs) DescribeSubscriptionFiltersRequest(input *DescribeSubscriptionFiltersInput) (req *request.Request, output *DescribeSubscriptionFiltersOutput) { + op := &request.Operation{ + Name: opDescribeSubscriptionFilters, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeSubscriptionFiltersInput{} + } + + output = &DescribeSubscriptionFiltersOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeSubscriptionFilters API operation for Amazon CloudWatch Logs. +// +// Lists the subscription filters for the specified log group. You can list +// all the subscription filters or filter the results by prefix. The results +// are ASCII-sorted by filter name. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DescribeSubscriptionFilters for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DescribeSubscriptionFilters +func (c *CloudWatchLogs) DescribeSubscriptionFilters(input *DescribeSubscriptionFiltersInput) (*DescribeSubscriptionFiltersOutput, error) { + req, out := c.DescribeSubscriptionFiltersRequest(input) + return out, req.Send() +} + +// DescribeSubscriptionFiltersWithContext is the same as DescribeSubscriptionFilters with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeSubscriptionFilters for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeSubscriptionFiltersWithContext(ctx aws.Context, input *DescribeSubscriptionFiltersInput, opts ...request.Option) (*DescribeSubscriptionFiltersOutput, error) { + req, out := c.DescribeSubscriptionFiltersRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeSubscriptionFiltersPages iterates over the pages of a DescribeSubscriptionFilters operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeSubscriptionFilters method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeSubscriptionFilters operation. 
+// pageNum := 0 +// err := client.DescribeSubscriptionFiltersPages(params, +// func(page *DescribeSubscriptionFiltersOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudWatchLogs) DescribeSubscriptionFiltersPages(input *DescribeSubscriptionFiltersInput, fn func(*DescribeSubscriptionFiltersOutput, bool) bool) error { + return c.DescribeSubscriptionFiltersPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeSubscriptionFiltersPagesWithContext same as DescribeSubscriptionFiltersPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DescribeSubscriptionFiltersPagesWithContext(ctx aws.Context, input *DescribeSubscriptionFiltersInput, fn func(*DescribeSubscriptionFiltersOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeSubscriptionFiltersInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeSubscriptionFiltersRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*DescribeSubscriptionFiltersOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opDisassociateKmsKey = "DisassociateKmsKey" + +// DisassociateKmsKeyRequest generates a "aws/request.Request" representing the +// client's request for the DisassociateKmsKey operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DisassociateKmsKey for more information on using the DisassociateKmsKey +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DisassociateKmsKeyRequest method. +// req, resp := client.DisassociateKmsKeyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DisassociateKmsKey +func (c *CloudWatchLogs) DisassociateKmsKeyRequest(input *DisassociateKmsKeyInput) (req *request.Request, output *DisassociateKmsKeyOutput) { + op := &request.Operation{ + Name: opDisassociateKmsKey, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DisassociateKmsKeyInput{} + } + + output = &DisassociateKmsKeyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// DisassociateKmsKey API operation for Amazon CloudWatch Logs. +// +// Disassociates the associated AWS Key Management Service (AWS KMS) customer +// master key (CMK) from the specified log group. +// +// After the AWS KMS CMK is disassociated from the log group, AWS CloudWatch +// Logs stops encrypting newly ingested data for the log group. All previously +// ingested data remains encrypted, and AWS CloudWatch Logs requires permissions +// for the CMK whenever the encrypted data is requested. +// +// Note that it can take up to 5 minutes for this operation to take effect. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation DisassociateKmsKey for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/DisassociateKmsKey +func (c *CloudWatchLogs) DisassociateKmsKey(input *DisassociateKmsKeyInput) (*DisassociateKmsKeyOutput, error) { + req, out := c.DisassociateKmsKeyRequest(input) + return out, req.Send() +} + +// DisassociateKmsKeyWithContext is the same as DisassociateKmsKey with the addition of +// the ability to pass a context and additional request options. +// +// See DisassociateKmsKey for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) DisassociateKmsKeyWithContext(ctx aws.Context, input *DisassociateKmsKeyInput, opts ...request.Option) (*DisassociateKmsKeyOutput, error) { + req, out := c.DisassociateKmsKeyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opFilterLogEvents = "FilterLogEvents" + +// FilterLogEventsRequest generates a "aws/request.Request" representing the +// client's request for the FilterLogEvents operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See FilterLogEvents for more information on using the FilterLogEvents +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the FilterLogEventsRequest method. +// req, resp := client.FilterLogEventsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/FilterLogEvents +func (c *CloudWatchLogs) FilterLogEventsRequest(input *FilterLogEventsInput) (req *request.Request, output *FilterLogEventsOutput) { + op := &request.Operation{ + Name: opFilterLogEvents, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextToken"}, + LimitToken: "limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &FilterLogEventsInput{} + } + + output = &FilterLogEventsOutput{} + req = c.newRequest(op, input, output) + return +} + +// FilterLogEvents API operation for Amazon CloudWatch Logs. +// +// Lists log events from the specified log group. You can list all the log events +// or filter the results using a filter pattern, a time range, and the name +// of the log stream. 
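The `Paginator` configured in `FilterLogEventsRequest` above (`InputTokens` and `OutputTokens` both `"nextToken"`) encodes a simple token flow: copy the response's `nextToken` into the next request until the service stops returning one. The dependency-free sketch below makes that flow explicit; `fetch`, `out`, and `collect` are hypothetical stand-ins for one FilterLogEvents round trip, its output, and the driving loop.

```go
package main

import "fmt"

// out stands in for FilterLogEventsOutput: a batch of events plus the
// continuation token; an empty token means no more pages.
type out struct {
	events    []string
	nextToken string
}

// collect drives fetch with the returned token until it is exhausted,
// mirroring what the SDK's Pagination does from the config above.
func collect(fetch func(token string) out) []string {
	var all []string
	token := ""
	for {
		o := fetch(token)
		all = append(all, o.events...)
		if o.nextToken == "" {
			return all
		}
		token = o.nextToken
	}
}

func main() {
	// Fake three pages keyed by the token that requests them.
	pages := map[string]out{
		"":   {[]string{"e1", "e2"}, "t1"},
		"t1": {[]string{"e3"}, "t2"},
		"t2": {[]string{"e4"}, ""},
	}
	got := collect(func(token string) out { return pages[token] })
	fmt.Println(len(got)) // all four events across three pages
}
```

In practice the `FilterLogEventsPages` helpers run this loop for you; writing it manually is only needed when you want control the callback API doesn't give (e.g. resuming from a saved token).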
+// +// By default, this operation returns as many log events as can fit in 1 MB +// (up to 10,000 log events), or all the events found within the time range +// that you specify. If the results include a token, then there are more log +// events available, and you can get additional results by specifying the token +// in a subsequent call. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation FilterLogEvents for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/FilterLogEvents +func (c *CloudWatchLogs) FilterLogEvents(input *FilterLogEventsInput) (*FilterLogEventsOutput, error) { + req, out := c.FilterLogEventsRequest(input) + return out, req.Send() +} + +// FilterLogEventsWithContext is the same as FilterLogEvents with the addition of +// the ability to pass a context and additional request options. +// +// See FilterLogEvents for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *CloudWatchLogs) FilterLogEventsWithContext(ctx aws.Context, input *FilterLogEventsInput, opts ...request.Option) (*FilterLogEventsOutput, error) { + req, out := c.FilterLogEventsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// FilterLogEventsPages iterates over the pages of a FilterLogEvents operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See FilterLogEvents method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a FilterLogEvents operation. +// pageNum := 0 +// err := client.FilterLogEventsPages(params, +// func(page *FilterLogEventsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudWatchLogs) FilterLogEventsPages(input *FilterLogEventsInput, fn func(*FilterLogEventsOutput, bool) bool) error { + return c.FilterLogEventsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// FilterLogEventsPagesWithContext same as FilterLogEventsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) FilterLogEventsPagesWithContext(ctx aws.Context, input *FilterLogEventsInput, fn func(*FilterLogEventsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *FilterLogEventsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.FilterLogEventsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*FilterLogEventsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opGetLogEvents = "GetLogEvents" + +// GetLogEventsRequest generates a "aws/request.Request" representing the +// client's request for the GetLogEvents operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetLogEvents for more information on using the GetLogEvents +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetLogEventsRequest method. +// req, resp := client.GetLogEventsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/GetLogEvents +func (c *CloudWatchLogs) GetLogEventsRequest(input *GetLogEventsInput) (req *request.Request, output *GetLogEventsOutput) { + op := &request.Operation{ + Name: opGetLogEvents, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"nextToken"}, + OutputTokens: []string{"nextForwardToken"}, + LimitToken: "limit", + TruncationToken: "", + }, + } + + if input == nil { + input = &GetLogEventsInput{} + } + + output = &GetLogEventsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetLogEvents API operation for Amazon CloudWatch Logs. +// +// Lists log events from the specified log stream. You can list all the log +// events or filter using a time range. 
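Unlike FilterLogEvents, the GetLogEvents paginator above reads its continuation token from `nextForwardToken`. Per the CloudWatch Logs API documentation, GetLogEvents signals the end of the stream by returning the same token you passed in, so a manual loop stops when the token repeats rather than when it is empty. The sketch below is dependency-free; `result`, `fetch`, and `readAll` are hypothetical stand-ins for the output type, one GetLogEvents call, and the driving loop.

```go
package main

import "fmt"

// result stands in for GetLogEventsOutput: a batch of events plus the
// forward token to pass on the next call.
type result struct {
	events      []string
	nextForward string
}

// readAll keeps fetching until the returned forward token equals the one
// that was passed in, which marks the end of the stream.
func readAll(fetch func(token string) result) []string {
	var all []string
	token := ""
	for {
		r := fetch(token)
		all = append(all, r.events...)
		if r.nextForward == token { // token repeated: end of stream
			return all
		}
		token = r.nextForward
	}
}

func main() {
	pages := map[string]result{
		"":   {[]string{"a"}, "f1"},
		"f1": {[]string{"b"}, "f2"},
		"f2": {nil, "f2"}, // same token comes back: done
	}
	fmt.Println(len(readAll(func(t string) result { return pages[t] })))
}
```

Testing for an empty token instead of a repeated one would loop forever against the real service, which is the main trap this pattern avoids.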
+// +// By default, this operation returns as many log events as can fit in a response +// size of 1MB (up to 10,000 log events). You can get additional log events +// by specifying one of the tokens in a subsequent call. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation GetLogEvents for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/GetLogEvents +func (c *CloudWatchLogs) GetLogEvents(input *GetLogEventsInput) (*GetLogEventsOutput, error) { + req, out := c.GetLogEventsRequest(input) + return out, req.Send() +} + +// GetLogEventsWithContext is the same as GetLogEvents with the addition of +// the ability to pass a context and additional request options. +// +// See GetLogEvents for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) GetLogEventsWithContext(ctx aws.Context, input *GetLogEventsInput, opts ...request.Option) (*GetLogEventsOutput, error) { + req, out := c.GetLogEventsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +// GetLogEventsPages iterates over the pages of a GetLogEvents operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See GetLogEvents method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a GetLogEvents operation. +// pageNum := 0 +// err := client.GetLogEventsPages(params, +// func(page *GetLogEventsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *CloudWatchLogs) GetLogEventsPages(input *GetLogEventsInput, fn func(*GetLogEventsOutput, bool) bool) error { + return c.GetLogEventsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// GetLogEventsPagesWithContext same as GetLogEventsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) GetLogEventsPagesWithContext(ctx aws.Context, input *GetLogEventsInput, fn func(*GetLogEventsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *GetLogEventsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.GetLogEventsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + cont := true + for p.Next() && cont { + cont = fn(p.Page().(*GetLogEventsOutput), !p.HasNextPage()) + } + return p.Err() +} + +const opListTagsLogGroup = "ListTagsLogGroup" + +// ListTagsLogGroupRequest generates a "aws/request.Request" representing the +// client's request for the ListTagsLogGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See ListTagsLogGroup for more information on using the ListTagsLogGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the ListTagsLogGroupRequest method. +// req, resp := client.ListTagsLogGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/ListTagsLogGroup +func (c *CloudWatchLogs) ListTagsLogGroupRequest(input *ListTagsLogGroupInput) (req *request.Request, output *ListTagsLogGroupOutput) { + op := &request.Operation{ + Name: opListTagsLogGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListTagsLogGroupInput{} + } + + output = &ListTagsLogGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListTagsLogGroup API operation for Amazon CloudWatch Logs. +// +// Lists the tags for the specified log group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error.
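The GetLogEventsPages contract documented earlier (the callback is invoked once per page with a lastPage flag, and iteration stops when it returns false) can be sketched independently of AWS as a small pure-Go loop. The `page` type and `forEachPage` helper below are hypothetical stand-ins for `*GetLogEventsOutput` and the generated pager, not SDK code:

```go
package main

import "fmt"

// page is a hypothetical stand-in for *GetLogEventsOutput in this sketch;
// the real SDK passes the service's output struct to the callback.
type page struct{ events []string }

// forEachPage mimics the GetLogEventsPages contract: fn is called once per
// page with a lastPage flag, and iteration stops as soon as fn returns false.
func forEachPage(pages []page, fn func(p page, lastPage bool) bool) {
	for i, p := range pages {
		if !fn(p, i == len(pages)-1) {
			return
		}
	}
}

func main() {
	pages := []page{{[]string{"a"}}, {[]string{"b"}}, {[]string{"c"}}}
	visited := 0
	forEachPage(pages, func(p page, lastPage bool) bool {
		visited++
		return visited < 2 // returning false here stops the pager early
	})
	fmt.Println("pages visited:", visited) // pages visited: 2
}
```

This mirrors why the generated `GetLogEventsPages` example returns `pageNum <= 3`: the boolean result is the only stop signal the pager consults.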
+// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation ListTagsLogGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/ListTagsLogGroup +func (c *CloudWatchLogs) ListTagsLogGroup(input *ListTagsLogGroupInput) (*ListTagsLogGroupOutput, error) { + req, out := c.ListTagsLogGroupRequest(input) + return out, req.Send() +} + +// ListTagsLogGroupWithContext is the same as ListTagsLogGroup with the addition of +// the ability to pass a context and additional request options. +// +// See ListTagsLogGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) ListTagsLogGroupWithContext(ctx aws.Context, input *ListTagsLogGroupInput, opts ...request.Option) (*ListTagsLogGroupOutput, error) { + req, out := c.ListTagsLogGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutDestination = "PutDestination" + +// PutDestinationRequest generates a "aws/request.Request" representing the +// client's request for the PutDestination operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error.
+// +// See PutDestination for more information on using the PutDestination +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the PutDestinationRequest method. +// req, resp := client.PutDestinationRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutDestination +func (c *CloudWatchLogs) PutDestinationRequest(input *PutDestinationInput) (req *request.Request, output *PutDestinationOutput) { + op := &request.Operation{ + Name: opPutDestination, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutDestinationInput{} + } + + output = &PutDestinationOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutDestination API operation for Amazon CloudWatch Logs. +// +// Creates or updates a destination. A destination encapsulates a physical resource +// (such as an Amazon Kinesis stream) and enables you to subscribe to a real-time +// stream of log events for a different account, ingested using PutLogEvents. +// Currently, the only supported physical resource is a Kinesis stream belonging +// to the same account as the destination. +// +// Through an access policy, a destination controls what is written to its Kinesis +// stream. By default, PutDestination does not set any access policy with the +// destination, which means a cross-account user cannot call PutSubscriptionFilter +// against this destination. To enable this, the destination owner must call +// PutDestinationPolicy after PutDestination. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error.
+// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation PutDestination for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutDestination +func (c *CloudWatchLogs) PutDestination(input *PutDestinationInput) (*PutDestinationOutput, error) { + req, out := c.PutDestinationRequest(input) + return out, req.Send() +} + +// PutDestinationWithContext is the same as PutDestination with the addition of +// the ability to pass a context and additional request options. +// +// See PutDestination for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) PutDestinationWithContext(ctx aws.Context, input *PutDestinationInput, opts ...request.Option) (*PutDestinationOutput, error) { + req, out := c.PutDestinationRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutDestinationPolicy = "PutDestinationPolicy" + +// PutDestinationPolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutDestinationPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service.
+// The "output" return value is not valid until after Send returns without error. +// +// See PutDestinationPolicy for more information on using the PutDestinationPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the PutDestinationPolicyRequest method. +// req, resp := client.PutDestinationPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutDestinationPolicy +func (c *CloudWatchLogs) PutDestinationPolicyRequest(input *PutDestinationPolicyInput) (req *request.Request, output *PutDestinationPolicyOutput) { + op := &request.Operation{ + Name: opPutDestinationPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutDestinationPolicyInput{} + } + + output = &PutDestinationPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutDestinationPolicy API operation for Amazon CloudWatch Logs. +// +// Creates or updates an access policy associated with an existing destination. +// An access policy is an IAM policy document (http://docs.aws.amazon.com/IAM/latest/UserGuide/policies_overview.html) +// that is used to authorize claims to register a subscription filter against +// a given destination. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation PutDestinationPolicy for usage and error information.
+// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutDestinationPolicy +func (c *CloudWatchLogs) PutDestinationPolicy(input *PutDestinationPolicyInput) (*PutDestinationPolicyOutput, error) { + req, out := c.PutDestinationPolicyRequest(input) + return out, req.Send() +} + +// PutDestinationPolicyWithContext is the same as PutDestinationPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutDestinationPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) PutDestinationPolicyWithContext(ctx aws.Context, input *PutDestinationPolicyInput, opts ...request.Option) (*PutDestinationPolicyOutput, error) { + req, out := c.PutDestinationPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutLogEvents = "PutLogEvents" + +// PutLogEventsRequest generates a "aws/request.Request" representing the +// client's request for the PutLogEvents operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error.
+// +// See PutLogEvents for more information on using the PutLogEvents +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the PutLogEventsRequest method. +// req, resp := client.PutLogEventsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutLogEvents +func (c *CloudWatchLogs) PutLogEventsRequest(input *PutLogEventsInput) (req *request.Request, output *PutLogEventsOutput) { + op := &request.Operation{ + Name: opPutLogEvents, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutLogEventsInput{} + } + + output = &PutLogEventsOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutLogEvents API operation for Amazon CloudWatch Logs. +// +// Uploads a batch of log events to the specified log stream. +// +// You must include the sequence token obtained from the response of the previous +// call. An upload in a newly created log stream does not require a sequence +// token. You can also get the sequence token using DescribeLogStreams. If you +// call PutLogEvents twice within a narrow time period using the same value +// for sequenceToken, both calls may be successful, or one may be rejected. +// +// The batch of events must satisfy the following constraints: +// +// * The maximum batch size is 1,048,576 bytes, and this size is calculated +// as the sum of all event messages in UTF-8, plus 26 bytes for each log +// event. +// +// * None of the log events in the batch can be more than 2 hours in the +// future. +// +// * None of the log events in the batch can be older than 14 days or the +// retention period of the log group.
+// +// * The log events in the batch must be in chronological order by their +// time stamp (the time the event occurred, expressed as the number of milliseconds +// after Jan 1, 1970 00:00:00 UTC). +// +// * The maximum number of log events in a batch is 10,000. +// +// * A batch of log events in a single request cannot span more than 24 hours. +// Otherwise, the operation fails. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation PutLogEvents for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeInvalidSequenceTokenException "InvalidSequenceTokenException" +// The sequence token is not valid. +// +// * ErrCodeDataAlreadyAcceptedException "DataAlreadyAcceptedException" +// The event was already logged. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutLogEvents +func (c *CloudWatchLogs) PutLogEvents(input *PutLogEventsInput) (*PutLogEventsOutput, error) { + req, out := c.PutLogEventsRequest(input) + return out, req.Send() +} + +// PutLogEventsWithContext is the same as PutLogEvents with the addition of +// the ability to pass a context and additional request options. +// +// See PutLogEvents for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests.
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) PutLogEventsWithContext(ctx aws.Context, input *PutLogEventsInput, opts ...request.Option) (*PutLogEventsOutput, error) { + req, out := c.PutLogEventsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutMetricFilter = "PutMetricFilter" + +// PutMetricFilterRequest generates a "aws/request.Request" representing the +// client's request for the PutMetricFilter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See PutMetricFilter for more information on using the PutMetricFilter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the PutMetricFilterRequest method. +// req, resp := client.PutMetricFilterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutMetricFilter +func (c *CloudWatchLogs) PutMetricFilterRequest(input *PutMetricFilterInput) (req *request.Request, output *PutMetricFilterOutput) { + op := &request.Operation{ + Name: opPutMetricFilter, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutMetricFilterInput{} + } + + output = &PutMetricFilterOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutMetricFilter API operation for Amazon CloudWatch Logs.
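The PutLogEvents batch constraints documented earlier (1,048,576-byte size cap computed as UTF-8 message bytes plus 26 bytes per event, 10,000-event cap, chronological ordering) lend themselves to a local pre-check before sending. The `logEvent` type and `validateBatch` helper below are hypothetical illustrations, not SDK code; the service performs its own validation regardless:

```go
package main

import (
	"errors"
	"fmt"
)

// logEvent mirrors the timestamp/message pair of an InputLogEvent; the type
// and the validateBatch helper are hypothetical, for illustration only.
type logEvent struct {
	timestampMs int64
	message     string
}

const (
	maxBatchBytes    = 1048576 // documented 1,048,576-byte batch limit
	perEventOverhead = 26      // documented 26 bytes of overhead per event
	maxBatchEvents   = 10000   // documented 10,000-event batch limit
)

// validateBatch checks the PutLogEvents batch constraints that can be verified
// locally: event count, total size, and chronological ordering by timestamp.
func validateBatch(events []logEvent) error {
	if len(events) > maxBatchEvents {
		return errors.New("batch exceeds 10,000 events")
	}
	size := 0
	for i, e := range events {
		size += len(e.message) + perEventOverhead // len() counts UTF-8 bytes
		if i > 0 && e.timestampMs < events[i-1].timestampMs {
			return errors.New("events are not in chronological order")
		}
	}
	if size > maxBatchBytes {
		return errors.New("batch exceeds 1,048,576 bytes")
	}
	return nil
}

func main() {
	fmt.Println(validateBatch([]logEvent{{1, "first"}, {2, "second"}}))
	fmt.Println(validateBatch([]logEvent{{2, "late"}, {1, "early"}}))
}
```

The age and clock-skew constraints (14 days back, 2 hours forward, 24-hour span) depend on wall-clock time and retention settings, so they are left out of this sketch.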
+// +// Creates or updates a metric filter and associates it with the specified log +// group. Metric filters allow you to configure rules to extract metric data +// from log events ingested through PutLogEvents. +// +// The maximum number of metric filters that can be associated with a log group +// is 100. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation PutMetricFilter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// You have reached the maximum number of resources that can be created. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutMetricFilter +func (c *CloudWatchLogs) PutMetricFilter(input *PutMetricFilterInput) (*PutMetricFilterOutput, error) { + req, out := c.PutMetricFilterRequest(input) + return out, req.Send() +} + +// PutMetricFilterWithContext is the same as PutMetricFilter with the addition of +// the ability to pass a context and additional request options. +// +// See PutMetricFilter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) PutMetricFilterWithContext(ctx aws.Context, input *PutMetricFilterInput, opts ...request.Option) (*PutMetricFilterOutput, error) { + req, out := c.PutMetricFilterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opPutResourcePolicy = "PutResourcePolicy" + +// PutResourcePolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutResourcePolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See PutResourcePolicy for more information on using the PutResourcePolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the PutResourcePolicyRequest method. +// req, resp := client.PutResourcePolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutResourcePolicy +func (c *CloudWatchLogs) PutResourcePolicyRequest(input *PutResourcePolicyInput) (req *request.Request, output *PutResourcePolicyOutput) { + op := &request.Operation{ + Name: opPutResourcePolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutResourcePolicyInput{} + } + + output = &PutResourcePolicyOutput{} + req = c.newRequest(op, input, output) + return +} + +// PutResourcePolicy API operation for Amazon CloudWatch Logs.
+// +// Creates or updates a resource policy allowing other AWS services to put log +// events to this account, such as Amazon Route 53. An account can have up to +// 50 resource policies per region. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation PutResourcePolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// You have reached the maximum number of resources that can be created. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutResourcePolicy +func (c *CloudWatchLogs) PutResourcePolicy(input *PutResourcePolicyInput) (*PutResourcePolicyOutput, error) { + req, out := c.PutResourcePolicyRequest(input) + return out, req.Send() +} + +// PutResourcePolicyWithContext is the same as PutResourcePolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutResourcePolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) PutResourcePolicyWithContext(ctx aws.Context, input *PutResourcePolicyInput, opts ...request.Option) (*PutResourcePolicyOutput, error) { + req, out := c.PutResourcePolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opPutRetentionPolicy = "PutRetentionPolicy" + +// PutRetentionPolicyRequest generates a "aws/request.Request" representing the +// client's request for the PutRetentionPolicy operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See PutRetentionPolicy for more information on using the PutRetentionPolicy +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the PutRetentionPolicyRequest method. +// req, resp := client.PutRetentionPolicyRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutRetentionPolicy +func (c *CloudWatchLogs) PutRetentionPolicyRequest(input *PutRetentionPolicyInput) (req *request.Request, output *PutRetentionPolicyOutput) { + op := &request.Operation{ + Name: opPutRetentionPolicy, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutRetentionPolicyInput{} + } + + output = &PutRetentionPolicyOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutRetentionPolicy API operation for Amazon CloudWatch Logs. +// +// Sets the retention of the specified log group. A retention policy allows +// you to configure the number of days for which to retain log events in the +// specified log group. +// +// Returns awserr.Error for service API and SDK errors.
Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation PutRetentionPolicy for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutRetentionPolicy +func (c *CloudWatchLogs) PutRetentionPolicy(input *PutRetentionPolicyInput) (*PutRetentionPolicyOutput, error) { + req, out := c.PutRetentionPolicyRequest(input) + return out, req.Send() +} + +// PutRetentionPolicyWithContext is the same as PutRetentionPolicy with the addition of +// the ability to pass a context and additional request options. +// +// See PutRetentionPolicy for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) PutRetentionPolicyWithContext(ctx aws.Context, input *PutRetentionPolicyInput, opts ...request.Option) (*PutRetentionPolicyOutput, error) { + req, out := c.PutRetentionPolicyRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opPutSubscriptionFilter = "PutSubscriptionFilter" + +// PutSubscriptionFilterRequest generates a "aws/request.Request" representing the +// client's request for the PutSubscriptionFilter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See PutSubscriptionFilter for more information on using the PutSubscriptionFilter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the PutSubscriptionFilterRequest method. +// req, resp := client.PutSubscriptionFilterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutSubscriptionFilter +func (c *CloudWatchLogs) PutSubscriptionFilterRequest(input *PutSubscriptionFilterInput) (req *request.Request, output *PutSubscriptionFilterOutput) { + op := &request.Operation{ + Name: opPutSubscriptionFilter, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &PutSubscriptionFilterInput{} + } + + output = &PutSubscriptionFilterOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// PutSubscriptionFilter API operation for Amazon CloudWatch Logs. +// +// Creates or updates a subscription filter and associates it with the specified +// log group.
Subscription filters allow you to subscribe to a real-time stream +// of log events ingested through PutLogEvents and have them delivered to a +// specific destination. Currently, the supported destinations are: +// +// * An Amazon Kinesis stream belonging to the same account as the subscription +// filter, for same-account delivery. +// +// * A logical destination that belongs to a different account, for cross-account +// delivery. +// +// * An Amazon Kinesis Firehose delivery stream that belongs to the same +// account as the subscription filter, for same-account delivery. +// +// * An AWS Lambda function that belongs to the same account as the subscription +// filter, for same-account delivery. +// +// There can only be one subscription filter associated with a log group. If +// you are updating an existing filter, you must specify the correct name in +// filterName. Otherwise, the call fails because you cannot associate a second +// filter with a log group. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation PutSubscriptionFilter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeOperationAbortedException "OperationAbortedException" +// Multiple requests to update the same resource were in conflict. +// +// * ErrCodeLimitExceededException "LimitExceededException" +// You have reached the maximum number of resources that can be created. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/PutSubscriptionFilter +func (c *CloudWatchLogs) PutSubscriptionFilter(input *PutSubscriptionFilterInput) (*PutSubscriptionFilterOutput, error) { + req, out := c.PutSubscriptionFilterRequest(input) + return out, req.Send() +} + +// PutSubscriptionFilterWithContext is the same as PutSubscriptionFilter with the addition of +// the ability to pass a context and additional request options. +// +// See PutSubscriptionFilter for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) PutSubscriptionFilterWithContext(ctx aws.Context, input *PutSubscriptionFilterInput, opts ...request.Option) (*PutSubscriptionFilterOutput, error) { + req, out := c.PutSubscriptionFilterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opTagLogGroup = "TagLogGroup" + +// TagLogGroupRequest generates a "aws/request.Request" representing the +// client's request for the TagLogGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See TagLogGroup for more information on using the TagLogGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the TagLogGroupRequest method.
+// req, resp := client.TagLogGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/TagLogGroup +func (c *CloudWatchLogs) TagLogGroupRequest(input *TagLogGroupInput) (req *request.Request, output *TagLogGroupOutput) { + op := &request.Operation{ + Name: opTagLogGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &TagLogGroupInput{} + } + + output = &TagLogGroupOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// TagLogGroup API operation for Amazon CloudWatch Logs. +// +// Adds or updates the specified tags for the specified log group. +// +// To list the tags for a log group, use ListTagsLogGroup. To remove tags, use +// UntagLogGroup. +// +// For more information about tags, see Tag Log Groups in Amazon CloudWatch +// Logs (http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/log-group-tagging.html) +// in the Amazon CloudWatch Logs User Guide. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation TagLogGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/TagLogGroup +func (c *CloudWatchLogs) TagLogGroup(input *TagLogGroupInput) (*TagLogGroupOutput, error) { + req, out := c.TagLogGroupRequest(input) + return out, req.Send() +} + +// TagLogGroupWithContext is the same as TagLogGroup with the addition of +// the ability to pass a context and additional request options. +// +// See TagLogGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) TagLogGroupWithContext(ctx aws.Context, input *TagLogGroupInput, opts ...request.Option) (*TagLogGroupOutput, error) { + req, out := c.TagLogGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opTestMetricFilter = "TestMetricFilter" + +// TestMetricFilterRequest generates a "aws/request.Request" representing the +// client's request for the TestMetricFilter operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See TestMetricFilter for more information on using the TestMetricFilter +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the TestMetricFilterRequest method.
+// req, resp := client.TestMetricFilterRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/TestMetricFilter +func (c *CloudWatchLogs) TestMetricFilterRequest(input *TestMetricFilterInput) (req *request.Request, output *TestMetricFilterOutput) { + op := &request.Operation{ + Name: opTestMetricFilter, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &TestMetricFilterInput{} + } + + output = &TestMetricFilterOutput{} + req = c.newRequest(op, input, output) + return +} + +// TestMetricFilter API operation for Amazon CloudWatch Logs. +// +// Tests the filter pattern of a metric filter against a sample of log event +// messages. You can use this operation to validate the correctness of a metric +// filter pattern. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation TestMetricFilter for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidParameterException "InvalidParameterException" +// A parameter is specified incorrectly. +// +// * ErrCodeServiceUnavailableException "ServiceUnavailableException" +// The service cannot complete the request. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/TestMetricFilter +func (c *CloudWatchLogs) TestMetricFilter(input *TestMetricFilterInput) (*TestMetricFilterOutput, error) { + req, out := c.TestMetricFilterRequest(input) + return out, req.Send() +} + +// TestMetricFilterWithContext is the same as TestMetricFilter with the addition of +// the ability to pass a context and additional request options. +// +// See TestMetricFilter for details on how to use this API operation. 
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) TestMetricFilterWithContext(ctx aws.Context, input *TestMetricFilterInput, opts ...request.Option) (*TestMetricFilterOutput, error) { + req, out := c.TestMetricFilterRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opUntagLogGroup = "UntagLogGroup" + +// UntagLogGroupRequest generates a "aws/request.Request" representing the +// client's request for the UntagLogGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use the "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See UntagLogGroup for more information on using the UntagLogGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the UntagLogGroupRequest method.
+// req, resp := client.UntagLogGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/UntagLogGroup +func (c *CloudWatchLogs) UntagLogGroupRequest(input *UntagLogGroupInput) (req *request.Request, output *UntagLogGroupOutput) { + op := &request.Operation{ + Name: opUntagLogGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UntagLogGroupInput{} + } + + output = &UntagLogGroupOutput{} + req = c.newRequest(op, input, output) + req.Handlers.Unmarshal.Remove(jsonrpc.UnmarshalHandler) + req.Handlers.Unmarshal.PushBackNamed(protocol.UnmarshalDiscardBodyHandler) + return +} + +// UntagLogGroup API operation for Amazon CloudWatch Logs. +// +// Removes the specified tags from the specified log group. +// +// To list the tags for a log group, use ListTagsLogGroup. To add tags, use +// TagLogGroup. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon CloudWatch Logs's +// API operation UntagLogGroup for usage and error information. +// +// Returned Error Codes: +// * ErrCodeResourceNotFoundException "ResourceNotFoundException" +// The specified resource does not exist. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28/UntagLogGroup +func (c *CloudWatchLogs) UntagLogGroup(input *UntagLogGroupInput) (*UntagLogGroupOutput, error) { + req, out := c.UntagLogGroupRequest(input) + return out, req.Send() +} + +// UntagLogGroupWithContext is the same as UntagLogGroup with the addition of +// the ability to pass a context and additional request options. +// +// See UntagLogGroup for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation.
If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *CloudWatchLogs) UntagLogGroupWithContext(ctx aws.Context, input *UntagLogGroupInput, opts ...request.Option) (*UntagLogGroupOutput, error) { + req, out := c.UntagLogGroupRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +type AssociateKmsKeyInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the CMK to use when encrypting log data. + // For more information, see Amazon Resource Names - AWS Key Management Service + // (AWS KMS) (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-kms). + // + // KmsKeyId is a required field + KmsKeyId *string `locationName:"kmsKeyId" type:"string" required:"true"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s AssociateKmsKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateKmsKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssociateKmsKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "AssociateKmsKeyInput"} + if s.KmsKeyId == nil { + invalidParams.Add(request.NewErrParamRequired("KmsKeyId")) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKmsKeyId sets the KmsKeyId field's value. 
+func (s *AssociateKmsKeyInput) SetKmsKeyId(v string) *AssociateKmsKeyInput { + s.KmsKeyId = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *AssociateKmsKeyInput) SetLogGroupName(v string) *AssociateKmsKeyInput { + s.LogGroupName = &v + return s +} + +type AssociateKmsKeyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AssociateKmsKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AssociateKmsKeyOutput) GoString() string { + return s.String() +} + +type CancelExportTaskInput struct { + _ struct{} `type:"structure"` + + // The ID of the export task. + // + // TaskId is a required field + TaskId *string `locationName:"taskId" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CancelExportTaskInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelExportTaskInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CancelExportTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CancelExportTaskInput"} + if s.TaskId == nil { + invalidParams.Add(request.NewErrParamRequired("TaskId")) + } + if s.TaskId != nil && len(*s.TaskId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetTaskId sets the TaskId field's value. 
+func (s *CancelExportTaskInput) SetTaskId(v string) *CancelExportTaskInput { + s.TaskId = &v + return s +} + +type CancelExportTaskOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CancelExportTaskOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CancelExportTaskOutput) GoString() string { + return s.String() +} + +type CreateExportTaskInput struct { + _ struct{} `type:"structure"` + + // The name of S3 bucket for the exported log data. The bucket must be in the + // same AWS region. + // + // Destination is a required field + Destination *string `locationName:"destination" min:"1" type:"string" required:"true"` + + // The prefix used as the start of the key for every object exported. If you + // don't specify a value, the default is exportedlogs. + DestinationPrefix *string `locationName:"destinationPrefix" type:"string"` + + // The start time of the range for the request, expressed as the number of milliseconds + // after Jan 1, 1970 00:00:00 UTC. Events with a time stamp earlier than this + // time are not exported. + // + // From is a required field + From *int64 `locationName:"from" type:"long" required:"true"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // Export only log streams that match the provided prefix. If you don't specify + // a value, no prefix filter is applied. + LogStreamNamePrefix *string `locationName:"logStreamNamePrefix" min:"1" type:"string"` + + // The name of the export task. + TaskName *string `locationName:"taskName" min:"1" type:"string"` + + // The end time of the range for the request, expressed as the number of milliseconds + // after Jan 1, 1970 00:00:00 UTC. Events with a time stamp later than this + // time are not exported. 
+ // + // To is a required field + To *int64 `locationName:"to" type:"long" required:"true"` +} + +// String returns the string representation +func (s CreateExportTaskInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateExportTaskInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateExportTaskInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateExportTaskInput"} + if s.Destination == nil { + invalidParams.Add(request.NewErrParamRequired("Destination")) + } + if s.Destination != nil && len(*s.Destination) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Destination", 1)) + } + if s.From == nil { + invalidParams.Add(request.NewErrParamRequired("From")) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.LogStreamNamePrefix != nil && len(*s.LogStreamNamePrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogStreamNamePrefix", 1)) + } + if s.TaskName != nil && len(*s.TaskName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskName", 1)) + } + if s.To == nil { + invalidParams.Add(request.NewErrParamRequired("To")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestination sets the Destination field's value. +func (s *CreateExportTaskInput) SetDestination(v string) *CreateExportTaskInput { + s.Destination = &v + return s +} + +// SetDestinationPrefix sets the DestinationPrefix field's value. +func (s *CreateExportTaskInput) SetDestinationPrefix(v string) *CreateExportTaskInput { + s.DestinationPrefix = &v + return s +} + +// SetFrom sets the From field's value. 
+func (s *CreateExportTaskInput) SetFrom(v int64) *CreateExportTaskInput { + s.From = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *CreateExportTaskInput) SetLogGroupName(v string) *CreateExportTaskInput { + s.LogGroupName = &v + return s +} + +// SetLogStreamNamePrefix sets the LogStreamNamePrefix field's value. +func (s *CreateExportTaskInput) SetLogStreamNamePrefix(v string) *CreateExportTaskInput { + s.LogStreamNamePrefix = &v + return s +} + +// SetTaskName sets the TaskName field's value. +func (s *CreateExportTaskInput) SetTaskName(v string) *CreateExportTaskInput { + s.TaskName = &v + return s +} + +// SetTo sets the To field's value. +func (s *CreateExportTaskInput) SetTo(v int64) *CreateExportTaskInput { + s.To = &v + return s +} + +type CreateExportTaskOutput struct { + _ struct{} `type:"structure"` + + // The ID of the export task. + TaskId *string `locationName:"taskId" min:"1" type:"string"` +} + +// String returns the string representation +func (s CreateExportTaskOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateExportTaskOutput) GoString() string { + return s.String() +} + +// SetTaskId sets the TaskId field's value. +func (s *CreateExportTaskOutput) SetTaskId(v string) *CreateExportTaskOutput { + s.TaskId = &v + return s +} + +type CreateLogGroupInput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the CMK to use when encrypting log data. + // For more information, see Amazon Resource Names - AWS Key Management Service + // (AWS KMS) (http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html#arn-syntax-kms). + KmsKeyId *string `locationName:"kmsKeyId" type:"string"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The key-value pairs to use for the tags. 
+ Tags map[string]*string `locationName:"tags" min:"1" type:"map"` +} + +// String returns the string representation +func (s CreateLogGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLogGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateLogGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateLogGroupInput"} + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.Tags != nil && len(s.Tags) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Tags", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *CreateLogGroupInput) SetKmsKeyId(v string) *CreateLogGroupInput { + s.KmsKeyId = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *CreateLogGroupInput) SetLogGroupName(v string) *CreateLogGroupInput { + s.LogGroupName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateLogGroupInput) SetTags(v map[string]*string) *CreateLogGroupInput { + s.Tags = v + return s +} + +type CreateLogGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CreateLogGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLogGroupOutput) GoString() string { + return s.String() +} + +type CreateLogStreamInput struct { + _ struct{} `type:"structure"` + + // The name of the log group. 
+ // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The name of the log stream. + // + // LogStreamName is a required field + LogStreamName *string `locationName:"logStreamName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s CreateLogStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLogStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateLogStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateLogStreamInput"} + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.LogStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("LogStreamName")) + } + if s.LogStreamName != nil && len(*s.LogStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogStreamName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *CreateLogStreamInput) SetLogGroupName(v string) *CreateLogStreamInput { + s.LogGroupName = &v + return s +} + +// SetLogStreamName sets the LogStreamName field's value. 
+func (s *CreateLogStreamInput) SetLogStreamName(v string) *CreateLogStreamInput { + s.LogStreamName = &v + return s +} + +type CreateLogStreamOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s CreateLogStreamOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateLogStreamOutput) GoString() string { + return s.String() +} + +type DeleteDestinationInput struct { + _ struct{} `type:"structure"` + + // The name of the destination. + // + // DestinationName is a required field + DestinationName *string `locationName:"destinationName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteDestinationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDestinationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteDestinationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDestinationInput"} + if s.DestinationName == nil { + invalidParams.Add(request.NewErrParamRequired("DestinationName")) + } + if s.DestinationName != nil && len(*s.DestinationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DestinationName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestinationName sets the DestinationName field's value. 
+func (s *DeleteDestinationInput) SetDestinationName(v string) *DeleteDestinationInput { + s.DestinationName = &v + return s +} + +type DeleteDestinationOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteDestinationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDestinationOutput) GoString() string { + return s.String() +} + +type DeleteLogGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteLogGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLogGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteLogGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLogGroupInput"} + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogGroupName sets the LogGroupName field's value. 
+func (s *DeleteLogGroupInput) SetLogGroupName(v string) *DeleteLogGroupInput { + s.LogGroupName = &v + return s +} + +type DeleteLogGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteLogGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLogGroupOutput) GoString() string { + return s.String() +} + +type DeleteLogStreamInput struct { + _ struct{} `type:"structure"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The name of the log stream. + // + // LogStreamName is a required field + LogStreamName *string `locationName:"logStreamName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteLogStreamInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLogStreamInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteLogStreamInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteLogStreamInput"} + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.LogStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("LogStreamName")) + } + if s.LogStreamName != nil && len(*s.LogStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogStreamName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogGroupName sets the LogGroupName field's value. 
+func (s *DeleteLogStreamInput) SetLogGroupName(v string) *DeleteLogStreamInput { + s.LogGroupName = &v + return s +} + +// SetLogStreamName sets the LogStreamName field's value. +func (s *DeleteLogStreamInput) SetLogStreamName(v string) *DeleteLogStreamInput { + s.LogStreamName = &v + return s +} + +type DeleteLogStreamOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteLogStreamOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteLogStreamOutput) GoString() string { + return s.String() +} + +type DeleteMetricFilterInput struct { + _ struct{} `type:"structure"` + + // The name of the metric filter. + // + // FilterName is a required field + FilterName *string `locationName:"filterName" min:"1" type:"string" required:"true"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteMetricFilterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteMetricFilterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteMetricFilterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteMetricFilterInput"} + if s.FilterName == nil { + invalidParams.Add(request.NewErrParamRequired("FilterName")) + } + if s.FilterName != nil && len(*s.FilterName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FilterName", 1)) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilterName sets the FilterName field's value. +func (s *DeleteMetricFilterInput) SetFilterName(v string) *DeleteMetricFilterInput { + s.FilterName = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *DeleteMetricFilterInput) SetLogGroupName(v string) *DeleteMetricFilterInput { + s.LogGroupName = &v + return s +} + +type DeleteMetricFilterOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteMetricFilterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteMetricFilterOutput) GoString() string { + return s.String() +} + +type DeleteResourcePolicyInput struct { + _ struct{} `type:"structure"` + + // The name of the policy to be revoked. This parameter is required. + PolicyName *string `locationName:"policyName" type:"string"` +} + +// String returns the string representation +func (s DeleteResourcePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteResourcePolicyInput) GoString() string { + return s.String() +} + +// SetPolicyName sets the PolicyName field's value. 
+func (s *DeleteResourcePolicyInput) SetPolicyName(v string) *DeleteResourcePolicyInput { + s.PolicyName = &v + return s +} + +type DeleteResourcePolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteResourcePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteResourcePolicyOutput) GoString() string { + return s.String() +} + +type DeleteRetentionPolicyInput struct { + _ struct{} `type:"structure"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteRetentionPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRetentionPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteRetentionPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteRetentionPolicyInput"} + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogGroupName sets the LogGroupName field's value. 
+func (s *DeleteRetentionPolicyInput) SetLogGroupName(v string) *DeleteRetentionPolicyInput { + s.LogGroupName = &v + return s +} + +type DeleteRetentionPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteRetentionPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteRetentionPolicyOutput) GoString() string { + return s.String() +} + +type DeleteSubscriptionFilterInput struct { + _ struct{} `type:"structure"` + + // The name of the subscription filter. + // + // FilterName is a required field + FilterName *string `locationName:"filterName" min:"1" type:"string" required:"true"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteSubscriptionFilterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSubscriptionFilterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DeleteSubscriptionFilterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteSubscriptionFilterInput"} + if s.FilterName == nil { + invalidParams.Add(request.NewErrParamRequired("FilterName")) + } + if s.FilterName != nil && len(*s.FilterName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FilterName", 1)) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilterName sets the FilterName field's value. 
+func (s *DeleteSubscriptionFilterInput) SetFilterName(v string) *DeleteSubscriptionFilterInput { + s.FilterName = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *DeleteSubscriptionFilterInput) SetLogGroupName(v string) *DeleteSubscriptionFilterInput { + s.LogGroupName = &v + return s +} + +type DeleteSubscriptionFilterOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DeleteSubscriptionFilterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteSubscriptionFilterOutput) GoString() string { + return s.String() +} + +type DescribeDestinationsInput struct { + _ struct{} `type:"structure"` + + // The prefix to match. If you don't specify a value, no prefix filter is applied. + DestinationNamePrefix *string `min:"1" type:"string"` + + // The maximum number of items returned. If you don't specify a value, the default + // is up to 50 items. + Limit *int64 `locationName:"limit" min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeDestinationsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDestinationsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeDestinationsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDestinationsInput"} + if s.DestinationNamePrefix != nil && len(*s.DestinationNamePrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DestinationNamePrefix", 1)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestinationNamePrefix sets the DestinationNamePrefix field's value. +func (s *DescribeDestinationsInput) SetDestinationNamePrefix(v string) *DescribeDestinationsInput { + s.DestinationNamePrefix = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *DescribeDestinationsInput) SetLimit(v int64) *DescribeDestinationsInput { + s.Limit = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeDestinationsInput) SetNextToken(v string) *DescribeDestinationsInput { + s.NextToken = &v + return s +} + +type DescribeDestinationsOutput struct { + _ struct{} `type:"structure"` + + // The destinations. + Destinations []*Destination `locationName:"destinations" type:"list"` + + // The token for the next set of items to return. The token expires after 24 + // hours. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeDestinationsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDestinationsOutput) GoString() string { + return s.String() +} + +// SetDestinations sets the Destinations field's value. +func (s *DescribeDestinationsOutput) SetDestinations(v []*Destination) *DescribeDestinationsOutput { + s.Destinations = v + return s +} + +// SetNextToken sets the NextToken field's value. 
+func (s *DescribeDestinationsOutput) SetNextToken(v string) *DescribeDestinationsOutput { + s.NextToken = &v + return s +} + +type DescribeExportTasksInput struct { + _ struct{} `type:"structure"` + + // The maximum number of items returned. If you don't specify a value, the default + // is up to 50 items. + Limit *int64 `locationName:"limit" min:"1" type:"integer"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `locationName:"nextToken" min:"1" type:"string"` + + // The status code of the export task. Specifying a status code filters the + // results to zero or more export tasks. + StatusCode *string `locationName:"statusCode" type:"string" enum:"ExportTaskStatusCode"` + + // The ID of the export task. Specifying a task ID filters the results to zero + // or one export task. + TaskId *string `locationName:"taskId" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeExportTasksInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeExportTasksInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeExportTasksInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeExportTasksInput"} + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + if s.TaskId != nil && len(*s.TaskId) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TaskId", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLimit sets the Limit field's value. 
+func (s *DescribeExportTasksInput) SetLimit(v int64) *DescribeExportTasksInput { + s.Limit = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeExportTasksInput) SetNextToken(v string) *DescribeExportTasksInput { + s.NextToken = &v + return s +} + +// SetStatusCode sets the StatusCode field's value. +func (s *DescribeExportTasksInput) SetStatusCode(v string) *DescribeExportTasksInput { + s.StatusCode = &v + return s +} + +// SetTaskId sets the TaskId field's value. +func (s *DescribeExportTasksInput) SetTaskId(v string) *DescribeExportTasksInput { + s.TaskId = &v + return s +} + +type DescribeExportTasksOutput struct { + _ struct{} `type:"structure"` + + // The export tasks. + ExportTasks []*ExportTask `locationName:"exportTasks" type:"list"` + + // The token for the next set of items to return. The token expires after 24 + // hours. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeExportTasksOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeExportTasksOutput) GoString() string { + return s.String() +} + +// SetExportTasks sets the ExportTasks field's value. +func (s *DescribeExportTasksOutput) SetExportTasks(v []*ExportTask) *DescribeExportTasksOutput { + s.ExportTasks = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeExportTasksOutput) SetNextToken(v string) *DescribeExportTasksOutput { + s.NextToken = &v + return s +} + +type DescribeLogGroupsInput struct { + _ struct{} `type:"structure"` + + // The maximum number of items returned. If you don't specify a value, the default + // is up to 50 items. + Limit *int64 `locationName:"limit" min:"1" type:"integer"` + + // The prefix to match. + LogGroupNamePrefix *string `locationName:"logGroupNamePrefix" min:"1" type:"string"` + + // The token for the next set of items to return. 
(You received this token from + // a previous call.) + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeLogGroupsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLogGroupsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeLogGroupsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeLogGroupsInput"} + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.LogGroupNamePrefix != nil && len(*s.LogGroupNamePrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupNamePrefix", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLimit sets the Limit field's value. +func (s *DescribeLogGroupsInput) SetLimit(v int64) *DescribeLogGroupsInput { + s.Limit = &v + return s +} + +// SetLogGroupNamePrefix sets the LogGroupNamePrefix field's value. +func (s *DescribeLogGroupsInput) SetLogGroupNamePrefix(v string) *DescribeLogGroupsInput { + s.LogGroupNamePrefix = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeLogGroupsInput) SetNextToken(v string) *DescribeLogGroupsInput { + s.NextToken = &v + return s +} + +type DescribeLogGroupsOutput struct { + _ struct{} `type:"structure"` + + // The log groups. + LogGroups []*LogGroup `locationName:"logGroups" type:"list"` + + // The token for the next set of items to return. The token expires after 24 + // hours. 
+ NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeLogGroupsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLogGroupsOutput) GoString() string { + return s.String() +} + +// SetLogGroups sets the LogGroups field's value. +func (s *DescribeLogGroupsOutput) SetLogGroups(v []*LogGroup) *DescribeLogGroupsOutput { + s.LogGroups = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeLogGroupsOutput) SetNextToken(v string) *DescribeLogGroupsOutput { + s.NextToken = &v + return s +} + +type DescribeLogStreamsInput struct { + _ struct{} `type:"structure"` + + // If the value is true, results are returned in descending order. If the value + // is false, results are returned in ascending order. The default value is + // false. + Descending *bool `locationName:"descending" type:"boolean"` + + // The maximum number of items returned. If you don't specify a value, the default + // is up to 50 items. + Limit *int64 `locationName:"limit" min:"1" type:"integer"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The prefix to match. + // + // If orderBy is LastEventTime, you cannot specify this parameter. + LogStreamNamePrefix *string `locationName:"logStreamNamePrefix" min:"1" type:"string"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `locationName:"nextToken" min:"1" type:"string"` + + // If the value is LogStreamName, the results are ordered by log stream name. + // If the value is LastEventTime, the results are ordered by the event time. + // The default value is LogStreamName. 
+ // + // If you order the results by event time, you cannot specify the logStreamNamePrefix + // parameter. + // + // lastEventTimestamp represents the time of the most recent log event in the + // log stream in CloudWatch Logs. This number is expressed as the number of + // milliseconds after Jan 1, 1970 00:00:00 UTC. lastEventTimestamp updates on + // an eventual consistency basis. It typically updates in less than an hour + // from ingestion, but may take longer in some rare situations. + OrderBy *string `locationName:"orderBy" type:"string" enum:"OrderBy"` +} + +// String returns the string representation +func (s DescribeLogStreamsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLogStreamsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeLogStreamsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeLogStreamsInput"} + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.LogStreamNamePrefix != nil && len(*s.LogStreamNamePrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogStreamNamePrefix", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescending sets the Descending field's value. +func (s *DescribeLogStreamsInput) SetDescending(v bool) *DescribeLogStreamsInput { + s.Descending = &v + return s +} + +// SetLimit sets the Limit field's value. 
+func (s *DescribeLogStreamsInput) SetLimit(v int64) *DescribeLogStreamsInput { + s.Limit = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *DescribeLogStreamsInput) SetLogGroupName(v string) *DescribeLogStreamsInput { + s.LogGroupName = &v + return s +} + +// SetLogStreamNamePrefix sets the LogStreamNamePrefix field's value. +func (s *DescribeLogStreamsInput) SetLogStreamNamePrefix(v string) *DescribeLogStreamsInput { + s.LogStreamNamePrefix = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeLogStreamsInput) SetNextToken(v string) *DescribeLogStreamsInput { + s.NextToken = &v + return s +} + +// SetOrderBy sets the OrderBy field's value. +func (s *DescribeLogStreamsInput) SetOrderBy(v string) *DescribeLogStreamsInput { + s.OrderBy = &v + return s +} + +type DescribeLogStreamsOutput struct { + _ struct{} `type:"structure"` + + // The log streams. + LogStreams []*LogStream `locationName:"logStreams" type:"list"` + + // The token for the next set of items to return. The token expires after 24 + // hours. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeLogStreamsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeLogStreamsOutput) GoString() string { + return s.String() +} + +// SetLogStreams sets the LogStreams field's value. +func (s *DescribeLogStreamsOutput) SetLogStreams(v []*LogStream) *DescribeLogStreamsOutput { + s.LogStreams = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeLogStreamsOutput) SetNextToken(v string) *DescribeLogStreamsOutput { + s.NextToken = &v + return s +} + +type DescribeMetricFiltersInput struct { + _ struct{} `type:"structure"` + + // The prefix to match. 
+ FilterNamePrefix *string `locationName:"filterNamePrefix" min:"1" type:"string"` + + // The maximum number of items returned. If you don't specify a value, the default + // is up to 50 items. + Limit *int64 `locationName:"limit" min:"1" type:"integer"` + + // The name of the log group. + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string"` + + // The name of the CloudWatch metric to which the monitored log information + // should be published. For example, you may publish to a metric called ErrorCount. + MetricName *string `locationName:"metricName" type:"string"` + + // The namespace of the CloudWatch metric. + MetricNamespace *string `locationName:"metricNamespace" type:"string"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeMetricFiltersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMetricFiltersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeMetricFiltersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeMetricFiltersInput"} + if s.FilterNamePrefix != nil && len(*s.FilterNamePrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FilterNamePrefix", 1)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilterNamePrefix sets the FilterNamePrefix field's value. +func (s *DescribeMetricFiltersInput) SetFilterNamePrefix(v string) *DescribeMetricFiltersInput { + s.FilterNamePrefix = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *DescribeMetricFiltersInput) SetLimit(v int64) *DescribeMetricFiltersInput { + s.Limit = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *DescribeMetricFiltersInput) SetLogGroupName(v string) *DescribeMetricFiltersInput { + s.LogGroupName = &v + return s +} + +// SetMetricName sets the MetricName field's value. +func (s *DescribeMetricFiltersInput) SetMetricName(v string) *DescribeMetricFiltersInput { + s.MetricName = &v + return s +} + +// SetMetricNamespace sets the MetricNamespace field's value. +func (s *DescribeMetricFiltersInput) SetMetricNamespace(v string) *DescribeMetricFiltersInput { + s.MetricNamespace = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMetricFiltersInput) SetNextToken(v string) *DescribeMetricFiltersInput { + s.NextToken = &v + return s +} + +type DescribeMetricFiltersOutput struct { + _ struct{} `type:"structure"` + + // The metric filters. 
+ MetricFilters []*MetricFilter `locationName:"metricFilters" type:"list"` + + // The token for the next set of items to return. The token expires after 24 + // hours. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeMetricFiltersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeMetricFiltersOutput) GoString() string { + return s.String() +} + +// SetMetricFilters sets the MetricFilters field's value. +func (s *DescribeMetricFiltersOutput) SetMetricFilters(v []*MetricFilter) *DescribeMetricFiltersOutput { + s.MetricFilters = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeMetricFiltersOutput) SetNextToken(v string) *DescribeMetricFiltersOutput { + s.NextToken = &v + return s +} + +type DescribeResourcePoliciesInput struct { + _ struct{} `type:"structure"` + + // The maximum number of resource policies to be displayed with one call of + // this API. + Limit *int64 `locationName:"limit" min:"1" type:"integer"` + + // The token for the next set of items to return. The token expires after 24 + // hours. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeResourcePoliciesInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeResourcePoliciesInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DescribeResourcePoliciesInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeResourcePoliciesInput"} + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLimit sets the Limit field's value. +func (s *DescribeResourcePoliciesInput) SetLimit(v int64) *DescribeResourcePoliciesInput { + s.Limit = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeResourcePoliciesInput) SetNextToken(v string) *DescribeResourcePoliciesInput { + s.NextToken = &v + return s +} + +type DescribeResourcePoliciesOutput struct { + _ struct{} `type:"structure"` + + // The token for the next set of items to return. The token expires after 24 + // hours. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` + + // The resource policies that exist in this account. + ResourcePolicies []*ResourcePolicy `locationName:"resourcePolicies" type:"list"` +} + +// String returns the string representation +func (s DescribeResourcePoliciesOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeResourcePoliciesOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeResourcePoliciesOutput) SetNextToken(v string) *DescribeResourcePoliciesOutput { + s.NextToken = &v + return s +} + +// SetResourcePolicies sets the ResourcePolicies field's value. +func (s *DescribeResourcePoliciesOutput) SetResourcePolicies(v []*ResourcePolicy) *DescribeResourcePoliciesOutput { + s.ResourcePolicies = v + return s +} + +type DescribeSubscriptionFiltersInput struct { + _ struct{} `type:"structure"` + + // The prefix to match. 
If you don't specify a value, no prefix filter is applied. + FilterNamePrefix *string `locationName:"filterNamePrefix" min:"1" type:"string"` + + // The maximum number of items returned. If you don't specify a value, the default + // is up to 50 items. + Limit *int64 `locationName:"limit" min:"1" type:"integer"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `locationName:"nextToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s DescribeSubscriptionFiltersInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSubscriptionFiltersInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeSubscriptionFiltersInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeSubscriptionFiltersInput"} + if s.FilterNamePrefix != nil && len(*s.FilterNamePrefix) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FilterNamePrefix", 1)) + } + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilterNamePrefix sets the FilterNamePrefix field's value. 
+func (s *DescribeSubscriptionFiltersInput) SetFilterNamePrefix(v string) *DescribeSubscriptionFiltersInput { + s.FilterNamePrefix = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *DescribeSubscriptionFiltersInput) SetLimit(v int64) *DescribeSubscriptionFiltersInput { + s.Limit = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *DescribeSubscriptionFiltersInput) SetLogGroupName(v string) *DescribeSubscriptionFiltersInput { + s.LogGroupName = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeSubscriptionFiltersInput) SetNextToken(v string) *DescribeSubscriptionFiltersInput { + s.NextToken = &v + return s +} + +type DescribeSubscriptionFiltersOutput struct { + _ struct{} `type:"structure"` + + // The token for the next set of items to return. The token expires after 24 + // hours. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` + + // The subscription filters. + SubscriptionFilters []*SubscriptionFilter `locationName:"subscriptionFilters" type:"list"` +} + +// String returns the string representation +func (s DescribeSubscriptionFiltersOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeSubscriptionFiltersOutput) GoString() string { + return s.String() +} + +// SetNextToken sets the NextToken field's value. +func (s *DescribeSubscriptionFiltersOutput) SetNextToken(v string) *DescribeSubscriptionFiltersOutput { + s.NextToken = &v + return s +} + +// SetSubscriptionFilters sets the SubscriptionFilters field's value. +func (s *DescribeSubscriptionFiltersOutput) SetSubscriptionFilters(v []*SubscriptionFilter) *DescribeSubscriptionFiltersOutput { + s.SubscriptionFilters = v + return s +} + +// Represents a cross-account destination that receives subscription log events. 
+type Destination struct { + _ struct{} `type:"structure"` + + // An IAM policy document that governs which AWS accounts can create subscription + // filters against this destination. + AccessPolicy *string `locationName:"accessPolicy" min:"1" type:"string"` + + // The ARN of this destination. + Arn *string `locationName:"arn" type:"string"` + + // The creation time of the destination, expressed as the number of milliseconds + // after Jan 1, 1970 00:00:00 UTC. + CreationTime *int64 `locationName:"creationTime" type:"long"` + + // The name of the destination. + DestinationName *string `locationName:"destinationName" min:"1" type:"string"` + + // A role for impersonation, used when delivering log events to the target. + RoleArn *string `locationName:"roleArn" min:"1" type:"string"` + + // The Amazon Resource Name (ARN) of the physical target to where the log events + // are delivered (for example, a Kinesis stream). + TargetArn *string `locationName:"targetArn" min:"1" type:"string"` +} + +// String returns the string representation +func (s Destination) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s Destination) GoString() string { + return s.String() +} + +// SetAccessPolicy sets the AccessPolicy field's value. +func (s *Destination) SetAccessPolicy(v string) *Destination { + s.AccessPolicy = &v + return s +} + +// SetArn sets the Arn field's value. +func (s *Destination) SetArn(v string) *Destination { + s.Arn = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *Destination) SetCreationTime(v int64) *Destination { + s.CreationTime = &v + return s +} + +// SetDestinationName sets the DestinationName field's value. +func (s *Destination) SetDestinationName(v string) *Destination { + s.DestinationName = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. 
+func (s *Destination) SetRoleArn(v string) *Destination { + s.RoleArn = &v + return s +} + +// SetTargetArn sets the TargetArn field's value. +func (s *Destination) SetTargetArn(v string) *Destination { + s.TargetArn = &v + return s +} + +type DisassociateKmsKeyInput struct { + _ struct{} `type:"structure"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s DisassociateKmsKeyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateKmsKeyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DisassociateKmsKeyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DisassociateKmsKeyInput"} + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *DisassociateKmsKeyInput) SetLogGroupName(v string) *DisassociateKmsKeyInput { + s.LogGroupName = &v + return s +} + +type DisassociateKmsKeyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DisassociateKmsKeyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DisassociateKmsKeyOutput) GoString() string { + return s.String() +} + +// Represents an export task. +type ExportTask struct { + _ struct{} `type:"structure"` + + // The name of the Amazon S3 bucket to which the log data was exported. 
+ Destination *string `locationName:"destination" min:"1" type:"string"` + + // The prefix that was used as the start of the Amazon S3 key for every object exported. + DestinationPrefix *string `locationName:"destinationPrefix" type:"string"` + + // Execution info about the export task. + ExecutionInfo *ExportTaskExecutionInfo `locationName:"executionInfo" type:"structure"` + + // The start time, expressed as the number of milliseconds after Jan 1, 1970 + // 00:00:00 UTC. Events with a time stamp before this time are not exported. + From *int64 `locationName:"from" type:"long"` + + // The name of the log group from which log data was exported. + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string"` + + // The status of the export task. + Status *ExportTaskStatus `locationName:"status" type:"structure"` + + // The ID of the export task. + TaskId *string `locationName:"taskId" min:"1" type:"string"` + + // The name of the export task. + TaskName *string `locationName:"taskName" min:"1" type:"string"` + + // The end time, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 + // UTC. Events with a time stamp later than this time are not exported. + To *int64 `locationName:"to" type:"long"` +} + +// String returns the string representation +func (s ExportTask) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportTask) GoString() string { + return s.String() +} + +// SetDestination sets the Destination field's value. +func (s *ExportTask) SetDestination(v string) *ExportTask { + s.Destination = &v + return s +} + +// SetDestinationPrefix sets the DestinationPrefix field's value. +func (s *ExportTask) SetDestinationPrefix(v string) *ExportTask { + s.DestinationPrefix = &v + return s +} + +// SetExecutionInfo sets the ExecutionInfo field's value. 
+func (s *ExportTask) SetExecutionInfo(v *ExportTaskExecutionInfo) *ExportTask { + s.ExecutionInfo = v + return s +} + +// SetFrom sets the From field's value. +func (s *ExportTask) SetFrom(v int64) *ExportTask { + s.From = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *ExportTask) SetLogGroupName(v string) *ExportTask { + s.LogGroupName = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *ExportTask) SetStatus(v *ExportTaskStatus) *ExportTask { + s.Status = v + return s +} + +// SetTaskId sets the TaskId field's value. +func (s *ExportTask) SetTaskId(v string) *ExportTask { + s.TaskId = &v + return s +} + +// SetTaskName sets the TaskName field's value. +func (s *ExportTask) SetTaskName(v string) *ExportTask { + s.TaskName = &v + return s +} + +// SetTo sets the To field's value. +func (s *ExportTask) SetTo(v int64) *ExportTask { + s.To = &v + return s +} + +// Represents the status of an export task. +type ExportTaskExecutionInfo struct { + _ struct{} `type:"structure"` + + // The completion time of the export task, expressed as the number of milliseconds + // after Jan 1, 1970 00:00:00 UTC. + CompletionTime *int64 `locationName:"completionTime" type:"long"` + + // The creation time of the export task, expressed as the number of milliseconds + // after Jan 1, 1970 00:00:00 UTC. + CreationTime *int64 `locationName:"creationTime" type:"long"` +} + +// String returns the string representation +func (s ExportTaskExecutionInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportTaskExecutionInfo) GoString() string { + return s.String() +} + +// SetCompletionTime sets the CompletionTime field's value. +func (s *ExportTaskExecutionInfo) SetCompletionTime(v int64) *ExportTaskExecutionInfo { + s.CompletionTime = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. 
+func (s *ExportTaskExecutionInfo) SetCreationTime(v int64) *ExportTaskExecutionInfo { + s.CreationTime = &v + return s +} + +// Represents the status of an export task. +type ExportTaskStatus struct { + _ struct{} `type:"structure"` + + // The status code of the export task. + Code *string `locationName:"code" type:"string" enum:"ExportTaskStatusCode"` + + // The status message related to the status code. + Message *string `locationName:"message" type:"string"` +} + +// String returns the string representation +func (s ExportTaskStatus) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ExportTaskStatus) GoString() string { + return s.String() +} + +// SetCode sets the Code field's value. +func (s *ExportTaskStatus) SetCode(v string) *ExportTaskStatus { + s.Code = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *ExportTaskStatus) SetMessage(v string) *ExportTaskStatus { + s.Message = &v + return s +} + +type FilterLogEventsInput struct { + _ struct{} `type:"structure"` + + // The end of the time range, expressed as the number of milliseconds after + // Jan 1, 1970 00:00:00 UTC. Events with a time stamp later than this time are + // not returned. + EndTime *int64 `locationName:"endTime" type:"long"` + + // The filter pattern to use. If not provided, all the events are matched. + FilterPattern *string `locationName:"filterPattern" type:"string"` + + // If the value is true, the operation makes a best effort to provide responses + // that contain events from multiple log streams within the log group, interleaved + // in a single response. If the value is false, all the matched log events in + // the first log stream are searched first, then those in the next log stream, + // and so on. The default is false. + Interleaved *bool `locationName:"interleaved" type:"boolean"` + + // The maximum number of events to return. The default is 10,000 events. 
+ Limit *int64 `locationName:"limit" min:"1" type:"integer"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // Optional list of log stream names. + LogStreamNames []*string `locationName:"logStreamNames" min:"1" type:"list"` + + // The token for the next set of events to return. (You received this token + // from a previous call.) + NextToken *string `locationName:"nextToken" min:"1" type:"string"` + + // The start of the time range, expressed as the number of milliseconds after + // Jan 1, 1970 00:00:00 UTC. Events with a time stamp before this time are not + // returned. + StartTime *int64 `locationName:"startTime" type:"long"` +} + +// String returns the string representation +func (s FilterLogEventsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FilterLogEventsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *FilterLogEventsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "FilterLogEventsInput"} + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.LogStreamNames != nil && len(s.LogStreamNames) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogStreamNames", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndTime sets the EndTime field's value. 
+func (s *FilterLogEventsInput) SetEndTime(v int64) *FilterLogEventsInput { + s.EndTime = &v + return s +} + +// SetFilterPattern sets the FilterPattern field's value. +func (s *FilterLogEventsInput) SetFilterPattern(v string) *FilterLogEventsInput { + s.FilterPattern = &v + return s +} + +// SetInterleaved sets the Interleaved field's value. +func (s *FilterLogEventsInput) SetInterleaved(v bool) *FilterLogEventsInput { + s.Interleaved = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *FilterLogEventsInput) SetLimit(v int64) *FilterLogEventsInput { + s.Limit = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *FilterLogEventsInput) SetLogGroupName(v string) *FilterLogEventsInput { + s.LogGroupName = &v + return s +} + +// SetLogStreamNames sets the LogStreamNames field's value. +func (s *FilterLogEventsInput) SetLogStreamNames(v []*string) *FilterLogEventsInput { + s.LogStreamNames = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *FilterLogEventsInput) SetNextToken(v string) *FilterLogEventsInput { + s.NextToken = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *FilterLogEventsInput) SetStartTime(v int64) *FilterLogEventsInput { + s.StartTime = &v + return s +} + +type FilterLogEventsOutput struct { + _ struct{} `type:"structure"` + + // The matched events. + Events []*FilteredLogEvent `locationName:"events" type:"list"` + + // The token to use when requesting the next set of items. The token expires + // after 24 hours. + NextToken *string `locationName:"nextToken" min:"1" type:"string"` + + // Indicates which log streams have been searched and whether each has been + // searched completely. 
+ SearchedLogStreams []*SearchedLogStream `locationName:"searchedLogStreams" type:"list"` +} + +// String returns the string representation +func (s FilterLogEventsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FilterLogEventsOutput) GoString() string { + return s.String() +} + +// SetEvents sets the Events field's value. +func (s *FilterLogEventsOutput) SetEvents(v []*FilteredLogEvent) *FilterLogEventsOutput { + s.Events = v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *FilterLogEventsOutput) SetNextToken(v string) *FilterLogEventsOutput { + s.NextToken = &v + return s +} + +// SetSearchedLogStreams sets the SearchedLogStreams field's value. +func (s *FilterLogEventsOutput) SetSearchedLogStreams(v []*SearchedLogStream) *FilterLogEventsOutput { + s.SearchedLogStreams = v + return s +} + +// Represents a matched event. +type FilteredLogEvent struct { + _ struct{} `type:"structure"` + + // The ID of the event. + EventId *string `locationName:"eventId" type:"string"` + + // The time the event was ingested, expressed as the number of milliseconds + // after Jan 1, 1970 00:00:00 UTC. + IngestionTime *int64 `locationName:"ingestionTime" type:"long"` + + // The name of the log stream this event belongs to. + LogStreamName *string `locationName:"logStreamName" min:"1" type:"string"` + + // The data contained in the log event. + Message *string `locationName:"message" min:"1" type:"string"` + + // The time the event occurred, expressed as the number of milliseconds after + // Jan 1, 1970 00:00:00 UTC. + Timestamp *int64 `locationName:"timestamp" type:"long"` +} + +// String returns the string representation +func (s FilteredLogEvent) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s FilteredLogEvent) GoString() string { + return s.String() +} + +// SetEventId sets the EventId field's value. 
+func (s *FilteredLogEvent) SetEventId(v string) *FilteredLogEvent { + s.EventId = &v + return s +} + +// SetIngestionTime sets the IngestionTime field's value. +func (s *FilteredLogEvent) SetIngestionTime(v int64) *FilteredLogEvent { + s.IngestionTime = &v + return s +} + +// SetLogStreamName sets the LogStreamName field's value. +func (s *FilteredLogEvent) SetLogStreamName(v string) *FilteredLogEvent { + s.LogStreamName = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *FilteredLogEvent) SetMessage(v string) *FilteredLogEvent { + s.Message = &v + return s +} + +// SetTimestamp sets the Timestamp field's value. +func (s *FilteredLogEvent) SetTimestamp(v int64) *FilteredLogEvent { + s.Timestamp = &v + return s +} + +type GetLogEventsInput struct { + _ struct{} `type:"structure"` + + // The end of the time range, expressed as the number of milliseconds after + // Jan 1, 1970 00:00:00 UTC. Events with a time stamp later than this time are + // not included. + EndTime *int64 `locationName:"endTime" type:"long"` + + // The maximum number of log events returned. If you don't specify a value, + // the maximum is as many log events as can fit in a response size of 1 MB, + // up to 10,000 log events. + Limit *int64 `locationName:"limit" min:"1" type:"integer"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The name of the log stream. + // + // LogStreamName is a required field + LogStreamName *string `locationName:"logStreamName" min:"1" type:"string" required:"true"` + + // The token for the next set of items to return. (You received this token from + // a previous call.) + NextToken *string `locationName:"nextToken" min:"1" type:"string"` + + // If the value is true, the earliest log events are returned first. If the + // value is false, the latest log events are returned first. The default value + // is false. 
+ StartFromHead *bool `locationName:"startFromHead" type:"boolean"` + + // The start of the time range, expressed as the number of milliseconds after + // Jan 1, 1970 00:00:00 UTC. Events with a time stamp earlier than this time + // are not included. + StartTime *int64 `locationName:"startTime" type:"long"` +} + +// String returns the string representation +func (s GetLogEventsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLogEventsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetLogEventsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetLogEventsInput"} + if s.Limit != nil && *s.Limit < 1 { + invalidParams.Add(request.NewErrParamMinValue("Limit", 1)) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.LogStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("LogStreamName")) + } + if s.LogStreamName != nil && len(*s.LogStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogStreamName", 1)) + } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetEndTime sets the EndTime field's value. +func (s *GetLogEventsInput) SetEndTime(v int64) *GetLogEventsInput { + s.EndTime = &v + return s +} + +// SetLimit sets the Limit field's value. +func (s *GetLogEventsInput) SetLimit(v int64) *GetLogEventsInput { + s.Limit = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. 
+func (s *GetLogEventsInput) SetLogGroupName(v string) *GetLogEventsInput { + s.LogGroupName = &v + return s +} + +// SetLogStreamName sets the LogStreamName field's value. +func (s *GetLogEventsInput) SetLogStreamName(v string) *GetLogEventsInput { + s.LogStreamName = &v + return s +} + +// SetNextToken sets the NextToken field's value. +func (s *GetLogEventsInput) SetNextToken(v string) *GetLogEventsInput { + s.NextToken = &v + return s +} + +// SetStartFromHead sets the StartFromHead field's value. +func (s *GetLogEventsInput) SetStartFromHead(v bool) *GetLogEventsInput { + s.StartFromHead = &v + return s +} + +// SetStartTime sets the StartTime field's value. +func (s *GetLogEventsInput) SetStartTime(v int64) *GetLogEventsInput { + s.StartTime = &v + return s +} + +type GetLogEventsOutput struct { + _ struct{} `type:"structure"` + + // The events. + Events []*OutputLogEvent `locationName:"events" type:"list"` + + // The token for the next set of items in the backward direction. The token + // expires after 24 hours. + NextBackwardToken *string `locationName:"nextBackwardToken" min:"1" type:"string"` + + // The token for the next set of items in the forward direction. The token expires + // after 24 hours. + NextForwardToken *string `locationName:"nextForwardToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s GetLogEventsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetLogEventsOutput) GoString() string { + return s.String() +} + +// SetEvents sets the Events field's value. +func (s *GetLogEventsOutput) SetEvents(v []*OutputLogEvent) *GetLogEventsOutput { + s.Events = v + return s +} + +// SetNextBackwardToken sets the NextBackwardToken field's value. +func (s *GetLogEventsOutput) SetNextBackwardToken(v string) *GetLogEventsOutput { + s.NextBackwardToken = &v + return s +} + +// SetNextForwardToken sets the NextForwardToken field's value. 
+func (s *GetLogEventsOutput) SetNextForwardToken(v string) *GetLogEventsOutput { + s.NextForwardToken = &v + return s +} + +// Represents a log event, which is a record of activity that was recorded by +// the application or resource being monitored. +type InputLogEvent struct { + _ struct{} `type:"structure"` + + // The raw event message. + // + // Message is a required field + Message *string `locationName:"message" min:"1" type:"string" required:"true"` + + // The time the event occurred, expressed as the number of milliseconds after + // Jan 1, 1970 00:00:00 UTC. + // + // Timestamp is a required field + Timestamp *int64 `locationName:"timestamp" type:"long" required:"true"` +} + +// String returns the string representation +func (s InputLogEvent) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s InputLogEvent) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *InputLogEvent) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "InputLogEvent"} + if s.Message == nil { + invalidParams.Add(request.NewErrParamRequired("Message")) + } + if s.Message != nil && len(*s.Message) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Message", 1)) + } + if s.Timestamp == nil { + invalidParams.Add(request.NewErrParamRequired("Timestamp")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetMessage sets the Message field's value. +func (s *InputLogEvent) SetMessage(v string) *InputLogEvent { + s.Message = &v + return s +} + +// SetTimestamp sets the Timestamp field's value. +func (s *InputLogEvent) SetTimestamp(v int64) *InputLogEvent { + s.Timestamp = &v + return s +} + +type ListTagsLogGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the log group.
+ // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s ListTagsLogGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsLogGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListTagsLogGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ListTagsLogGroupInput"} + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *ListTagsLogGroupInput) SetLogGroupName(v string) *ListTagsLogGroupInput { + s.LogGroupName = &v + return s +} + +type ListTagsLogGroupOutput struct { + _ struct{} `type:"structure"` + + // The tags for the log group. + Tags map[string]*string `locationName:"tags" min:"1" type:"map"` +} + +// String returns the string representation +func (s ListTagsLogGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ListTagsLogGroupOutput) GoString() string { + return s.String() +} + +// SetTags sets the Tags field's value. +func (s *ListTagsLogGroupOutput) SetTags(v map[string]*string) *ListTagsLogGroupOutput { + s.Tags = v + return s +} + +// Represents a log group. +type LogGroup struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the log group. + Arn *string `locationName:"arn" type:"string"` + + // The creation time of the log group, expressed as the number of milliseconds + // after Jan 1, 1970 00:00:00 UTC. 
+ CreationTime *int64 `locationName:"creationTime" type:"long"` + + // The Amazon Resource Name (ARN) of the CMK to use when encrypting log data. + KmsKeyId *string `locationName:"kmsKeyId" type:"string"` + + // The name of the log group. + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string"` + + // The number of metric filters. + MetricFilterCount *int64 `locationName:"metricFilterCount" type:"integer"` + + // The number of days to retain the log events in the specified log group. Possible + // values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, + // 1827, and 3653. + RetentionInDays *int64 `locationName:"retentionInDays" type:"integer"` + + // The number of bytes stored. + StoredBytes *int64 `locationName:"storedBytes" type:"long"` +} + +// String returns the string representation +func (s LogGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LogGroup) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value. +func (s *LogGroup) SetArn(v string) *LogGroup { + s.Arn = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *LogGroup) SetCreationTime(v int64) *LogGroup { + s.CreationTime = &v + return s +} + +// SetKmsKeyId sets the KmsKeyId field's value. +func (s *LogGroup) SetKmsKeyId(v string) *LogGroup { + s.KmsKeyId = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *LogGroup) SetLogGroupName(v string) *LogGroup { + s.LogGroupName = &v + return s +} + +// SetMetricFilterCount sets the MetricFilterCount field's value. +func (s *LogGroup) SetMetricFilterCount(v int64) *LogGroup { + s.MetricFilterCount = &v + return s +} + +// SetRetentionInDays sets the RetentionInDays field's value. +func (s *LogGroup) SetRetentionInDays(v int64) *LogGroup { + s.RetentionInDays = &v + return s +} + +// SetStoredBytes sets the StoredBytes field's value. 
+func (s *LogGroup) SetStoredBytes(v int64) *LogGroup { + s.StoredBytes = &v + return s +} + +// Represents a log stream, which is a sequence of log events from a single +// emitter of logs. +type LogStream struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the log stream. + Arn *string `locationName:"arn" type:"string"` + + // The creation time of the stream, expressed as the number of milliseconds + // after Jan 1, 1970 00:00:00 UTC. + CreationTime *int64 `locationName:"creationTime" type:"long"` + + // The time of the first event, expressed as the number of milliseconds after + // Jan 1, 1970 00:00:00 UTC. + FirstEventTimestamp *int64 `locationName:"firstEventTimestamp" type:"long"` + + // The time of the most recent log event in the log stream in CloudWatch Logs. + // This number is expressed as the number of milliseconds after Jan 1, 1970 + // 00:00:00 UTC. lastEventTime updates on an eventual consistency basis. It + // typically updates in less than an hour from ingestion, but may take longer + // in some rare situations. + LastEventTimestamp *int64 `locationName:"lastEventTimestamp" type:"long"` + + // The ingestion time, expressed as the number of milliseconds after Jan 1, + // 1970 00:00:00 UTC. + LastIngestionTime *int64 `locationName:"lastIngestionTime" type:"long"` + + // The name of the log stream. + LogStreamName *string `locationName:"logStreamName" min:"1" type:"string"` + + // The number of bytes stored. + StoredBytes *int64 `locationName:"storedBytes" type:"long"` + + // The sequence token. + UploadSequenceToken *string `locationName:"uploadSequenceToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s LogStream) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s LogStream) GoString() string { + return s.String() +} + +// SetArn sets the Arn field's value.
+func (s *LogStream) SetArn(v string) *LogStream { + s.Arn = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *LogStream) SetCreationTime(v int64) *LogStream { + s.CreationTime = &v + return s +} + +// SetFirstEventTimestamp sets the FirstEventTimestamp field's value. +func (s *LogStream) SetFirstEventTimestamp(v int64) *LogStream { + s.FirstEventTimestamp = &v + return s +} + +// SetLastEventTimestamp sets the LastEventTimestamp field's value. +func (s *LogStream) SetLastEventTimestamp(v int64) *LogStream { + s.LastEventTimestamp = &v + return s +} + +// SetLastIngestionTime sets the LastIngestionTime field's value. +func (s *LogStream) SetLastIngestionTime(v int64) *LogStream { + s.LastIngestionTime = &v + return s +} + +// SetLogStreamName sets the LogStreamName field's value. +func (s *LogStream) SetLogStreamName(v string) *LogStream { + s.LogStreamName = &v + return s +} + +// SetStoredBytes sets the StoredBytes field's value. +func (s *LogStream) SetStoredBytes(v int64) *LogStream { + s.StoredBytes = &v + return s +} + +// SetUploadSequenceToken sets the UploadSequenceToken field's value. +func (s *LogStream) SetUploadSequenceToken(v string) *LogStream { + s.UploadSequenceToken = &v + return s +} + +// Metric filters express how CloudWatch Logs would extract metric observations +// from ingested log events and transform them into metric data in a CloudWatch +// metric. +type MetricFilter struct { + _ struct{} `type:"structure"` + + // The creation time of the metric filter, expressed as the number of milliseconds + // after Jan 1, 1970 00:00:00 UTC. + CreationTime *int64 `locationName:"creationTime" type:"long"` + + // The name of the metric filter. + FilterName *string `locationName:"filterName" min:"1" type:"string"` + + // A symbolic description of how CloudWatch Logs should interpret the data in + // each log event. For example, a log event may contain time stamps, IP addresses, + // strings, and so on. 
You use the filter pattern to specify what to look for + // in the log event message. + FilterPattern *string `locationName:"filterPattern" type:"string"` + + // The name of the log group. + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string"` + + // The metric transformations. + MetricTransformations []*MetricTransformation `locationName:"metricTransformations" min:"1" type:"list"` +} + +// String returns the string representation +func (s MetricFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetricFilter) GoString() string { + return s.String() +} + +// SetCreationTime sets the CreationTime field's value. +func (s *MetricFilter) SetCreationTime(v int64) *MetricFilter { + s.CreationTime = &v + return s +} + +// SetFilterName sets the FilterName field's value. +func (s *MetricFilter) SetFilterName(v string) *MetricFilter { + s.FilterName = &v + return s +} + +// SetFilterPattern sets the FilterPattern field's value. +func (s *MetricFilter) SetFilterPattern(v string) *MetricFilter { + s.FilterPattern = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *MetricFilter) SetLogGroupName(v string) *MetricFilter { + s.LogGroupName = &v + return s +} + +// SetMetricTransformations sets the MetricTransformations field's value. +func (s *MetricFilter) SetMetricTransformations(v []*MetricTransformation) *MetricFilter { + s.MetricTransformations = v + return s +} + +// Represents a matched event. +type MetricFilterMatchRecord struct { + _ struct{} `type:"structure"` + + // The raw event data. + EventMessage *string `locationName:"eventMessage" min:"1" type:"string"` + + // The event number. + EventNumber *int64 `locationName:"eventNumber" type:"long"` + + // The values extracted from the event data by the filter. 
+ ExtractedValues map[string]*string `locationName:"extractedValues" type:"map"` +} + +// String returns the string representation +func (s MetricFilterMatchRecord) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetricFilterMatchRecord) GoString() string { + return s.String() +} + +// SetEventMessage sets the EventMessage field's value. +func (s *MetricFilterMatchRecord) SetEventMessage(v string) *MetricFilterMatchRecord { + s.EventMessage = &v + return s +} + +// SetEventNumber sets the EventNumber field's value. +func (s *MetricFilterMatchRecord) SetEventNumber(v int64) *MetricFilterMatchRecord { + s.EventNumber = &v + return s +} + +// SetExtractedValues sets the ExtractedValues field's value. +func (s *MetricFilterMatchRecord) SetExtractedValues(v map[string]*string) *MetricFilterMatchRecord { + s.ExtractedValues = v + return s +} + +// Indicates how to transform ingested log events into metric data in a CloudWatch +// metric. +type MetricTransformation struct { + _ struct{} `type:"structure"` + + // (Optional) The value to emit when a filter pattern does not match a log event. + // This value can be null. + DefaultValue *float64 `locationName:"defaultValue" type:"double"` + + // The name of the CloudWatch metric. + // + // MetricName is a required field + MetricName *string `locationName:"metricName" type:"string" required:"true"` + + // The namespace of the CloudWatch metric. + // + // MetricNamespace is a required field + MetricNamespace *string `locationName:"metricNamespace" type:"string" required:"true"` + + // The value to publish to the CloudWatch metric when a filter pattern matches + // a log event.
+ // + // MetricValue is a required field + MetricValue *string `locationName:"metricValue" type:"string" required:"true"` +} + +// String returns the string representation +func (s MetricTransformation) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s MetricTransformation) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *MetricTransformation) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "MetricTransformation"} + if s.MetricName == nil { + invalidParams.Add(request.NewErrParamRequired("MetricName")) + } + if s.MetricNamespace == nil { + invalidParams.Add(request.NewErrParamRequired("MetricNamespace")) + } + if s.MetricValue == nil { + invalidParams.Add(request.NewErrParamRequired("MetricValue")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDefaultValue sets the DefaultValue field's value. +func (s *MetricTransformation) SetDefaultValue(v float64) *MetricTransformation { + s.DefaultValue = &v + return s +} + +// SetMetricName sets the MetricName field's value. +func (s *MetricTransformation) SetMetricName(v string) *MetricTransformation { + s.MetricName = &v + return s +} + +// SetMetricNamespace sets the MetricNamespace field's value. +func (s *MetricTransformation) SetMetricNamespace(v string) *MetricTransformation { + s.MetricNamespace = &v + return s +} + +// SetMetricValue sets the MetricValue field's value. +func (s *MetricTransformation) SetMetricValue(v string) *MetricTransformation { + s.MetricValue = &v + return s +} + +// Represents a log event. +type OutputLogEvent struct { + _ struct{} `type:"structure"` + + // The time the event was ingested, expressed as the number of milliseconds + // after Jan 1, 1970 00:00:00 UTC. + IngestionTime *int64 `locationName:"ingestionTime" type:"long"` + + // The data contained in the log event. 
+ Message *string `locationName:"message" min:"1" type:"string"` + + // The time the event occurred, expressed as the number of milliseconds after + // Jan 1, 1970 00:00:00 UTC. + Timestamp *int64 `locationName:"timestamp" type:"long"` +} + +// String returns the string representation +func (s OutputLogEvent) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s OutputLogEvent) GoString() string { + return s.String() +} + +// SetIngestionTime sets the IngestionTime field's value. +func (s *OutputLogEvent) SetIngestionTime(v int64) *OutputLogEvent { + s.IngestionTime = &v + return s +} + +// SetMessage sets the Message field's value. +func (s *OutputLogEvent) SetMessage(v string) *OutputLogEvent { + s.Message = &v + return s +} + +// SetTimestamp sets the Timestamp field's value. +func (s *OutputLogEvent) SetTimestamp(v int64) *OutputLogEvent { + s.Timestamp = &v + return s +} + +type PutDestinationInput struct { + _ struct{} `type:"structure"` + + // A name for the destination. + // + // DestinationName is a required field + DestinationName *string `locationName:"destinationName" min:"1" type:"string" required:"true"` + + // The ARN of an IAM role that grants CloudWatch Logs permissions to call the + // Amazon Kinesis PutRecord operation on the destination stream. + // + // RoleArn is a required field + RoleArn *string `locationName:"roleArn" min:"1" type:"string" required:"true"` + + // The ARN of an Amazon Kinesis stream to which to deliver matching log events. + // + // TargetArn is a required field + TargetArn *string `locationName:"targetArn" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutDestinationInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutDestinationInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *PutDestinationInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutDestinationInput"} + if s.DestinationName == nil { + invalidParams.Add(request.NewErrParamRequired("DestinationName")) + } + if s.DestinationName != nil && len(*s.DestinationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DestinationName", 1)) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 1)) + } + if s.TargetArn == nil { + invalidParams.Add(request.NewErrParamRequired("TargetArn")) + } + if s.TargetArn != nil && len(*s.TargetArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("TargetArn", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestinationName sets the DestinationName field's value. +func (s *PutDestinationInput) SetDestinationName(v string) *PutDestinationInput { + s.DestinationName = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *PutDestinationInput) SetRoleArn(v string) *PutDestinationInput { + s.RoleArn = &v + return s +} + +// SetTargetArn sets the TargetArn field's value. +func (s *PutDestinationInput) SetTargetArn(v string) *PutDestinationInput { + s.TargetArn = &v + return s +} + +type PutDestinationOutput struct { + _ struct{} `type:"structure"` + + // The destination. + Destination *Destination `locationName:"destination" type:"structure"` +} + +// String returns the string representation +func (s PutDestinationOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutDestinationOutput) GoString() string { + return s.String() +} + +// SetDestination sets the Destination field's value. 
+func (s *PutDestinationOutput) SetDestination(v *Destination) *PutDestinationOutput { + s.Destination = v + return s +} + +type PutDestinationPolicyInput struct { + _ struct{} `type:"structure"` + + // An IAM policy document that authorizes cross-account users to deliver their + // log events to the associated destination. + // + // AccessPolicy is a required field + AccessPolicy *string `locationName:"accessPolicy" min:"1" type:"string" required:"true"` + + // A name for an existing destination. + // + // DestinationName is a required field + DestinationName *string `locationName:"destinationName" min:"1" type:"string" required:"true"` +} + +// String returns the string representation +func (s PutDestinationPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutDestinationPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutDestinationPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutDestinationPolicyInput"} + if s.AccessPolicy == nil { + invalidParams.Add(request.NewErrParamRequired("AccessPolicy")) + } + if s.AccessPolicy != nil && len(*s.AccessPolicy) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AccessPolicy", 1)) + } + if s.DestinationName == nil { + invalidParams.Add(request.NewErrParamRequired("DestinationName")) + } + if s.DestinationName != nil && len(*s.DestinationName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DestinationName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAccessPolicy sets the AccessPolicy field's value. +func (s *PutDestinationPolicyInput) SetAccessPolicy(v string) *PutDestinationPolicyInput { + s.AccessPolicy = &v + return s +} + +// SetDestinationName sets the DestinationName field's value. 
+func (s *PutDestinationPolicyInput) SetDestinationName(v string) *PutDestinationPolicyInput { + s.DestinationName = &v + return s +} + +type PutDestinationPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutDestinationPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutDestinationPolicyOutput) GoString() string { + return s.String() +} + +type PutLogEventsInput struct { + _ struct{} `type:"structure"` + + // The log events. + // + // LogEvents is a required field + LogEvents []*InputLogEvent `locationName:"logEvents" min:"1" type:"list" required:"true"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The name of the log stream. + // + // LogStreamName is a required field + LogStreamName *string `locationName:"logStreamName" min:"1" type:"string" required:"true"` + + // The sequence token obtained from the response of the previous PutLogEvents + // call. An upload in a newly created log stream does not require a sequence + // token. You can also get the sequence token using DescribeLogStreams. If you + // call PutLogEvents twice within a narrow time period using the same value + // for sequenceToken, both calls may be successful, or one may be rejected. + SequenceToken *string `locationName:"sequenceToken" min:"1" type:"string"` +} + +// String returns the string representation +func (s PutLogEventsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutLogEventsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *PutLogEventsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutLogEventsInput"} + if s.LogEvents == nil { + invalidParams.Add(request.NewErrParamRequired("LogEvents")) + } + if s.LogEvents != nil && len(s.LogEvents) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogEvents", 1)) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.LogStreamName == nil { + invalidParams.Add(request.NewErrParamRequired("LogStreamName")) + } + if s.LogStreamName != nil && len(*s.LogStreamName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogStreamName", 1)) + } + if s.SequenceToken != nil && len(*s.SequenceToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SequenceToken", 1)) + } + if s.LogEvents != nil { + for i, v := range s.LogEvents { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "LogEvents", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogEvents sets the LogEvents field's value. +func (s *PutLogEventsInput) SetLogEvents(v []*InputLogEvent) *PutLogEventsInput { + s.LogEvents = v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *PutLogEventsInput) SetLogGroupName(v string) *PutLogEventsInput { + s.LogGroupName = &v + return s +} + +// SetLogStreamName sets the LogStreamName field's value. +func (s *PutLogEventsInput) SetLogStreamName(v string) *PutLogEventsInput { + s.LogStreamName = &v + return s +} + +// SetSequenceToken sets the SequenceToken field's value. 
+func (s *PutLogEventsInput) SetSequenceToken(v string) *PutLogEventsInput { + s.SequenceToken = &v + return s +} + +type PutLogEventsOutput struct { + _ struct{} `type:"structure"` + + // The next sequence token. + NextSequenceToken *string `locationName:"nextSequenceToken" min:"1" type:"string"` + + // The rejected events. + RejectedLogEventsInfo *RejectedLogEventsInfo `locationName:"rejectedLogEventsInfo" type:"structure"` +} + +// String returns the string representation +func (s PutLogEventsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutLogEventsOutput) GoString() string { + return s.String() +} + +// SetNextSequenceToken sets the NextSequenceToken field's value. +func (s *PutLogEventsOutput) SetNextSequenceToken(v string) *PutLogEventsOutput { + s.NextSequenceToken = &v + return s +} + +// SetRejectedLogEventsInfo sets the RejectedLogEventsInfo field's value. +func (s *PutLogEventsOutput) SetRejectedLogEventsInfo(v *RejectedLogEventsInfo) *PutLogEventsOutput { + s.RejectedLogEventsInfo = v + return s +} + +type PutMetricFilterInput struct { + _ struct{} `type:"structure"` + + // A name for the metric filter. + // + // FilterName is a required field + FilterName *string `locationName:"filterName" min:"1" type:"string" required:"true"` + + // A filter pattern for extracting metric data out of ingested log events. + // + // FilterPattern is a required field + FilterPattern *string `locationName:"filterPattern" type:"string" required:"true"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // A collection of information that defines how metric data gets emitted. 
+ // + // MetricTransformations is a required field + MetricTransformations []*MetricTransformation `locationName:"metricTransformations" min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s PutMetricFilterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutMetricFilterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutMetricFilterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutMetricFilterInput"} + if s.FilterName == nil { + invalidParams.Add(request.NewErrParamRequired("FilterName")) + } + if s.FilterName != nil && len(*s.FilterName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FilterName", 1)) + } + if s.FilterPattern == nil { + invalidParams.Add(request.NewErrParamRequired("FilterPattern")) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.MetricTransformations == nil { + invalidParams.Add(request.NewErrParamRequired("MetricTransformations")) + } + if s.MetricTransformations != nil && len(s.MetricTransformations) < 1 { + invalidParams.Add(request.NewErrParamMinLen("MetricTransformations", 1)) + } + if s.MetricTransformations != nil { + for i, v := range s.MetricTransformations { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "MetricTransformations", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilterName sets the FilterName field's value. 
+func (s *PutMetricFilterInput) SetFilterName(v string) *PutMetricFilterInput { + s.FilterName = &v + return s +} + +// SetFilterPattern sets the FilterPattern field's value. +func (s *PutMetricFilterInput) SetFilterPattern(v string) *PutMetricFilterInput { + s.FilterPattern = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *PutMetricFilterInput) SetLogGroupName(v string) *PutMetricFilterInput { + s.LogGroupName = &v + return s +} + +// SetMetricTransformations sets the MetricTransformations field's value. +func (s *PutMetricFilterInput) SetMetricTransformations(v []*MetricTransformation) *PutMetricFilterInput { + s.MetricTransformations = v + return s +} + +type PutMetricFilterOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutMetricFilterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutMetricFilterOutput) GoString() string { + return s.String() +} + +type PutResourcePolicyInput struct { + _ struct{} `type:"structure"` + + // Details of the new policy, including the identity of the principal that is + // enabled to put logs to this account. This is formatted as a JSON string. + // + // The following example creates a resource policy enabling the Route 53 service + // to put DNS query logs into the specified log group. Replace "logArn" with + // the ARN of your CloudWatch Logs resource, such as a log group or log stream. + // + // { "Version": "2012-10-17", "Statement": [ { "Sid": "Route53LogsToCloudWatchLogs", + // "Effect": "Allow", "Principal": { "Service": [ "route53.amazonaws.com" ] + // }, "Action":"logs:PutLogEvents", "Resource": logArn } ] } + PolicyDocument *string `locationName:"policyDocument" min:"1" type:"string"` + + // Name of the new policy. This parameter is required. 
+ PolicyName *string `locationName:"policyName" type:"string"` +} + +// String returns the string representation +func (s PutResourcePolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutResourcePolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutResourcePolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutResourcePolicyInput"} + if s.PolicyDocument != nil && len(*s.PolicyDocument) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PolicyDocument", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *PutResourcePolicyInput) SetPolicyDocument(v string) *PutResourcePolicyInput { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *PutResourcePolicyInput) SetPolicyName(v string) *PutResourcePolicyInput { + s.PolicyName = &v + return s +} + +type PutResourcePolicyOutput struct { + _ struct{} `type:"structure"` + + // The new policy. + ResourcePolicy *ResourcePolicy `locationName:"resourcePolicy" type:"structure"` +} + +// String returns the string representation +func (s PutResourcePolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutResourcePolicyOutput) GoString() string { + return s.String() +} + +// SetResourcePolicy sets the ResourcePolicy field's value. +func (s *PutResourcePolicyOutput) SetResourcePolicy(v *ResourcePolicy) *PutResourcePolicyOutput { + s.ResourcePolicy = v + return s +} + +type PutRetentionPolicyInput struct { + _ struct{} `type:"structure"` + + // The name of the log group. 
+ // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The number of days to retain the log events in the specified log group. Possible + // values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, + // 1827, and 3653. + // + // RetentionInDays is a required field + RetentionInDays *int64 `locationName:"retentionInDays" type:"integer" required:"true"` +} + +// String returns the string representation +func (s PutRetentionPolicyInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutRetentionPolicyInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *PutRetentionPolicyInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutRetentionPolicyInput"} + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.RetentionInDays == nil { + invalidParams.Add(request.NewErrParamRequired("RetentionInDays")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *PutRetentionPolicyInput) SetLogGroupName(v string) *PutRetentionPolicyInput { + s.LogGroupName = &v + return s +} + +// SetRetentionInDays sets the RetentionInDays field's value. 
+func (s *PutRetentionPolicyInput) SetRetentionInDays(v int64) *PutRetentionPolicyInput { + s.RetentionInDays = &v + return s +} + +type PutRetentionPolicyOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutRetentionPolicyOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutRetentionPolicyOutput) GoString() string { + return s.String() +} + +type PutSubscriptionFilterInput struct { + _ struct{} `type:"structure"` + + // The ARN of the destination to deliver matching log events to. Currently, + // the supported destinations are: + // + // * An Amazon Kinesis stream belonging to the same account as the subscription + // filter, for same-account delivery. + // + // * A logical destination (specified using an ARN) belonging to a different + // account, for cross-account delivery. + // + // * An Amazon Kinesis Firehose delivery stream belonging to the same account + // as the subscription filter, for same-account delivery. + // + // * An AWS Lambda function belonging to the same account as the subscription + // filter, for same-account delivery. + // + // DestinationArn is a required field + DestinationArn *string `locationName:"destinationArn" min:"1" type:"string" required:"true"` + + // The method used to distribute log data to the destination. By default log + // data is grouped by log stream, but the grouping can be set to random for + // a more even distribution. This property is only applicable when the destination + // is an Amazon Kinesis stream. + Distribution *string `locationName:"distribution" type:"string" enum:"Distribution"` + + // A name for the subscription filter. If you are updating an existing filter, + // you must specify the correct name in filterName. Otherwise, the call fails + // because you cannot associate a second filter with a log group. 
To find the + // name of the filter currently associated with a log group, use DescribeSubscriptionFilters. + // + // FilterName is a required field + FilterName *string `locationName:"filterName" min:"1" type:"string" required:"true"` + + // A filter pattern for subscribing to a filtered stream of log events. + // + // FilterPattern is a required field + FilterPattern *string `locationName:"filterPattern" type:"string" required:"true"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The ARN of an IAM role that grants CloudWatch Logs permissions to deliver + // ingested log events to the destination stream. You don't need to provide + // the ARN when you are working with a logical destination for cross-account + // delivery. + RoleArn *string `locationName:"roleArn" min:"1" type:"string"` +} + +// String returns the string representation +func (s PutSubscriptionFilterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutSubscriptionFilterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *PutSubscriptionFilterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "PutSubscriptionFilterInput"} + if s.DestinationArn == nil { + invalidParams.Add(request.NewErrParamRequired("DestinationArn")) + } + if s.DestinationArn != nil && len(*s.DestinationArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DestinationArn", 1)) + } + if s.FilterName == nil { + invalidParams.Add(request.NewErrParamRequired("FilterName")) + } + if s.FilterName != nil && len(*s.FilterName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("FilterName", 1)) + } + if s.FilterPattern == nil { + invalidParams.Add(request.NewErrParamRequired("FilterPattern")) + } + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.RoleArn != nil && len(*s.RoleArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDestinationArn sets the DestinationArn field's value. +func (s *PutSubscriptionFilterInput) SetDestinationArn(v string) *PutSubscriptionFilterInput { + s.DestinationArn = &v + return s +} + +// SetDistribution sets the Distribution field's value. +func (s *PutSubscriptionFilterInput) SetDistribution(v string) *PutSubscriptionFilterInput { + s.Distribution = &v + return s +} + +// SetFilterName sets the FilterName field's value. +func (s *PutSubscriptionFilterInput) SetFilterName(v string) *PutSubscriptionFilterInput { + s.FilterName = &v + return s +} + +// SetFilterPattern sets the FilterPattern field's value. +func (s *PutSubscriptionFilterInput) SetFilterPattern(v string) *PutSubscriptionFilterInput { + s.FilterPattern = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. 
+func (s *PutSubscriptionFilterInput) SetLogGroupName(v string) *PutSubscriptionFilterInput { + s.LogGroupName = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *PutSubscriptionFilterInput) SetRoleArn(v string) *PutSubscriptionFilterInput { + s.RoleArn = &v + return s +} + +type PutSubscriptionFilterOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s PutSubscriptionFilterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s PutSubscriptionFilterOutput) GoString() string { + return s.String() +} + +// Represents the rejected events. +type RejectedLogEventsInfo struct { + _ struct{} `type:"structure"` + + // The expired log events. + ExpiredLogEventEndIndex *int64 `locationName:"expiredLogEventEndIndex" type:"integer"` + + // The log events that are too new. + TooNewLogEventStartIndex *int64 `locationName:"tooNewLogEventStartIndex" type:"integer"` + + // The log events that are too old. + TooOldLogEventEndIndex *int64 `locationName:"tooOldLogEventEndIndex" type:"integer"` +} + +// String returns the string representation +func (s RejectedLogEventsInfo) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s RejectedLogEventsInfo) GoString() string { + return s.String() +} + +// SetExpiredLogEventEndIndex sets the ExpiredLogEventEndIndex field's value. +func (s *RejectedLogEventsInfo) SetExpiredLogEventEndIndex(v int64) *RejectedLogEventsInfo { + s.ExpiredLogEventEndIndex = &v + return s +} + +// SetTooNewLogEventStartIndex sets the TooNewLogEventStartIndex field's value. +func (s *RejectedLogEventsInfo) SetTooNewLogEventStartIndex(v int64) *RejectedLogEventsInfo { + s.TooNewLogEventStartIndex = &v + return s +} + +// SetTooOldLogEventEndIndex sets the TooOldLogEventEndIndex field's value. 
+func (s *RejectedLogEventsInfo) SetTooOldLogEventEndIndex(v int64) *RejectedLogEventsInfo { + s.TooOldLogEventEndIndex = &v + return s +} + +// A policy enabling one or more entities to put logs to a log group in this +// account. +type ResourcePolicy struct { + _ struct{} `type:"structure"` + + // Time stamp showing when this policy was last updated, expressed as the number + // of milliseconds after Jan 1, 1970 00:00:00 UTC. + LastUpdatedTime *int64 `locationName:"lastUpdatedTime" type:"long"` + + // The details of the policy. + PolicyDocument *string `locationName:"policyDocument" min:"1" type:"string"` + + // The name of the resource policy. + PolicyName *string `locationName:"policyName" type:"string"` +} + +// String returns the string representation +func (s ResourcePolicy) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ResourcePolicy) GoString() string { + return s.String() +} + +// SetLastUpdatedTime sets the LastUpdatedTime field's value. +func (s *ResourcePolicy) SetLastUpdatedTime(v int64) *ResourcePolicy { + s.LastUpdatedTime = &v + return s +} + +// SetPolicyDocument sets the PolicyDocument field's value. +func (s *ResourcePolicy) SetPolicyDocument(v string) *ResourcePolicy { + s.PolicyDocument = &v + return s +} + +// SetPolicyName sets the PolicyName field's value. +func (s *ResourcePolicy) SetPolicyName(v string) *ResourcePolicy { + s.PolicyName = &v + return s +} + +// Represents the search status of a log stream. +type SearchedLogStream struct { + _ struct{} `type:"structure"` + + // The name of the log stream. + LogStreamName *string `locationName:"logStreamName" min:"1" type:"string"` + + // Indicates whether all the events in this log stream were searched. 
+ SearchedCompletely *bool `locationName:"searchedCompletely" type:"boolean"` +} + +// String returns the string representation +func (s SearchedLogStream) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SearchedLogStream) GoString() string { + return s.String() +} + +// SetLogStreamName sets the LogStreamName field's value. +func (s *SearchedLogStream) SetLogStreamName(v string) *SearchedLogStream { + s.LogStreamName = &v + return s +} + +// SetSearchedCompletely sets the SearchedCompletely field's value. +func (s *SearchedLogStream) SetSearchedCompletely(v bool) *SearchedLogStream { + s.SearchedCompletely = &v + return s +} + +// Represents a subscription filter. +type SubscriptionFilter struct { + _ struct{} `type:"structure"` + + // The creation time of the subscription filter, expressed as the number of + // milliseconds after Jan 1, 1970 00:00:00 UTC. + CreationTime *int64 `locationName:"creationTime" type:"long"` + + // The Amazon Resource Name (ARN) of the destination. + DestinationArn *string `locationName:"destinationArn" min:"1" type:"string"` + + // The method used to distribute log data to the destination, which can be either + // random or grouped by log stream. + Distribution *string `locationName:"distribution" type:"string" enum:"Distribution"` + + // The name of the subscription filter. + FilterName *string `locationName:"filterName" min:"1" type:"string"` + + // A symbolic description of how CloudWatch Logs should interpret the data in + // each log event. For example, a log event may contain time stamps, IP addresses, + // strings, and so on. You use the filter pattern to specify what to look for + // in the log event message. + FilterPattern *string `locationName:"filterPattern" type:"string"` + + // The name of the log group. 
+ LogGroupName *string `locationName:"logGroupName" min:"1" type:"string"` + + RoleArn *string `locationName:"roleArn" min:"1" type:"string"` +} + +// String returns the string representation +func (s SubscriptionFilter) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s SubscriptionFilter) GoString() string { + return s.String() +} + +// SetCreationTime sets the CreationTime field's value. +func (s *SubscriptionFilter) SetCreationTime(v int64) *SubscriptionFilter { + s.CreationTime = &v + return s +} + +// SetDestinationArn sets the DestinationArn field's value. +func (s *SubscriptionFilter) SetDestinationArn(v string) *SubscriptionFilter { + s.DestinationArn = &v + return s +} + +// SetDistribution sets the Distribution field's value. +func (s *SubscriptionFilter) SetDistribution(v string) *SubscriptionFilter { + s.Distribution = &v + return s +} + +// SetFilterName sets the FilterName field's value. +func (s *SubscriptionFilter) SetFilterName(v string) *SubscriptionFilter { + s.FilterName = &v + return s +} + +// SetFilterPattern sets the FilterPattern field's value. +func (s *SubscriptionFilter) SetFilterPattern(v string) *SubscriptionFilter { + s.FilterPattern = &v + return s +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *SubscriptionFilter) SetLogGroupName(v string) *SubscriptionFilter { + s.LogGroupName = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *SubscriptionFilter) SetRoleArn(v string) *SubscriptionFilter { + s.RoleArn = &v + return s +} + +type TagLogGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The key-value pairs to use for the tags. 
+ // + // Tags is a required field + Tags map[string]*string `locationName:"tags" min:"1" type:"map" required:"true"` +} + +// String returns the string representation +func (s TagLogGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagLogGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TagLogGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TagLogGroupInput"} + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil && len(s.Tags) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Tags", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogGroupName sets the LogGroupName field's value. +func (s *TagLogGroupInput) SetLogGroupName(v string) *TagLogGroupInput { + s.LogGroupName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *TagLogGroupInput) SetTags(v map[string]*string) *TagLogGroupInput { + s.Tags = v + return s +} + +type TagLogGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s TagLogGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TagLogGroupOutput) GoString() string { + return s.String() +} + +type TestMetricFilterInput struct { + _ struct{} `type:"structure"` + + // A symbolic description of how CloudWatch Logs should interpret the data in + // each log event. For example, a log event may contain time stamps, IP addresses, + // strings, and so on. 
You use the filter pattern to specify what to look for + // in the log event message. + // + // FilterPattern is a required field + FilterPattern *string `locationName:"filterPattern" type:"string" required:"true"` + + // The log event messages to test. + // + // LogEventMessages is a required field + LogEventMessages []*string `locationName:"logEventMessages" min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s TestMetricFilterInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TestMetricFilterInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *TestMetricFilterInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "TestMetricFilterInput"} + if s.FilterPattern == nil { + invalidParams.Add(request.NewErrParamRequired("FilterPattern")) + } + if s.LogEventMessages == nil { + invalidParams.Add(request.NewErrParamRequired("LogEventMessages")) + } + if s.LogEventMessages != nil && len(s.LogEventMessages) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogEventMessages", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetFilterPattern sets the FilterPattern field's value. +func (s *TestMetricFilterInput) SetFilterPattern(v string) *TestMetricFilterInput { + s.FilterPattern = &v + return s +} + +// SetLogEventMessages sets the LogEventMessages field's value. +func (s *TestMetricFilterInput) SetLogEventMessages(v []*string) *TestMetricFilterInput { + s.LogEventMessages = v + return s +} + +type TestMetricFilterOutput struct { + _ struct{} `type:"structure"` + + // The matched events. 
+ Matches []*MetricFilterMatchRecord `locationName:"matches" type:"list"` +} + +// String returns the string representation +func (s TestMetricFilterOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s TestMetricFilterOutput) GoString() string { + return s.String() +} + +// SetMatches sets the Matches field's value. +func (s *TestMetricFilterOutput) SetMatches(v []*MetricFilterMatchRecord) *TestMetricFilterOutput { + s.Matches = v + return s +} + +type UntagLogGroupInput struct { + _ struct{} `type:"structure"` + + // The name of the log group. + // + // LogGroupName is a required field + LogGroupName *string `locationName:"logGroupName" min:"1" type:"string" required:"true"` + + // The tag keys. The corresponding tags are removed from the log group. + // + // Tags is a required field + Tags []*string `locationName:"tags" min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s UntagLogGroupInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagLogGroupInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *UntagLogGroupInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "UntagLogGroupInput"} + if s.LogGroupName == nil { + invalidParams.Add(request.NewErrParamRequired("LogGroupName")) + } + if s.LogGroupName != nil && len(*s.LogGroupName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("LogGroupName", 1)) + } + if s.Tags == nil { + invalidParams.Add(request.NewErrParamRequired("Tags")) + } + if s.Tags != nil && len(s.Tags) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Tags", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetLogGroupName sets the LogGroupName field's value. 
+func (s *UntagLogGroupInput) SetLogGroupName(v string) *UntagLogGroupInput { + s.LogGroupName = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *UntagLogGroupInput) SetTags(v []*string) *UntagLogGroupInput { + s.Tags = v + return s +} + +type UntagLogGroupOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s UntagLogGroupOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s UntagLogGroupOutput) GoString() string { + return s.String() +} + +// The method used to distribute log data to the destination, which can be either +// random or grouped by log stream. +const ( + // DistributionRandom is a Distribution enum value + DistributionRandom = "Random" + + // DistributionByLogStream is a Distribution enum value + DistributionByLogStream = "ByLogStream" +) + +const ( + // ExportTaskStatusCodeCancelled is a ExportTaskStatusCode enum value + ExportTaskStatusCodeCancelled = "CANCELLED" + + // ExportTaskStatusCodeCompleted is a ExportTaskStatusCode enum value + ExportTaskStatusCodeCompleted = "COMPLETED" + + // ExportTaskStatusCodeFailed is a ExportTaskStatusCode enum value + ExportTaskStatusCodeFailed = "FAILED" + + // ExportTaskStatusCodePending is a ExportTaskStatusCode enum value + ExportTaskStatusCodePending = "PENDING" + + // ExportTaskStatusCodePendingCancel is a ExportTaskStatusCode enum value + ExportTaskStatusCodePendingCancel = "PENDING_CANCEL" + + // ExportTaskStatusCodeRunning is a ExportTaskStatusCode enum value + ExportTaskStatusCodeRunning = "RUNNING" +) + +const ( + // OrderByLogStreamName is a OrderBy enum value + OrderByLogStreamName = "LogStreamName" + + // OrderByLastEventTime is a OrderBy enum value + OrderByLastEventTime = "LastEventTime" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/cloudwatchlogsiface/interface.go 
b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/cloudwatchlogsiface/interface.go new file mode 100644 index 000000000000..4975ea3715c2 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/cloudwatchlogsiface/interface.go @@ -0,0 +1,217 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package cloudwatchlogsiface provides an interface to enable mocking the Amazon CloudWatch Logs service client +// for testing your code. +// +// It is important to note that this interface will have breaking changes +// when the service model is updated and adds new API operations, paginators, +// and waiters. +package cloudwatchlogsiface + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/service/cloudwatchlogs" +) + +// CloudWatchLogsAPI provides an interface to enable mocking the +// cloudwatchlogs.CloudWatchLogs service client's API operations, +// paginators, and waiters. This makes unit testing your code that calls out +// to the SDK's service client easier. +// +// The best way to use this interface is so that the SDK's service client's calls +// can be stubbed out for unit testing your code with the SDK without needing +// to inject custom request handlers into the SDK's request pipeline. +// +// // myFunc uses an SDK service client to make a request to +// // Amazon CloudWatch Logs. +// func myFunc(svc cloudwatchlogsiface.CloudWatchLogsAPI) bool { +// // Make svc.AssociateKmsKey request +// } +// +// func main() { +// sess := session.New() +// svc := cloudwatchlogs.New(sess) +// +// myFunc(svc) +// } +// +// In your _test.go file: +// +// // Define a mock struct to be used in your unit tests of myFunc.
+// type mockCloudWatchLogsClient struct { +// cloudwatchlogsiface.CloudWatchLogsAPI +// } +// func (m *mockCloudWatchLogsClient) AssociateKmsKey(input *cloudwatchlogs.AssociateKmsKeyInput) (*cloudwatchlogs.AssociateKmsKeyOutput, error) { +// // mock response/functionality +// } +// +// func TestMyFunc(t *testing.T) { +// // Setup Test +// mockSvc := &mockCloudWatchLogsClient{} +// +// myFunc(mockSvc) +// +// // Verify myFunc's functionality +// } +// +// It is important to note that this interface will have breaking changes +// when the service model is updated and adds new API operations, paginators, +// and waiters. It's suggested to use the pattern above for testing, or to use +// tooling to generate mocks that satisfy the interfaces. +type CloudWatchLogsAPI interface { + AssociateKmsKey(*cloudwatchlogs.AssociateKmsKeyInput) (*cloudwatchlogs.AssociateKmsKeyOutput, error) + AssociateKmsKeyWithContext(aws.Context, *cloudwatchlogs.AssociateKmsKeyInput, ...request.Option) (*cloudwatchlogs.AssociateKmsKeyOutput, error) + AssociateKmsKeyRequest(*cloudwatchlogs.AssociateKmsKeyInput) (*request.Request, *cloudwatchlogs.AssociateKmsKeyOutput) + + CancelExportTask(*cloudwatchlogs.CancelExportTaskInput) (*cloudwatchlogs.CancelExportTaskOutput, error) + CancelExportTaskWithContext(aws.Context, *cloudwatchlogs.CancelExportTaskInput, ...request.Option) (*cloudwatchlogs.CancelExportTaskOutput, error) + CancelExportTaskRequest(*cloudwatchlogs.CancelExportTaskInput) (*request.Request, *cloudwatchlogs.CancelExportTaskOutput) + + CreateExportTask(*cloudwatchlogs.CreateExportTaskInput) (*cloudwatchlogs.CreateExportTaskOutput, error) + CreateExportTaskWithContext(aws.Context, *cloudwatchlogs.CreateExportTaskInput, ...request.Option) (*cloudwatchlogs.CreateExportTaskOutput, error) + CreateExportTaskRequest(*cloudwatchlogs.CreateExportTaskInput) (*request.Request, *cloudwatchlogs.CreateExportTaskOutput) + + CreateLogGroup(*cloudwatchlogs.CreateLogGroupInput)
(*cloudwatchlogs.CreateLogGroupOutput, error) + CreateLogGroupWithContext(aws.Context, *cloudwatchlogs.CreateLogGroupInput, ...request.Option) (*cloudwatchlogs.CreateLogGroupOutput, error) + CreateLogGroupRequest(*cloudwatchlogs.CreateLogGroupInput) (*request.Request, *cloudwatchlogs.CreateLogGroupOutput) + + CreateLogStream(*cloudwatchlogs.CreateLogStreamInput) (*cloudwatchlogs.CreateLogStreamOutput, error) + CreateLogStreamWithContext(aws.Context, *cloudwatchlogs.CreateLogStreamInput, ...request.Option) (*cloudwatchlogs.CreateLogStreamOutput, error) + CreateLogStreamRequest(*cloudwatchlogs.CreateLogStreamInput) (*request.Request, *cloudwatchlogs.CreateLogStreamOutput) + + DeleteDestination(*cloudwatchlogs.DeleteDestinationInput) (*cloudwatchlogs.DeleteDestinationOutput, error) + DeleteDestinationWithContext(aws.Context, *cloudwatchlogs.DeleteDestinationInput, ...request.Option) (*cloudwatchlogs.DeleteDestinationOutput, error) + DeleteDestinationRequest(*cloudwatchlogs.DeleteDestinationInput) (*request.Request, *cloudwatchlogs.DeleteDestinationOutput) + + DeleteLogGroup(*cloudwatchlogs.DeleteLogGroupInput) (*cloudwatchlogs.DeleteLogGroupOutput, error) + DeleteLogGroupWithContext(aws.Context, *cloudwatchlogs.DeleteLogGroupInput, ...request.Option) (*cloudwatchlogs.DeleteLogGroupOutput, error) + DeleteLogGroupRequest(*cloudwatchlogs.DeleteLogGroupInput) (*request.Request, *cloudwatchlogs.DeleteLogGroupOutput) + + DeleteLogStream(*cloudwatchlogs.DeleteLogStreamInput) (*cloudwatchlogs.DeleteLogStreamOutput, error) + DeleteLogStreamWithContext(aws.Context, *cloudwatchlogs.DeleteLogStreamInput, ...request.Option) (*cloudwatchlogs.DeleteLogStreamOutput, error) + DeleteLogStreamRequest(*cloudwatchlogs.DeleteLogStreamInput) (*request.Request, *cloudwatchlogs.DeleteLogStreamOutput) + + DeleteMetricFilter(*cloudwatchlogs.DeleteMetricFilterInput) (*cloudwatchlogs.DeleteMetricFilterOutput, error) + DeleteMetricFilterWithContext(aws.Context, 
*cloudwatchlogs.DeleteMetricFilterInput, ...request.Option) (*cloudwatchlogs.DeleteMetricFilterOutput, error) + DeleteMetricFilterRequest(*cloudwatchlogs.DeleteMetricFilterInput) (*request.Request, *cloudwatchlogs.DeleteMetricFilterOutput) + + DeleteResourcePolicy(*cloudwatchlogs.DeleteResourcePolicyInput) (*cloudwatchlogs.DeleteResourcePolicyOutput, error) + DeleteResourcePolicyWithContext(aws.Context, *cloudwatchlogs.DeleteResourcePolicyInput, ...request.Option) (*cloudwatchlogs.DeleteResourcePolicyOutput, error) + DeleteResourcePolicyRequest(*cloudwatchlogs.DeleteResourcePolicyInput) (*request.Request, *cloudwatchlogs.DeleteResourcePolicyOutput) + + DeleteRetentionPolicy(*cloudwatchlogs.DeleteRetentionPolicyInput) (*cloudwatchlogs.DeleteRetentionPolicyOutput, error) + DeleteRetentionPolicyWithContext(aws.Context, *cloudwatchlogs.DeleteRetentionPolicyInput, ...request.Option) (*cloudwatchlogs.DeleteRetentionPolicyOutput, error) + DeleteRetentionPolicyRequest(*cloudwatchlogs.DeleteRetentionPolicyInput) (*request.Request, *cloudwatchlogs.DeleteRetentionPolicyOutput) + + DeleteSubscriptionFilter(*cloudwatchlogs.DeleteSubscriptionFilterInput) (*cloudwatchlogs.DeleteSubscriptionFilterOutput, error) + DeleteSubscriptionFilterWithContext(aws.Context, *cloudwatchlogs.DeleteSubscriptionFilterInput, ...request.Option) (*cloudwatchlogs.DeleteSubscriptionFilterOutput, error) + DeleteSubscriptionFilterRequest(*cloudwatchlogs.DeleteSubscriptionFilterInput) (*request.Request, *cloudwatchlogs.DeleteSubscriptionFilterOutput) + + DescribeDestinations(*cloudwatchlogs.DescribeDestinationsInput) (*cloudwatchlogs.DescribeDestinationsOutput, error) + DescribeDestinationsWithContext(aws.Context, *cloudwatchlogs.DescribeDestinationsInput, ...request.Option) (*cloudwatchlogs.DescribeDestinationsOutput, error) + DescribeDestinationsRequest(*cloudwatchlogs.DescribeDestinationsInput) (*request.Request, *cloudwatchlogs.DescribeDestinationsOutput) + + 
DescribeDestinationsPages(*cloudwatchlogs.DescribeDestinationsInput, func(*cloudwatchlogs.DescribeDestinationsOutput, bool) bool) error + DescribeDestinationsPagesWithContext(aws.Context, *cloudwatchlogs.DescribeDestinationsInput, func(*cloudwatchlogs.DescribeDestinationsOutput, bool) bool, ...request.Option) error + + DescribeExportTasks(*cloudwatchlogs.DescribeExportTasksInput) (*cloudwatchlogs.DescribeExportTasksOutput, error) + DescribeExportTasksWithContext(aws.Context, *cloudwatchlogs.DescribeExportTasksInput, ...request.Option) (*cloudwatchlogs.DescribeExportTasksOutput, error) + DescribeExportTasksRequest(*cloudwatchlogs.DescribeExportTasksInput) (*request.Request, *cloudwatchlogs.DescribeExportTasksOutput) + + DescribeLogGroups(*cloudwatchlogs.DescribeLogGroupsInput) (*cloudwatchlogs.DescribeLogGroupsOutput, error) + DescribeLogGroupsWithContext(aws.Context, *cloudwatchlogs.DescribeLogGroupsInput, ...request.Option) (*cloudwatchlogs.DescribeLogGroupsOutput, error) + DescribeLogGroupsRequest(*cloudwatchlogs.DescribeLogGroupsInput) (*request.Request, *cloudwatchlogs.DescribeLogGroupsOutput) + + DescribeLogGroupsPages(*cloudwatchlogs.DescribeLogGroupsInput, func(*cloudwatchlogs.DescribeLogGroupsOutput, bool) bool) error + DescribeLogGroupsPagesWithContext(aws.Context, *cloudwatchlogs.DescribeLogGroupsInput, func(*cloudwatchlogs.DescribeLogGroupsOutput, bool) bool, ...request.Option) error + + DescribeLogStreams(*cloudwatchlogs.DescribeLogStreamsInput) (*cloudwatchlogs.DescribeLogStreamsOutput, error) + DescribeLogStreamsWithContext(aws.Context, *cloudwatchlogs.DescribeLogStreamsInput, ...request.Option) (*cloudwatchlogs.DescribeLogStreamsOutput, error) + DescribeLogStreamsRequest(*cloudwatchlogs.DescribeLogStreamsInput) (*request.Request, *cloudwatchlogs.DescribeLogStreamsOutput) + + DescribeLogStreamsPages(*cloudwatchlogs.DescribeLogStreamsInput, func(*cloudwatchlogs.DescribeLogStreamsOutput, bool) bool) error + 
DescribeLogStreamsPagesWithContext(aws.Context, *cloudwatchlogs.DescribeLogStreamsInput, func(*cloudwatchlogs.DescribeLogStreamsOutput, bool) bool, ...request.Option) error + + DescribeMetricFilters(*cloudwatchlogs.DescribeMetricFiltersInput) (*cloudwatchlogs.DescribeMetricFiltersOutput, error) + DescribeMetricFiltersWithContext(aws.Context, *cloudwatchlogs.DescribeMetricFiltersInput, ...request.Option) (*cloudwatchlogs.DescribeMetricFiltersOutput, error) + DescribeMetricFiltersRequest(*cloudwatchlogs.DescribeMetricFiltersInput) (*request.Request, *cloudwatchlogs.DescribeMetricFiltersOutput) + + DescribeMetricFiltersPages(*cloudwatchlogs.DescribeMetricFiltersInput, func(*cloudwatchlogs.DescribeMetricFiltersOutput, bool) bool) error + DescribeMetricFiltersPagesWithContext(aws.Context, *cloudwatchlogs.DescribeMetricFiltersInput, func(*cloudwatchlogs.DescribeMetricFiltersOutput, bool) bool, ...request.Option) error + + DescribeResourcePolicies(*cloudwatchlogs.DescribeResourcePoliciesInput) (*cloudwatchlogs.DescribeResourcePoliciesOutput, error) + DescribeResourcePoliciesWithContext(aws.Context, *cloudwatchlogs.DescribeResourcePoliciesInput, ...request.Option) (*cloudwatchlogs.DescribeResourcePoliciesOutput, error) + DescribeResourcePoliciesRequest(*cloudwatchlogs.DescribeResourcePoliciesInput) (*request.Request, *cloudwatchlogs.DescribeResourcePoliciesOutput) + + DescribeSubscriptionFilters(*cloudwatchlogs.DescribeSubscriptionFiltersInput) (*cloudwatchlogs.DescribeSubscriptionFiltersOutput, error) + DescribeSubscriptionFiltersWithContext(aws.Context, *cloudwatchlogs.DescribeSubscriptionFiltersInput, ...request.Option) (*cloudwatchlogs.DescribeSubscriptionFiltersOutput, error) + DescribeSubscriptionFiltersRequest(*cloudwatchlogs.DescribeSubscriptionFiltersInput) (*request.Request, *cloudwatchlogs.DescribeSubscriptionFiltersOutput) + + DescribeSubscriptionFiltersPages(*cloudwatchlogs.DescribeSubscriptionFiltersInput, 
func(*cloudwatchlogs.DescribeSubscriptionFiltersOutput, bool) bool) error + DescribeSubscriptionFiltersPagesWithContext(aws.Context, *cloudwatchlogs.DescribeSubscriptionFiltersInput, func(*cloudwatchlogs.DescribeSubscriptionFiltersOutput, bool) bool, ...request.Option) error + + DisassociateKmsKey(*cloudwatchlogs.DisassociateKmsKeyInput) (*cloudwatchlogs.DisassociateKmsKeyOutput, error) + DisassociateKmsKeyWithContext(aws.Context, *cloudwatchlogs.DisassociateKmsKeyInput, ...request.Option) (*cloudwatchlogs.DisassociateKmsKeyOutput, error) + DisassociateKmsKeyRequest(*cloudwatchlogs.DisassociateKmsKeyInput) (*request.Request, *cloudwatchlogs.DisassociateKmsKeyOutput) + + FilterLogEvents(*cloudwatchlogs.FilterLogEventsInput) (*cloudwatchlogs.FilterLogEventsOutput, error) + FilterLogEventsWithContext(aws.Context, *cloudwatchlogs.FilterLogEventsInput, ...request.Option) (*cloudwatchlogs.FilterLogEventsOutput, error) + FilterLogEventsRequest(*cloudwatchlogs.FilterLogEventsInput) (*request.Request, *cloudwatchlogs.FilterLogEventsOutput) + + FilterLogEventsPages(*cloudwatchlogs.FilterLogEventsInput, func(*cloudwatchlogs.FilterLogEventsOutput, bool) bool) error + FilterLogEventsPagesWithContext(aws.Context, *cloudwatchlogs.FilterLogEventsInput, func(*cloudwatchlogs.FilterLogEventsOutput, bool) bool, ...request.Option) error + + GetLogEvents(*cloudwatchlogs.GetLogEventsInput) (*cloudwatchlogs.GetLogEventsOutput, error) + GetLogEventsWithContext(aws.Context, *cloudwatchlogs.GetLogEventsInput, ...request.Option) (*cloudwatchlogs.GetLogEventsOutput, error) + GetLogEventsRequest(*cloudwatchlogs.GetLogEventsInput) (*request.Request, *cloudwatchlogs.GetLogEventsOutput) + + GetLogEventsPages(*cloudwatchlogs.GetLogEventsInput, func(*cloudwatchlogs.GetLogEventsOutput, bool) bool) error + GetLogEventsPagesWithContext(aws.Context, *cloudwatchlogs.GetLogEventsInput, func(*cloudwatchlogs.GetLogEventsOutput, bool) bool, ...request.Option) error + + 
ListTagsLogGroup(*cloudwatchlogs.ListTagsLogGroupInput) (*cloudwatchlogs.ListTagsLogGroupOutput, error) + ListTagsLogGroupWithContext(aws.Context, *cloudwatchlogs.ListTagsLogGroupInput, ...request.Option) (*cloudwatchlogs.ListTagsLogGroupOutput, error) + ListTagsLogGroupRequest(*cloudwatchlogs.ListTagsLogGroupInput) (*request.Request, *cloudwatchlogs.ListTagsLogGroupOutput) + + PutDestination(*cloudwatchlogs.PutDestinationInput) (*cloudwatchlogs.PutDestinationOutput, error) + PutDestinationWithContext(aws.Context, *cloudwatchlogs.PutDestinationInput, ...request.Option) (*cloudwatchlogs.PutDestinationOutput, error) + PutDestinationRequest(*cloudwatchlogs.PutDestinationInput) (*request.Request, *cloudwatchlogs.PutDestinationOutput) + + PutDestinationPolicy(*cloudwatchlogs.PutDestinationPolicyInput) (*cloudwatchlogs.PutDestinationPolicyOutput, error) + PutDestinationPolicyWithContext(aws.Context, *cloudwatchlogs.PutDestinationPolicyInput, ...request.Option) (*cloudwatchlogs.PutDestinationPolicyOutput, error) + PutDestinationPolicyRequest(*cloudwatchlogs.PutDestinationPolicyInput) (*request.Request, *cloudwatchlogs.PutDestinationPolicyOutput) + + PutLogEvents(*cloudwatchlogs.PutLogEventsInput) (*cloudwatchlogs.PutLogEventsOutput, error) + PutLogEventsWithContext(aws.Context, *cloudwatchlogs.PutLogEventsInput, ...request.Option) (*cloudwatchlogs.PutLogEventsOutput, error) + PutLogEventsRequest(*cloudwatchlogs.PutLogEventsInput) (*request.Request, *cloudwatchlogs.PutLogEventsOutput) + + PutMetricFilter(*cloudwatchlogs.PutMetricFilterInput) (*cloudwatchlogs.PutMetricFilterOutput, error) + PutMetricFilterWithContext(aws.Context, *cloudwatchlogs.PutMetricFilterInput, ...request.Option) (*cloudwatchlogs.PutMetricFilterOutput, error) + PutMetricFilterRequest(*cloudwatchlogs.PutMetricFilterInput) (*request.Request, *cloudwatchlogs.PutMetricFilterOutput) + + PutResourcePolicy(*cloudwatchlogs.PutResourcePolicyInput) (*cloudwatchlogs.PutResourcePolicyOutput, error) + 
PutResourcePolicyWithContext(aws.Context, *cloudwatchlogs.PutResourcePolicyInput, ...request.Option) (*cloudwatchlogs.PutResourcePolicyOutput, error) + PutResourcePolicyRequest(*cloudwatchlogs.PutResourcePolicyInput) (*request.Request, *cloudwatchlogs.PutResourcePolicyOutput) + + PutRetentionPolicy(*cloudwatchlogs.PutRetentionPolicyInput) (*cloudwatchlogs.PutRetentionPolicyOutput, error) + PutRetentionPolicyWithContext(aws.Context, *cloudwatchlogs.PutRetentionPolicyInput, ...request.Option) (*cloudwatchlogs.PutRetentionPolicyOutput, error) + PutRetentionPolicyRequest(*cloudwatchlogs.PutRetentionPolicyInput) (*request.Request, *cloudwatchlogs.PutRetentionPolicyOutput) + + PutSubscriptionFilter(*cloudwatchlogs.PutSubscriptionFilterInput) (*cloudwatchlogs.PutSubscriptionFilterOutput, error) + PutSubscriptionFilterWithContext(aws.Context, *cloudwatchlogs.PutSubscriptionFilterInput, ...request.Option) (*cloudwatchlogs.PutSubscriptionFilterOutput, error) + PutSubscriptionFilterRequest(*cloudwatchlogs.PutSubscriptionFilterInput) (*request.Request, *cloudwatchlogs.PutSubscriptionFilterOutput) + + TagLogGroup(*cloudwatchlogs.TagLogGroupInput) (*cloudwatchlogs.TagLogGroupOutput, error) + TagLogGroupWithContext(aws.Context, *cloudwatchlogs.TagLogGroupInput, ...request.Option) (*cloudwatchlogs.TagLogGroupOutput, error) + TagLogGroupRequest(*cloudwatchlogs.TagLogGroupInput) (*request.Request, *cloudwatchlogs.TagLogGroupOutput) + + TestMetricFilter(*cloudwatchlogs.TestMetricFilterInput) (*cloudwatchlogs.TestMetricFilterOutput, error) + TestMetricFilterWithContext(aws.Context, *cloudwatchlogs.TestMetricFilterInput, ...request.Option) (*cloudwatchlogs.TestMetricFilterOutput, error) + TestMetricFilterRequest(*cloudwatchlogs.TestMetricFilterInput) (*request.Request, *cloudwatchlogs.TestMetricFilterOutput) + + UntagLogGroup(*cloudwatchlogs.UntagLogGroupInput) (*cloudwatchlogs.UntagLogGroupOutput, error) + UntagLogGroupWithContext(aws.Context, *cloudwatchlogs.UntagLogGroupInput, 
...request.Option) (*cloudwatchlogs.UntagLogGroupOutput, error) + UntagLogGroupRequest(*cloudwatchlogs.UntagLogGroupInput) (*request.Request, *cloudwatchlogs.UntagLogGroupOutput) +} + +var _ CloudWatchLogsAPI = (*cloudwatchlogs.CloudWatchLogs)(nil) diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/doc.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/doc.go new file mode 100644 index 000000000000..a20147e7b506 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/doc.go @@ -0,0 +1,57 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +// Package cloudwatchlogs provides the client and types for making API +// requests to Amazon CloudWatch Logs. +// +// You can use Amazon CloudWatch Logs to monitor, store, and access your log +// files from Amazon EC2 instances, AWS CloudTrail, or other sources. You can +// then retrieve the associated log data from CloudWatch Logs using the CloudWatch +// console, CloudWatch Logs commands in the AWS CLI, CloudWatch Logs API, or +// CloudWatch Logs SDK. +// +// You can use CloudWatch Logs to: +// +// * Monitor logs from EC2 instances in real-time: You can use CloudWatch +// Logs to monitor applications and systems using log data. For example, +// CloudWatch Logs can track the number of errors that occur in your application +// logs and send you a notification whenever the rate of errors exceeds a +// threshold that you specify. CloudWatch Logs uses your log data for monitoring; +// so, no code changes are required. For example, you can monitor application +// logs for specific literal terms (such as "NullReferenceException") or +// count the number of occurrences of a literal term at a particular position +// in log data (such as "404" status codes in an Apache access log). When +// the term you are searching for is found, CloudWatch Logs reports the data +// to a CloudWatch metric that you specify. 
+// +// * Monitor AWS CloudTrail logged events: You can create alarms in CloudWatch +// and receive notifications of particular API activity as captured by CloudTrail +// and use the notification to perform troubleshooting. +// +// * Archive log data: You can use CloudWatch Logs to store your log data +// in highly durable storage. You can change the log retention setting so +// that any log events older than this setting are automatically deleted. +// The CloudWatch Logs agent makes it easy to quickly send both rotated and +// non-rotated log data off of a host and into the log service. You can then +// access the raw log data when you need it. +// +// See https://docs.aws.amazon.com/goto/WebAPI/logs-2014-03-28 for more information on this service. +// +// See cloudwatchlogs package documentation for more information. +// https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatchlogs/ +// +// Using the Client +// +// To contact Amazon CloudWatch Logs with the SDK use the New function to create +// a new service client. With that client you can make API requests to the service. +// These clients are safe to use concurrently. +// +// See the SDK's documentation for more information on how to use the SDK. +// https://docs.aws.amazon.com/sdk-for-go/api/ +// +// See aws.Config documentation for more information on configuring SDK clients. +// https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config +// +// See the Amazon CloudWatch Logs client CloudWatchLogs for more +// information on creating client for this service. 
+// https://docs.aws.amazon.com/sdk-for-go/api/service/cloudwatchlogs/#New +package cloudwatchlogs diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/errors.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/errors.go new file mode 100644 index 000000000000..772141f53a70 --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/errors.go @@ -0,0 +1,60 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package cloudwatchlogs + +const ( + + // ErrCodeDataAlreadyAcceptedException for service response error code + // "DataAlreadyAcceptedException". + // + // The event was already logged. + ErrCodeDataAlreadyAcceptedException = "DataAlreadyAcceptedException" + + // ErrCodeInvalidOperationException for service response error code + // "InvalidOperationException". + // + // The operation is not valid on the specified resource. + ErrCodeInvalidOperationException = "InvalidOperationException" + + // ErrCodeInvalidParameterException for service response error code + // "InvalidParameterException". + // + // A parameter is specified incorrectly. + ErrCodeInvalidParameterException = "InvalidParameterException" + + // ErrCodeInvalidSequenceTokenException for service response error code + // "InvalidSequenceTokenException". + // + // The sequence token is not valid. + ErrCodeInvalidSequenceTokenException = "InvalidSequenceTokenException" + + // ErrCodeLimitExceededException for service response error code + // "LimitExceededException". + // + // You have reached the maximum number of resources that can be created. + ErrCodeLimitExceededException = "LimitExceededException" + + // ErrCodeOperationAbortedException for service response error code + // "OperationAbortedException". + // + // Multiple requests to update the same resource were in conflict. 
+ ErrCodeOperationAbortedException = "OperationAbortedException" + + // ErrCodeResourceAlreadyExistsException for service response error code + // "ResourceAlreadyExistsException". + // + // The specified resource already exists. + ErrCodeResourceAlreadyExistsException = "ResourceAlreadyExistsException" + + // ErrCodeResourceNotFoundException for service response error code + // "ResourceNotFoundException". + // + // The specified resource does not exist. + ErrCodeResourceNotFoundException = "ResourceNotFoundException" + + // ErrCodeServiceUnavailableException for service response error code + // "ServiceUnavailableException". + // + // The service cannot complete the request. + ErrCodeServiceUnavailableException = "ServiceUnavailableException" +) diff --git a/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/service.go b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/service.go new file mode 100644 index 000000000000..8e6094d58a5f --- /dev/null +++ b/vendor/github.com/aws/aws-sdk-go/service/cloudwatchlogs/service.go @@ -0,0 +1,95 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package cloudwatchlogs + +import ( + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/aws/client" + "github.com/aws/aws-sdk-go/aws/client/metadata" + "github.com/aws/aws-sdk-go/aws/request" + "github.com/aws/aws-sdk-go/aws/signer/v4" + "github.com/aws/aws-sdk-go/private/protocol/jsonrpc" +) + +// CloudWatchLogs provides the API operation methods for making requests to +// Amazon CloudWatch Logs. See this package's package overview docs +// for details on the service. +// +// CloudWatchLogs methods are safe to use concurrently. It is not safe to +// modify any of the struct's properties, though.
+type CloudWatchLogs struct { + *client.Client +} + +// Used for custom client initialization logic +var initClient func(*client.Client) + +// Used for custom request initialization logic +var initRequest func(*request.Request) + +// Service information constants +const ( + ServiceName = "logs" // Service endpoint prefix that API calls are made to. + EndpointsID = ServiceName // Service ID for Regions and Endpoints metadata. +) + +// New creates a new instance of the CloudWatchLogs client with a session. +// If additional configuration is needed for the client instance, use the optional +// aws.Config parameter to add your extra config. +// +// Example: +// // Create a CloudWatchLogs client from just a session. +// svc := cloudwatchlogs.New(mySession) +// +// // Create a CloudWatchLogs client with additional configuration +// svc := cloudwatchlogs.New(mySession, aws.NewConfig().WithRegion("us-west-2")) +func New(p client.ConfigProvider, cfgs ...*aws.Config) *CloudWatchLogs { + c := p.ClientConfig(EndpointsID, cfgs...) + return newClient(*c.Config, c.Handlers, c.Endpoint, c.SigningRegion, c.SigningName) +} + +// newClient creates, initializes, and returns a new service client instance.
+func newClient(cfg aws.Config, handlers request.Handlers, endpoint, signingRegion, signingName string) *CloudWatchLogs { + svc := &CloudWatchLogs{ + Client: client.New( + cfg, + metadata.ClientInfo{ + ServiceName: ServiceName, + SigningName: signingName, + SigningRegion: signingRegion, + Endpoint: endpoint, + APIVersion: "2014-03-28", + JSONVersion: "1.1", + TargetPrefix: "Logs_20140328", + }, + handlers, + ), + } + + // Handlers + svc.Handlers.Sign.PushBackNamed(v4.SignRequestHandler) + svc.Handlers.Build.PushBackNamed(jsonrpc.BuildHandler) + svc.Handlers.Unmarshal.PushBackNamed(jsonrpc.UnmarshalHandler) + svc.Handlers.UnmarshalMeta.PushBackNamed(jsonrpc.UnmarshalMetaHandler) + svc.Handlers.UnmarshalError.PushBackNamed(jsonrpc.UnmarshalErrorHandler) + + // Run custom client initialization if present + if initClient != nil { + initClient(svc.Client) + } + + return svc +} + +// newRequest creates a new request for a CloudWatchLogs operation and runs any +// custom request initialization. +func (c *CloudWatchLogs) newRequest(op *request.Operation, params, data interface{}) *request.Request { + req := c.NewRequest(op, params, data) + + // Run custom request initialization if present + if initRequest != nil { + initRequest(req) + } + + return req +} diff --git a/vendor/github.com/google/go-cmp/LICENSE b/vendor/github.com/google/go-cmp/LICENSE new file mode 100644 index 000000000000..32017f8fa1d8 --- /dev/null +++ b/vendor/github.com/google/go-cmp/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2017 The Go Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. 
+ * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/vendor/github.com/google/go-cmp/cmp/compare.go b/vendor/github.com/google/go-cmp/cmp/compare.go new file mode 100644 index 000000000000..7e215f220296 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/compare.go @@ -0,0 +1,553 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// Package cmp determines equality of values. +// +// This package is intended to be a more powerful and safer alternative to +// reflect.DeepEqual for comparing whether two values are semantically equal. +// +// The primary features of cmp are: +// +// • When the default behavior of equality does not suit the needs of the test, +// custom equality functions can override the equality operation. 
+// For example, an equality function may report floats as equal so long as they +// are within some tolerance of each other. +// +// • Types that have an Equal method may use that method to determine equality. +// This allows package authors to determine the equality operation for the types +// that they define. +// +// • If no custom equality functions are used and no Equal method is defined, +// equality is determined by recursively comparing the primitive kinds on both +// values, much like reflect.DeepEqual. Unlike reflect.DeepEqual, unexported +// fields are not compared by default; they result in panics unless suppressed +// by using an Ignore option (see cmpopts.IgnoreUnexported) or explicitly compared +// using the AllowUnexported option. +package cmp + +import ( + "fmt" + "reflect" + + "github.com/google/go-cmp/cmp/internal/diff" + "github.com/google/go-cmp/cmp/internal/function" + "github.com/google/go-cmp/cmp/internal/value" +) + +// BUG(dsnet): Maps with keys containing NaN values cannot be properly compared due to +// the reflection package's inability to retrieve such entries. Equal will panic +// anytime it comes across a NaN key, but this behavior may change. +// +// See https://golang.org/issue/11104 for more details. + +var nothing = reflect.Value{} + +// Equal reports whether x and y are equal by recursively applying the +// following rules in the given order to x and y and all of their sub-values: +// +// • If two values are not of the same type, then they are never equal +// and the overall result is false. +// +// • Let S be the set of all Ignore, Transformer, and Comparer options that +// remain after applying all path filters, value filters, and type filters. +// If at least one Ignore exists in S, then the comparison is ignored. +// If the number of Transformer and Comparer options in S is greater than one, +// then Equal panics because it is ambiguous which option to use. 
+// If S contains a single Transformer, then use that to transform the current +// values and recursively call Equal on the output values. +// If S contains a single Comparer, then use that to compare the current values. +// Otherwise, evaluation proceeds to the next rule. +// +// • If the values have an Equal method of the form "(T) Equal(T) bool" or +// "(T) Equal(I) bool" where T is assignable to I, then use the result of +// x.Equal(y) even if x or y is nil. +// Otherwise, no such method exists and evaluation proceeds to the next rule. +// +// • Lastly, try to compare x and y based on their basic kinds. +// Simple kinds like booleans, integers, floats, complex numbers, strings, and +// channels are compared using the equivalent of the == operator in Go. +// Functions are only equal if they are both nil, otherwise they are unequal. +// Pointers are equal if the underlying values they point to are also equal. +// Interfaces are equal if their underlying concrete values are also equal. +// +// Structs are equal if all of their fields are equal. If a struct contains +// unexported fields, Equal panics unless the AllowUnexported option is used or +// an Ignore option (e.g., cmpopts.IgnoreUnexported) ignores that field. +// +// Arrays, slices, and maps are equal if they are both nil or both non-nil +// with the same length and the elements at each index or key are equal. +// Note that a non-nil empty slice and a nil slice are not equal. +// To equate empty slices and maps, consider using cmpopts.EquateEmpty. +// Map keys are equal according to the == operator. +// To use custom comparisons for map keys, consider using cmpopts.SortMaps. +func Equal(x, y interface{}, opts ...Option) bool { + s := newState(opts) + s.compareAny(reflect.ValueOf(x), reflect.ValueOf(y)) + return s.result.Equal() +} + +// Diff returns a human-readable report of the differences between two values. 
+// It returns an empty string if and only if Equal returns true for the same +// input values and options. The output string will use the "-" symbol to +// indicate elements removed from x, and the "+" symbol to indicate elements +// added to y. +// +// Do not depend on this output being stable. +func Diff(x, y interface{}, opts ...Option) string { + r := new(defaultReporter) + opts = Options{Options(opts), r} + eq := Equal(x, y, opts...) + d := r.String() + if (d == "") != eq { + panic("inconsistent difference and equality results") + } + return d +} + +type state struct { + // These fields represent the "comparison state". + // Calling statelessCompare must not result in observable changes to these. + result diff.Result // The current result of comparison + curPath Path // The current path in the value tree + reporter reporter // Optional reporter used for difference formatting + + // dynChecker triggers pseudo-random checks for option correctness. + // It is safe for statelessCompare to mutate this value. + dynChecker dynChecker + + // These fields, once set by processOption, will not change. 
+ exporters map[reflect.Type]bool // Set of structs with unexported field visibility + opts Options // List of all fundamental and filter options +} + +func newState(opts []Option) *state { + s := new(state) + for _, opt := range opts { + s.processOption(opt) + } + return s +} + +func (s *state) processOption(opt Option) { + switch opt := opt.(type) { + case nil: + case Options: + for _, o := range opt { + s.processOption(o) + } + case coreOption: + type filtered interface { + isFiltered() bool + } + if fopt, ok := opt.(filtered); ok && !fopt.isFiltered() { + panic(fmt.Sprintf("cannot use an unfiltered option: %v", opt)) + } + s.opts = append(s.opts, opt) + case visibleStructs: + if s.exporters == nil { + s.exporters = make(map[reflect.Type]bool) + } + for t := range opt { + s.exporters[t] = true + } + case reporter: + if s.reporter != nil { + panic("difference reporter already registered") + } + s.reporter = opt + default: + panic(fmt.Sprintf("unknown option %T", opt)) + } +} + +// statelessCompare compares two values and returns the result. +// This function is stateless in that it does not alter the current result, +// or output to any registered reporters. +func (s *state) statelessCompare(vx, vy reflect.Value) diff.Result { + // We do not save and restore the curPath because all of the compareX + // methods should properly push and pop from the path. + // It is an implementation bug if the contents of curPath differs from + // when calling this function to when returning from it. + + oldResult, oldReporter := s.result, s.reporter + s.result = diff.Result{} // Reset result + s.reporter = nil // Remove reporter to avoid spurious printouts + s.compareAny(vx, vy) + res := s.result + s.result, s.reporter = oldResult, oldReporter + return res +} + +func (s *state) compareAny(vx, vy reflect.Value) { + // TODO: Support cyclic data structures. + + // Rule 0: Differing types are never equal. 
+ if !vx.IsValid() || !vy.IsValid() { + s.report(vx.IsValid() == vy.IsValid(), vx, vy) + return + } + if vx.Type() != vy.Type() { + s.report(false, vx, vy) // Possible for path to be empty + return + } + t := vx.Type() + if len(s.curPath) == 0 { + s.curPath.push(&pathStep{typ: t}) + defer s.curPath.pop() + } + vx, vy = s.tryExporting(vx, vy) + + // Rule 1: Check whether an option applies on this node in the value tree. + if s.tryOptions(vx, vy, t) { + return + } + + // Rule 2: Check whether the type has a valid Equal method. + if s.tryMethod(vx, vy, t) { + return + } + + // Rule 3: Recursively descend into each value's underlying kind. + switch t.Kind() { + case reflect.Bool: + s.report(vx.Bool() == vy.Bool(), vx, vy) + return + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + s.report(vx.Int() == vy.Int(), vx, vy) + return + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + s.report(vx.Uint() == vy.Uint(), vx, vy) + return + case reflect.Float32, reflect.Float64: + s.report(vx.Float() == vy.Float(), vx, vy) + return + case reflect.Complex64, reflect.Complex128: + s.report(vx.Complex() == vy.Complex(), vx, vy) + return + case reflect.String: + s.report(vx.String() == vy.String(), vx, vy) + return + case reflect.Chan, reflect.UnsafePointer: + s.report(vx.Pointer() == vy.Pointer(), vx, vy) + return + case reflect.Func: + s.report(vx.IsNil() && vy.IsNil(), vx, vy) + return + case reflect.Ptr: + if vx.IsNil() || vy.IsNil() { + s.report(vx.IsNil() && vy.IsNil(), vx, vy) + return + } + s.curPath.push(&indirect{pathStep{t.Elem()}}) + defer s.curPath.pop() + s.compareAny(vx.Elem(), vy.Elem()) + return + case reflect.Interface: + if vx.IsNil() || vy.IsNil() { + s.report(vx.IsNil() && vy.IsNil(), vx, vy) + return + } + if vx.Elem().Type() != vy.Elem().Type() { + s.report(false, vx.Elem(), vy.Elem()) + return + } + s.curPath.push(&typeAssertion{pathStep{vx.Elem().Type()}}) + defer s.curPath.pop() 
+ s.compareAny(vx.Elem(), vy.Elem()) + return + case reflect.Slice: + if vx.IsNil() || vy.IsNil() { + s.report(vx.IsNil() && vy.IsNil(), vx, vy) + return + } + fallthrough + case reflect.Array: + s.compareArray(vx, vy, t) + return + case reflect.Map: + s.compareMap(vx, vy, t) + return + case reflect.Struct: + s.compareStruct(vx, vy, t) + return + default: + panic(fmt.Sprintf("%v kind not handled", t.Kind())) + } +} + +func (s *state) tryExporting(vx, vy reflect.Value) (reflect.Value, reflect.Value) { + if sf, ok := s.curPath[len(s.curPath)-1].(*structField); ok && sf.unexported { + if sf.force { + // Use unsafe pointer arithmetic to get read-write access to an + // unexported field in the struct. + vx = unsafeRetrieveField(sf.pvx, sf.field) + vy = unsafeRetrieveField(sf.pvy, sf.field) + } else { + // We are not allowed to export the value, so invalidate them + // so that tryOptions can panic later if not explicitly ignored. + vx = nothing + vy = nothing + } + } + return vx, vy +} + +func (s *state) tryOptions(vx, vy reflect.Value, t reflect.Type) bool { + // If there were no FilterValues, we will not detect invalid inputs, + // so manually check for them and append invalid if necessary. + // We still evaluate the options since an ignore can override invalid. + opts := s.opts + if !vx.IsValid() || !vy.IsValid() { + opts = Options{opts, invalid{}} + } + + // Evaluate all filters and apply the remaining options. + if opt := opts.filter(s, vx, vy, t); opt != nil { + opt.apply(s, vx, vy) + return true + } + return false +} + +func (s *state) tryMethod(vx, vy reflect.Value, t reflect.Type) bool { + // Check if this type even has an Equal method. 
+ m, ok := t.MethodByName("Equal") + if !ok || !function.IsType(m.Type, function.EqualAssignable) { + return false + } + + eq := s.callTTBFunc(m.Func, vx, vy) + s.report(eq, vx, vy) + return true +} + +func (s *state) callTRFunc(f, v reflect.Value) reflect.Value { + v = sanitizeValue(v, f.Type().In(0)) + if !s.dynChecker.Next() { + return f.Call([]reflect.Value{v})[0] + } + + // Run the function twice and ensure that we get the same results back. + // We run in goroutines so that the race detector (if enabled) can detect + // unsafe mutations to the input. + c := make(chan reflect.Value) + go detectRaces(c, f, v) + want := f.Call([]reflect.Value{v})[0] + if got := <-c; !s.statelessCompare(got, want).Equal() { + // To avoid false-positives with non-reflexive equality operations, + // we sanity check whether a value is equal to itself. + if !s.statelessCompare(want, want).Equal() { + return want + } + fn := getFuncName(f.Pointer()) + panic(fmt.Sprintf("non-deterministic function detected: %s", fn)) + } + return want +} + +func (s *state) callTTBFunc(f, x, y reflect.Value) bool { + x = sanitizeValue(x, f.Type().In(0)) + y = sanitizeValue(y, f.Type().In(1)) + if !s.dynChecker.Next() { + return f.Call([]reflect.Value{x, y})[0].Bool() + } + + // Swapping the input arguments is sufficient to check that + // f is symmetric and deterministic. + // We run in goroutines so that the race detector (if enabled) can detect + // unsafe mutations to the input. 
+ c := make(chan reflect.Value) + go detectRaces(c, f, y, x) + want := f.Call([]reflect.Value{x, y})[0].Bool() + if got := <-c; !got.IsValid() || got.Bool() != want { + fn := getFuncName(f.Pointer()) + panic(fmt.Sprintf("non-deterministic or non-symmetric function detected: %s", fn)) + } + return want +} + +func detectRaces(c chan<- reflect.Value, f reflect.Value, vs ...reflect.Value) { + var ret reflect.Value + defer func() { + recover() // Ignore panics, let the other call to f panic instead + c <- ret + }() + ret = f.Call(vs)[0] +} + +// sanitizeValue converts nil interfaces of type T to those of type R, +// assuming that T is assignable to R. +// Otherwise, it returns the input value as is. +func sanitizeValue(v reflect.Value, t reflect.Type) reflect.Value { + // TODO(dsnet): Remove this hacky workaround. + // See https://golang.org/issue/22143 + if v.Kind() == reflect.Interface && v.IsNil() && v.Type() != t { + return reflect.New(t).Elem() + } + return v +} + +func (s *state) compareArray(vx, vy reflect.Value, t reflect.Type) { + step := &sliceIndex{pathStep{t.Elem()}, 0, 0} + s.curPath.push(step) + + // Compute an edit-script for slices vx and vy. + es := diff.Difference(vx.Len(), vy.Len(), func(ix, iy int) diff.Result { + step.xkey, step.ykey = ix, iy + return s.statelessCompare(vx.Index(ix), vy.Index(iy)) + }) + + // Report the entire slice as is if the arrays are of primitive kind, + // and the arrays are different enough. + isPrimitive := false + switch t.Elem().Kind() { + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, + reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr, + reflect.Bool, reflect.Float32, reflect.Float64, reflect.Complex64, reflect.Complex128: + isPrimitive = true + } + if isPrimitive && es.Dist() > (vx.Len()+vy.Len())/4 { + s.curPath.pop() // Pop first since we are reporting the whole slice + s.report(false, vx, vy) + return + } + + // Replay the edit-script. 
+ var ix, iy int + for _, e := range es { + switch e { + case diff.UniqueX: + step.xkey, step.ykey = ix, -1 + s.report(false, vx.Index(ix), nothing) + ix++ + case diff.UniqueY: + step.xkey, step.ykey = -1, iy + s.report(false, nothing, vy.Index(iy)) + iy++ + default: + step.xkey, step.ykey = ix, iy + if e == diff.Identity { + s.report(true, vx.Index(ix), vy.Index(iy)) + } else { + s.compareAny(vx.Index(ix), vy.Index(iy)) + } + ix++ + iy++ + } + } + s.curPath.pop() + return +} + +func (s *state) compareMap(vx, vy reflect.Value, t reflect.Type) { + if vx.IsNil() || vy.IsNil() { + s.report(vx.IsNil() && vy.IsNil(), vx, vy) + return + } + + // We combine and sort the two map keys so that we can perform the + // comparisons in a deterministic order. + step := &mapIndex{pathStep: pathStep{t.Elem()}} + s.curPath.push(step) + defer s.curPath.pop() + for _, k := range value.SortKeys(append(vx.MapKeys(), vy.MapKeys()...)) { + step.key = k + vvx := vx.MapIndex(k) + vvy := vy.MapIndex(k) + switch { + case vvx.IsValid() && vvy.IsValid(): + s.compareAny(vvx, vvy) + case vvx.IsValid() && !vvy.IsValid(): + s.report(false, vvx, nothing) + case !vvx.IsValid() && vvy.IsValid(): + s.report(false, nothing, vvy) + default: + // It is possible for both vvx and vvy to be invalid if the + // key contained a NaN value in it. There is no way in + // reflection to be able to retrieve these values. 
+ // See https://golang.org/issue/11104 + panic(fmt.Sprintf("%#v has map key with NaNs", s.curPath)) + } + } +} + +func (s *state) compareStruct(vx, vy reflect.Value, t reflect.Type) { + var vax, vay reflect.Value // Addressable versions of vx and vy + + step := &structField{} + s.curPath.push(step) + defer s.curPath.pop() + for i := 0; i < t.NumField(); i++ { + vvx := vx.Field(i) + vvy := vy.Field(i) + step.typ = t.Field(i).Type + step.name = t.Field(i).Name + step.idx = i + step.unexported = !isExported(step.name) + if step.unexported { + // Defer checking of unexported fields until later to give an + // Ignore a chance to ignore the field. + if !vax.IsValid() || !vay.IsValid() { + // For unsafeRetrieveField to work, the parent struct must + // be addressable. Create a new copy of the values if + // necessary to make them addressable. + vax = makeAddressable(vx) + vay = makeAddressable(vy) + } + step.force = s.exporters[t] + step.pvx = vax + step.pvy = vay + step.field = t.Field(i) + } + s.compareAny(vvx, vvy) + } +} + +// report records the result of a single comparison. +// It also calls Report if any reporter is registered. +func (s *state) report(eq bool, vx, vy reflect.Value) { + if eq { + s.result.NSame++ + } else { + s.result.NDiff++ + } + if s.reporter != nil { + s.reporter.Report(vx, vy, eq, s.curPath) + } +} + +// dynChecker tracks the state needed to periodically perform checks that +// user-provided functions are symmetric and deterministic. +// The zero value is safe for immediate use. +type dynChecker struct{ curr, next int } + +// Next increments the state and reports whether a check should be performed. +// +// Checks occur every Nth function call, where N is a triangular number: +// 0 1 3 6 10 15 21 28 36 45 55 66 78 91 105 120 136 153 171 190 ... +// See https://en.wikipedia.org/wiki/Triangular_number +// +// This sequence ensures that the cost of checks drops significantly as +// the number of function calls grows larger.
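The triangular cadence documented above can be replayed in isolation. The following is a standalone sketch, not part of the patch; the `checker` type below mirrors the counter logic of the unexported `dynChecker` rather than importing it:

```go
package main

import "fmt"

// checker mirrors the counter logic of dynChecker above.
type checker struct{ curr, next int }

// Next reports whether a check should fire on this call.
func (c *checker) Next() bool {
	ok := c.curr == c.next
	if ok {
		c.curr = 0
		c.next++
	}
	c.curr++
	return ok
}

// checkedCalls returns the 0-indexed call numbers, out of the first n
// calls, on which Next fired.
func checkedCalls(n int) []int {
	var c checker
	var fired []int
	for i := 0; i < n; i++ {
		if c.Next() {
			fired = append(fired, i)
		}
	}
	return fired
}

func main() {
	// The fired calls land exactly on the triangular numbers.
	fmt.Println(checkedCalls(20)) // [0 1 3 6 10 15]
}
```

Because the gap between consecutive checks grows by one each time, the amortized cost of checking tends to zero as the call count grows.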
+func (dc *dynChecker) Next() bool { + ok := dc.curr == dc.next + if ok { + dc.curr = 0 + dc.next++ + } + dc.curr++ + return ok +} + +// makeAddressable returns a value that is always addressable. +// It returns the input verbatim if it is already addressable, +// otherwise it creates a new value and returns an addressable copy. +func makeAddressable(v reflect.Value) reflect.Value { + if v.CanAddr() { + return v + } + vc := reflect.New(v.Type()).Elem() + vc.Set(v) + return vc +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_disable.go b/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_disable.go new file mode 100644 index 000000000000..42afa4960efa --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_disable.go @@ -0,0 +1,17 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// +build !debug + +package diff + +var debug debugger + +type debugger struct{} + +func (debugger) Begin(_, _ int, f EqualFunc, _, _ *EditScript) EqualFunc { + return f +} +func (debugger) Update() {} +func (debugger) Finish() {} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_enable.go b/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_enable.go new file mode 100644 index 000000000000..fd9f7f177399 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/diff/debug_enable.go @@ -0,0 +1,122 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. 
+ +// +build debug + +package diff + +import ( + "fmt" + "strings" + "sync" + "time" +) + +// The algorithm can be seen running in real-time by enabling debugging: +// go test -tags=debug -v +// +// Example output: +// === RUN TestDifference/#34 +// ┌───────────────────────────────┐ +// │ \ · · · · · · · · · · · · · · │ +// │ · # · · · · · · · · · · · · · │ +// │ · \ · · · · · · · · · · · · · │ +// │ · · \ · · · · · · · · · · · · │ +// │ · · · X # · · · · · · · · · · │ +// │ · · · # \ · · · · · · · · · · │ +// │ · · · · · # # · · · · · · · · │ +// │ · · · · · # \ · · · · · · · · │ +// │ · · · · · · · \ · · · · · · · │ +// │ · · · · · · · · \ · · · · · · │ +// │ · · · · · · · · · \ · · · · · │ +// │ · · · · · · · · · · \ · · # · │ +// │ · · · · · · · · · · · \ # # · │ +// │ · · · · · · · · · · · # # # · │ +// │ · · · · · · · · · · # # # # · │ +// │ · · · · · · · · · # # # # # · │ +// │ · · · · · · · · · · · · · · \ │ +// └───────────────────────────────┘ +// [.Y..M.XY......YXYXY.|] +// +// The grid represents the edit-graph where the horizontal axis represents +// list X and the vertical axis represents list Y. The start of the two lists +// is the top-left, while the ends are the bottom-right. The '·' represents +// an unexplored node in the graph. The '\' indicates that the two symbols +// from list X and Y are equal. The 'X' indicates that two symbols are similar +// (but not exactly equal) to each other. The '#' indicates that the two symbols +// are different (and not similar). The algorithm traverses this graph trying to +// make the paths starting in the top-left and the bottom-right connect. +// +// The series of '.', 'X', 'Y', and 'M' characters at the bottom represents +// the currently established path from the forward and reverse searches, +// separated by a '|' character. 
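The `.`, `X`, `Y`, and `M` characters in the status line use the EditScript string notation defined in diff.go later in this patch, so the distance and list lengths can be read straight off such a string. A standalone sketch (reimplementing, not importing, the `Dist`, `LenX`, and `LenY` accessors):

```go
package main

import (
	"fmt"
	"strings"
)

// dist, lenX, and lenY mirror EditScript.Dist, LenX, and LenY from
// diff.go, but operate on the string notation:
// '.'=Identity, 'X'=UniqueX, 'Y'=UniqueY, 'M'=Modified.
func dist(es string) int { return len(es) - strings.Count(es, ".") }
func lenX(es string) int { return len(es) - strings.Count(es, "Y") }
func lenY(es string) int { return len(es) - strings.Count(es, "X") }

func main() {
	// A hypothetical edit-script: three identities, one symbol unique
	// to each list, and one modification.
	es := "..XY.M"
	fmt.Println(dist(es), lenX(es), lenY(es)) // 3 5 5
}
```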
+ +const ( + updateDelay = 100 * time.Millisecond + finishDelay = 500 * time.Millisecond + ansiTerminal = true // ANSI escape codes used to move terminal cursor +) + +var debug debugger + +type debugger struct { + sync.Mutex + p1, p2 EditScript + fwdPath, revPath *EditScript + grid []byte + lines int +} + +func (dbg *debugger) Begin(nx, ny int, f EqualFunc, p1, p2 *EditScript) EqualFunc { + dbg.Lock() + dbg.fwdPath, dbg.revPath = p1, p2 + top := "┌─" + strings.Repeat("──", nx) + "┐\n" + row := "│ " + strings.Repeat("· ", nx) + "│\n" + btm := "└─" + strings.Repeat("──", nx) + "┘\n" + dbg.grid = []byte(top + strings.Repeat(row, ny) + btm) + dbg.lines = strings.Count(dbg.String(), "\n") + fmt.Print(dbg) + + // Wrap the EqualFunc so that we can intercept each result. + return func(ix, iy int) (r Result) { + cell := dbg.grid[len(top)+iy*len(row):][len("│ ")+len("· ")*ix:][:len("·")] + for i := range cell { + cell[i] = 0 // Zero out the multiple bytes of UTF-8 middle-dot + } + switch r = f(ix, iy); { + case r.Equal(): + cell[0] = '\\' + case r.Similar(): + cell[0] = 'X' + default: + cell[0] = '#' + } + return + } +} + +func (dbg *debugger) Update() { + dbg.print(updateDelay) +} + +func (dbg *debugger) Finish() { + dbg.print(finishDelay) + dbg.Unlock() +} + +func (dbg *debugger) String() string { + dbg.p1, dbg.p2 = *dbg.fwdPath, dbg.p2[:0] + for i := len(*dbg.revPath) - 1; i >= 0; i-- { + dbg.p2 = append(dbg.p2, (*dbg.revPath)[i]) + } + return fmt.Sprintf("%s[%v|%v]\n\n", dbg.grid, dbg.p1, dbg.p2) +} + +func (dbg *debugger) print(d time.Duration) { + if ansiTerminal { + fmt.Printf("\x1b[%dA", dbg.lines) // Reset terminal cursor + } + fmt.Print(dbg) + time.Sleep(d) +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/diff/diff.go b/vendor/github.com/google/go-cmp/cmp/internal/diff/diff.go new file mode 100644 index 000000000000..260befea2fd7 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/diff/diff.go @@ -0,0 +1,363 @@ +// Copyright 2017, The Go 
Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// Package diff implements an algorithm for producing edit-scripts. +// The edit-script is a sequence of operations needed to transform one list +// of symbols into another (or vice-versa). The edits allowed are insertions, +// deletions, and modifications. The summation of all edits is called the +// Levenshtein distance as this problem is well-known in computer science. +// +// This package prioritizes performance over accuracy. That is, the run time +// is more important than obtaining a minimal Levenshtein distance. +package diff + +// EditType represents a single operation within an edit-script. +type EditType uint8 + +const ( + // Identity indicates that a symbol pair is identical in both list X and Y. + Identity EditType = iota + // UniqueX indicates that a symbol only exists in X and not Y. + UniqueX + // UniqueY indicates that a symbol only exists in Y and not X. + UniqueY + // Modified indicates that a symbol pair is a modification of each other. + Modified +) + +// EditScript represents the series of differences between two lists. +type EditScript []EditType + +// String returns a human-readable string representing the edit-script where +// Identity, UniqueX, UniqueY, and Modified are represented by the +// '.', 'X', 'Y', and 'M' characters, respectively. +func (es EditScript) String() string { + b := make([]byte, len(es)) + for i, e := range es { + switch e { + case Identity: + b[i] = '.' + case UniqueX: + b[i] = 'X' + case UniqueY: + b[i] = 'Y' + case Modified: + b[i] = 'M' + default: + panic("invalid edit-type") + } + } + return string(b) +} + +// stats returns a histogram of the number of each type of edit operation. 
+func (es EditScript) stats() (s struct{ NI, NX, NY, NM int }) { + for _, e := range es { + switch e { + case Identity: + s.NI++ + case UniqueX: + s.NX++ + case UniqueY: + s.NY++ + case Modified: + s.NM++ + default: + panic("invalid edit-type") + } + } + return +} + +// Dist is the Levenshtein distance and is guaranteed to be 0 if and only if +// lists X and Y are equal. +func (es EditScript) Dist() int { return len(es) - es.stats().NI } + +// LenX is the length of the X list. +func (es EditScript) LenX() int { return len(es) - es.stats().NY } + +// LenY is the length of the Y list. +func (es EditScript) LenY() int { return len(es) - es.stats().NX } + +// EqualFunc reports whether the symbols at indexes ix and iy are equal. +// When called by Difference, the index is guaranteed to be within nx and ny. +type EqualFunc func(ix int, iy int) Result + +// Result is the result of comparison. +// NSame is the number of sub-elements that are equal. +// NDiff is the number of sub-elements that are not equal. +type Result struct{ NSame, NDiff int } + +// Equal indicates whether the symbols are equal. Two symbols are equal +// if and only if NDiff == 0. If Equal, then they are also Similar. +func (r Result) Equal() bool { return r.NDiff == 0 } + +// Similar indicates whether two symbols are similar and may be represented +// by using the Modified type. As a special case, we consider binary comparisons +// (i.e., those that return Result{1, 0} or Result{0, 1}) to be similar. +// +// The exact ratio of NSame to NDiff to determine similarity may change. +func (r Result) Similar() bool { + // Use NSame+1 to offset NSame so that binary comparisons are similar. + return r.NSame+1 >= r.NDiff +} + +// Difference computes an edit-script for two lists of lengths nx and ny, +// using the definition of equality provided as f. +// +// The returned edit-script is a sequence of operations +// needed to convert one list into the other.
The following invariants for +// the edit-script are maintained: +// • eq == (es.Dist()==0) +// • nx == es.LenX() +// • ny == es.LenY() +// +// This algorithm is not guaranteed to be an optimal solution (i.e., one that +// produces an edit-script with a minimal Levenshtein distance). This algorithm +// favors performance over optimality. The exact output is not guaranteed to +// be stable and may change over time. +func Difference(nx, ny int, f EqualFunc) (es EditScript) { + // This algorithm is based on traversing what is known as an "edit-graph". + // See Figure 1 from "An O(ND) Difference Algorithm and Its Variations" + // by Eugene W. Myers. Since D can be as large as N itself, this is + // effectively O(N^2). Unlike the algorithm from that paper, we are not + // interested in the optimal path, but at least some "decent" path. + // + // For example, let X and Y be lists of symbols: + // X = [A B C A B B A] + // Y = [C B A B A C] + // + // The edit-graph can be drawn as the following: + // A B C A B B A + // ┌─────────────┐ + // C │_|_|\|_|_|_|_│ 0 + // B │_|\|_|_|\|\|_│ 1 + // A │\|_|_|\|_|_|\│ 2 + // B │_|\|_|_|\|\|_│ 3 + // A │\|_|_|\|_|_|\│ 4 + // C │ | |\| | | | │ 5 + // └─────────────┘ 6 + // 0 1 2 3 4 5 6 7 + // + // List X is written along the horizontal axis, while list Y is written + // along the vertical axis. At any point on this grid, if the symbol in + // list X matches the corresponding symbol in list Y, then a '\' is drawn. + // The goal of any minimal edit-script algorithm is to find a path from the + // top-left corner to the bottom-right corner, while traveling through the + // fewest horizontal or vertical edges. + // A horizontal edge is equivalent to inserting a symbol from list X. + // A vertical edge is equivalent to inserting a symbol from list Y. + // A diagonal edge is equivalent to a matching symbol between both X and Y. 
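For a reference point, the minimal edit distance between the example lists X and Y above can be computed with the textbook dynamic-programming recurrence (insertions, deletions, and modifications each costing 1). This standalone sketch is only for intuition; as the surrounding comments note, the package itself deliberately trades this optimality for speed:

```go
package main

import "fmt"

// levenshtein computes the minimal number of insertions, deletions, and
// modifications needed to turn x into y, via the classic DP recurrence.
func levenshtein(x, y string) int {
	prev := make([]int, len(y)+1)
	curr := make([]int, len(y)+1)
	for j := range prev {
		prev[j] = j // turning "" into y[:j] takes j insertions
	}
	for i := 1; i <= len(x); i++ {
		curr[0] = i // turning x[:i] into "" takes i deletions
		for j := 1; j <= len(y); j++ {
			cost := 1
			if x[i-1] == y[j-1] {
				cost = 0
			}
			curr[j] = min3(prev[j]+1, curr[j-1]+1, prev[j-1]+cost)
		}
		prev, curr = curr, prev
	}
	return prev[len(y)]
}

func min3(a, b, c int) int {
	if b < a {
		a = b
	}
	if c < a {
		a = c
	}
	return a
}

func main() {
	// The example lists from the edit-graph comment above.
	fmt.Println(levenshtein("ABCABBA", "CBABAC")) // 4
}
```

Any edit-script produced by Difference for these lists therefore has Dist() of at least 4; the greedy search may settle for more.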
+ + // Invariants: + // • 0 ≤ fwdPath.X ≤ (fwdFrontier.X, revFrontier.X) ≤ revPath.X ≤ nx + // • 0 ≤ fwdPath.Y ≤ (fwdFrontier.Y, revFrontier.Y) ≤ revPath.Y ≤ ny + // + // In general: + // • fwdFrontier.X < revFrontier.X + // • fwdFrontier.Y < revFrontier.Y + // unless it is time for the algorithm to terminate. + fwdPath := path{+1, point{0, 0}, make(EditScript, 0, (nx+ny)/2)} + revPath := path{-1, point{nx, ny}, make(EditScript, 0)} + fwdFrontier := fwdPath.point // Forward search frontier + revFrontier := revPath.point // Reverse search frontier + + // Search budget bounds the cost of searching for better paths. + // The longest sequence of non-matching symbols that can be tolerated is + // approximately the square-root of the search budget. + searchBudget := 4 * (nx + ny) // O(n) + + // The algorithm below is a greedy, meet-in-the-middle algorithm for + // computing sub-optimal edit-scripts between two lists. + // + // The algorithm is approximately as follows: + // • Searching for differences switches back-and-forth between + // a search that starts at the beginning (the top-left corner), and + // a search that starts at the end (the bottom-right corner). The goal of + // the search is to connect with the search from the opposite corner. + // • As we search, we build a path in a greedy manner, where the first + // match seen is added to the path (this is sub-optimal, but provides a + // decent result in practice). When matches are found, we try the next pair + // of symbols in the lists and follow all matches as far as possible. + // • When searching for matches, we search along a diagonal going through + // the "frontier" point. If no matches are found, we advance the + // frontier towards the opposite corner. + // • This algorithm terminates when either the X coordinates or the + // Y coordinates of the forward and reverse frontier points ever intersect.
+ // + // This algorithm is correct even if searching only in the forward direction + // or in the reverse direction. We do both because it is commonly observed + // that two lists differ because elements were added to the front + // or end of one of the lists. + // + // Running the tests with the "debug" build tag prints a visualization of + // the algorithm running in real-time. This is educational for understanding + // how the algorithm works. See debug_enable.go. + f = debug.Begin(nx, ny, f, &fwdPath.es, &revPath.es) + for { + // Forward search from the beginning. + if fwdFrontier.X >= revFrontier.X || fwdFrontier.Y >= revFrontier.Y || searchBudget == 0 { + break + } + for stop1, stop2, i := false, false, 0; !(stop1 && stop2) && searchBudget > 0; i++ { + // Search in a diagonal pattern for a match. + z := zigzag(i) + p := point{fwdFrontier.X + z, fwdFrontier.Y - z} + switch { + case p.X >= revPath.X || p.Y < fwdPath.Y: + stop1 = true // Hit top-right corner + case p.Y >= revPath.Y || p.X < fwdPath.X: + stop2 = true // Hit bottom-left corner + case f(p.X, p.Y).Equal(): + // Match found, so connect the path to this point. + fwdPath.connect(p, f) + fwdPath.append(Identity) + // Follow sequence of matches as far as possible. + for fwdPath.X < revPath.X && fwdPath.Y < revPath.Y { + if !f(fwdPath.X, fwdPath.Y).Equal() { + break + } + fwdPath.append(Identity) + } + fwdFrontier = fwdPath.point + stop1, stop2 = true, true + default: + searchBudget-- // Match not found + } + debug.Update() + } + // Advance the frontier towards reverse point. + if revPath.X-fwdFrontier.X >= revPath.Y-fwdFrontier.Y { + fwdFrontier.X++ + } else { + fwdFrontier.Y++ + } + + // Reverse search from the end. + if fwdFrontier.X >= revFrontier.X || fwdFrontier.Y >= revFrontier.Y || searchBudget == 0 { + break + } + for stop1, stop2, i := false, false, 0; !(stop1 && stop2) && searchBudget > 0; i++ { + // Search in a diagonal pattern for a match.
+ z := zigzag(i) + p := point{revFrontier.X - z, revFrontier.Y + z} + switch { + case fwdPath.X >= p.X || revPath.Y < p.Y: + stop1 = true // Hit bottom-left corner + case fwdPath.Y >= p.Y || revPath.X < p.X: + stop2 = true // Hit top-right corner + case f(p.X-1, p.Y-1).Equal(): + // Match found, so connect the path to this point. + revPath.connect(p, f) + revPath.append(Identity) + // Follow sequence of matches as far as possible. + for fwdPath.X < revPath.X && fwdPath.Y < revPath.Y { + if !f(revPath.X-1, revPath.Y-1).Equal() { + break + } + revPath.append(Identity) + } + revFrontier = revPath.point + stop1, stop2 = true, true + default: + searchBudget-- // Match not found + } + debug.Update() + } + // Advance the frontier towards forward point. + if revFrontier.X-fwdPath.X >= revFrontier.Y-fwdPath.Y { + revFrontier.X-- + } else { + revFrontier.Y-- + } + } + + // Join the forward and reverse paths and then append the reverse path. + fwdPath.connect(revPath.point, f) + for i := len(revPath.es) - 1; i >= 0; i-- { + t := revPath.es[i] + revPath.es = revPath.es[:i] + fwdPath.append(t) + } + debug.Finish() + return fwdPath.es +} + +type path struct { + dir int // +1 if forward, -1 if reverse + point // Leading point of the EditScript path + es EditScript +} + +// connect appends any necessary Identity, Modified, UniqueX, or UniqueY types +// to the edit-script to connect p.point to dst. +func (p *path) connect(dst point, f EqualFunc) { + if p.dir > 0 { + // Connect in forward direction. + for dst.X > p.X && dst.Y > p.Y { + switch r := f(p.X, p.Y); { + case r.Equal(): + p.append(Identity) + case r.Similar(): + p.append(Modified) + case dst.X-p.X >= dst.Y-p.Y: + p.append(UniqueX) + default: + p.append(UniqueY) + } + } + for dst.X > p.X { + p.append(UniqueX) + } + for dst.Y > p.Y { + p.append(UniqueY) + } + } else { + // Connect in reverse direction. 
+ for p.X > dst.X && p.Y > dst.Y { + switch r := f(p.X-1, p.Y-1); { + case r.Equal(): + p.append(Identity) + case r.Similar(): + p.append(Modified) + case p.Y-dst.Y >= p.X-dst.X: + p.append(UniqueY) + default: + p.append(UniqueX) + } + } + for p.X > dst.X { + p.append(UniqueX) + } + for p.Y > dst.Y { + p.append(UniqueY) + } + } +} + +func (p *path) append(t EditType) { + p.es = append(p.es, t) + switch t { + case Identity, Modified: + p.add(p.dir, p.dir) + case UniqueX: + p.add(p.dir, 0) + case UniqueY: + p.add(0, p.dir) + } + debug.Update() +} + +type point struct{ X, Y int } + +func (p *point) add(dx, dy int) { p.X += dx; p.Y += dy } + +// zigzag maps a consecutive sequence of integers to a zig-zag sequence. +// [0 1 2 3 4 5 ...] => [0 -1 +1 -2 +2 ...] +func zigzag(x int) int { + if x&1 != 0 { + x = ^x + } + return x >> 1 +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/function/func.go b/vendor/github.com/google/go-cmp/cmp/internal/function/func.go new file mode 100644 index 000000000000..4c35ff11ee13 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/function/func.go @@ -0,0 +1,49 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// Package function identifies function types. +package function + +import "reflect" + +type funcType int + +const ( + _ funcType = iota + + ttbFunc // func(T, T) bool + tibFunc // func(T, I) bool + trFunc // func(T) R + + Equal = ttbFunc // func(T, T) bool + EqualAssignable = tibFunc // func(T, I) bool; encapsulates func(T, T) bool + Transformer = trFunc // func(T) R + ValueFilter = ttbFunc // func(T, T) bool + Less = ttbFunc // func(T, T) bool +) + +var boolType = reflect.TypeOf(true) + +// IsType reports whether the reflect.Type is of the specified function type. 
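The `zigzag` helper above drives the diagonal search pattern by mapping a plain counter to alternating offsets, using only a bitwise complement and an arithmetic shift. A standalone sketch of the same mapping (the surrounding diff context is omitted):

```go
package main

import "fmt"

// zigzag maps a consecutive sequence of integers to a zig-zag sequence,
// mirroring the helper in the diff: [0 1 2 3 4 ...] => [0 -1 +1 -2 +2 ...].
func zigzag(x int) int {
	if x&1 != 0 {
		x = ^x // odd inputs map to negative offsets
	}
	return x >> 1 // arithmetic shift halves the magnitude
}

func main() {
	for i := 0; i < 5; i++ {
		fmt.Printf("%d -> %d\n", i, zigzag(i))
	}
}
```

Running this prints the pairs `0 -> 0`, `1 -> -1`, `2 -> 1`, `3 -> -2`, `4 -> 2`, which is exactly the alternating probe order the forward and reverse frontiers use.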
+func IsType(t reflect.Type, ft funcType) bool { + if t == nil || t.Kind() != reflect.Func || t.IsVariadic() { + return false + } + ni, no := t.NumIn(), t.NumOut() + switch ft { + case ttbFunc: // func(T, T) bool + if ni == 2 && no == 1 && t.In(0) == t.In(1) && t.Out(0) == boolType { + return true + } + case tibFunc: // func(T, I) bool + if ni == 2 && no == 1 && t.In(0).AssignableTo(t.In(1)) && t.Out(0) == boolType { + return true + } + case trFunc: // func(T) R + if ni == 1 && no == 1 { + return true + } + } + return false +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/value/format.go b/vendor/github.com/google/go-cmp/cmp/internal/value/format.go new file mode 100644 index 000000000000..657e508779db --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/value/format.go @@ -0,0 +1,277 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// Package value provides functionality for reflect.Value types. +package value + +import ( + "fmt" + "reflect" + "strconv" + "strings" + "unicode" +) + +var stringerIface = reflect.TypeOf((*fmt.Stringer)(nil)).Elem() + +// Format formats the value v as a string. +// +// This is similar to fmt.Sprintf("%+v", v) except this: +// * Prints the type unless it can be elided +// * Avoids printing struct fields that are zero +// * Prints a nil-slice as being nil, not empty +// * Prints map entries in deterministic order +func Format(v reflect.Value, conf FormatConfig) string { + conf.printType = true + conf.followPointers = true + conf.realPointers = true + return formatAny(v, conf, nil) +} + +type FormatConfig struct { + UseStringer bool // Should the String method be used if available? + printType bool // Should we print the type before the value? + PrintPrimitiveType bool // Should we print the type of primitives? + followPointers bool // Should we recursively follow pointers? 
+ realPointers bool // Should we print the real address of pointers? +} + +func formatAny(v reflect.Value, conf FormatConfig, visited map[uintptr]bool) string { + // TODO: Should this be a multi-line printout in certain situations? + + if !v.IsValid() { + return "" + } + if conf.UseStringer && v.Type().Implements(stringerIface) && v.CanInterface() { + if (v.Kind() == reflect.Ptr || v.Kind() == reflect.Interface) && v.IsNil() { + return "" + } + + const stringerPrefix = "s" // Indicates that the String method was used + s := v.Interface().(fmt.Stringer).String() + return stringerPrefix + formatString(s) + } + + switch v.Kind() { + case reflect.Bool: + return formatPrimitive(v.Type(), v.Bool(), conf) + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return formatPrimitive(v.Type(), v.Int(), conf) + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + if v.Type().PkgPath() == "" || v.Kind() == reflect.Uintptr { + // Unnamed uints are usually bytes or words, so use hexadecimal. 
+ return formatPrimitive(v.Type(), formatHex(v.Uint()), conf) + } + return formatPrimitive(v.Type(), v.Uint(), conf) + case reflect.Float32, reflect.Float64: + return formatPrimitive(v.Type(), v.Float(), conf) + case reflect.Complex64, reflect.Complex128: + return formatPrimitive(v.Type(), v.Complex(), conf) + case reflect.String: + return formatPrimitive(v.Type(), formatString(v.String()), conf) + case reflect.UnsafePointer, reflect.Chan, reflect.Func: + return formatPointer(v, conf) + case reflect.Ptr: + if v.IsNil() { + if conf.printType { + return fmt.Sprintf("(%v)(nil)", v.Type()) + } + return "" + } + if visited[v.Pointer()] || !conf.followPointers { + return formatPointer(v, conf) + } + visited = insertPointer(visited, v.Pointer()) + return "&" + formatAny(v.Elem(), conf, visited) + case reflect.Interface: + if v.IsNil() { + if conf.printType { + return fmt.Sprintf("%v(nil)", v.Type()) + } + return "" + } + return formatAny(v.Elem(), conf, visited) + case reflect.Slice: + if v.IsNil() { + if conf.printType { + return fmt.Sprintf("%v(nil)", v.Type()) + } + return "" + } + if visited[v.Pointer()] { + return formatPointer(v, conf) + } + visited = insertPointer(visited, v.Pointer()) + fallthrough + case reflect.Array: + var ss []string + subConf := conf + subConf.printType = v.Type().Elem().Kind() == reflect.Interface + for i := 0; i < v.Len(); i++ { + s := formatAny(v.Index(i), subConf, visited) + ss = append(ss, s) + } + s := fmt.Sprintf("{%s}", strings.Join(ss, ", ")) + if conf.printType { + return v.Type().String() + s + } + return s + case reflect.Map: + if v.IsNil() { + if conf.printType { + return fmt.Sprintf("%v(nil)", v.Type()) + } + return "" + } + if visited[v.Pointer()] { + return formatPointer(v, conf) + } + visited = insertPointer(visited, v.Pointer()) + + var ss []string + keyConf, valConf := conf, conf + keyConf.printType = v.Type().Key().Kind() == reflect.Interface + keyConf.followPointers = false + valConf.printType = v.Type().Elem().Kind() == 
reflect.Interface + for _, k := range SortKeys(v.MapKeys()) { + sk := formatAny(k, keyConf, visited) + sv := formatAny(v.MapIndex(k), valConf, visited) + ss = append(ss, fmt.Sprintf("%s: %s", sk, sv)) + } + s := fmt.Sprintf("{%s}", strings.Join(ss, ", ")) + if conf.printType { + return v.Type().String() + s + } + return s + case reflect.Struct: + var ss []string + subConf := conf + subConf.printType = true + for i := 0; i < v.NumField(); i++ { + vv := v.Field(i) + if isZero(vv) { + continue // Elide zero value fields + } + name := v.Type().Field(i).Name + subConf.UseStringer = conf.UseStringer + s := formatAny(vv, subConf, visited) + ss = append(ss, fmt.Sprintf("%s: %s", name, s)) + } + s := fmt.Sprintf("{%s}", strings.Join(ss, ", ")) + if conf.printType { + return v.Type().String() + s + } + return s + default: + panic(fmt.Sprintf("%v kind not handled", v.Kind())) + } +} + +func formatString(s string) string { + // Use quoted string if it the same length as a raw string literal. + // Otherwise, attempt to use the raw string form. + qs := strconv.Quote(s) + if len(qs) == 1+len(s)+1 { + return qs + } + + // Disallow newlines to ensure output is a single line. + // Only allow printable runes for readability purposes. 
+ rawInvalid := func(r rune) bool { + return r == '`' || r == '\n' || !unicode.IsPrint(r) + } + if strings.IndexFunc(s, rawInvalid) < 0 { + return "`" + s + "`" + } + return qs +} + +func formatPrimitive(t reflect.Type, v interface{}, conf FormatConfig) string { + if conf.printType && (conf.PrintPrimitiveType || t.PkgPath() != "") { + return fmt.Sprintf("%v(%v)", t, v) + } + return fmt.Sprintf("%v", v) +} + +func formatPointer(v reflect.Value, conf FormatConfig) string { + p := v.Pointer() + if !conf.realPointers { + p = 0 // For deterministic printing purposes + } + s := formatHex(uint64(p)) + if conf.printType { + return fmt.Sprintf("(%v)(%s)", v.Type(), s) + } + return s +} + +func formatHex(u uint64) string { + var f string + switch { + case u <= 0xff: + f = "0x%02x" + case u <= 0xffff: + f = "0x%04x" + case u <= 0xffffff: + f = "0x%06x" + case u <= 0xffffffff: + f = "0x%08x" + case u <= 0xffffffffff: + f = "0x%010x" + case u <= 0xffffffffffff: + f = "0x%012x" + case u <= 0xffffffffffffff: + f = "0x%014x" + case u <= 0xffffffffffffffff: + f = "0x%016x" + } + return fmt.Sprintf(f, u) +} + +// insertPointer insert p into m, allocating m if necessary. +func insertPointer(m map[uintptr]bool, p uintptr) map[uintptr]bool { + if m == nil { + m = make(map[uintptr]bool) + } + m[p] = true + return m +} + +// isZero reports whether v is the zero value. +// This does not rely on Interface and so can be used on unexported fields. 
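The `formatHex` helper above pads a value to the smallest even number of hex digits that fits, so pointers and words line up visually. A self-contained sketch of the same width table:

```go
package main

import "fmt"

// formatHex prints u with the smallest even number of hex digits that fits,
// matching the width table used by the diff's formatHex helper.
func formatHex(u uint64) string {
	switch {
	case u <= 0xff:
		return fmt.Sprintf("0x%02x", u)
	case u <= 0xffff:
		return fmt.Sprintf("0x%04x", u)
	case u <= 0xffffff:
		return fmt.Sprintf("0x%06x", u)
	case u <= 0xffffffff:
		return fmt.Sprintf("0x%08x", u)
	case u <= 0xffffffffff:
		return fmt.Sprintf("0x%010x", u)
	case u <= 0xffffffffffff:
		return fmt.Sprintf("0x%012x", u)
	case u <= 0xffffffffffffff:
		return fmt.Sprintf("0x%014x", u)
	default:
		return fmt.Sprintf("0x%016x", u)
	}
}

func main() {
	fmt.Println(formatHex(0))          // 0x00
	fmt.Println(formatHex(256))        // 0x0100
	fmt.Println(formatHex(0xdeadbeef)) // 0xdeadbeef
}
```

Note the even-width padding: 256 renders as `0x0100`, not `0x100`, so values at byte boundaries stay visually aligned.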
+func isZero(v reflect.Value) bool { + switch v.Kind() { + case reflect.Bool: + return v.Bool() == false + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return v.Int() == 0 + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return v.Uint() == 0 + case reflect.Float32, reflect.Float64: + return v.Float() == 0 + case reflect.Complex64, reflect.Complex128: + return v.Complex() == 0 + case reflect.String: + return v.String() == "" + case reflect.UnsafePointer: + return v.Pointer() == 0 + case reflect.Chan, reflect.Func, reflect.Interface, reflect.Ptr, reflect.Map, reflect.Slice: + return v.IsNil() + case reflect.Array: + for i := 0; i < v.Len(); i++ { + if !isZero(v.Index(i)) { + return false + } + } + return true + case reflect.Struct: + for i := 0; i < v.NumField(); i++ { + if !isZero(v.Field(i)) { + return false + } + } + return true + } + return false +} diff --git a/vendor/github.com/google/go-cmp/cmp/internal/value/sort.go b/vendor/github.com/google/go-cmp/cmp/internal/value/sort.go new file mode 100644 index 000000000000..fe8aa27a077b --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/internal/value/sort.go @@ -0,0 +1,111 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +package value + +import ( + "fmt" + "math" + "reflect" + "sort" +) + +// SortKeys sorts a list of map keys, deduplicating keys if necessary. +// The type of each value must be comparable. +func SortKeys(vs []reflect.Value) []reflect.Value { + if len(vs) == 0 { + return vs + } + + // Sort the map keys. + sort.Sort(valueSorter(vs)) + + // Deduplicate keys (fails for NaNs). + vs2 := vs[:1] + for _, v := range vs[1:] { + if isLess(vs2[len(vs2)-1], v) { + vs2 = append(vs2, v) + } + } + return vs2 +} + +// TODO: Use sort.Slice once Google AppEngine is on Go1.8 or above. 
+type valueSorter []reflect.Value + +func (vs valueSorter) Len() int { return len(vs) } +func (vs valueSorter) Less(i, j int) bool { return isLess(vs[i], vs[j]) } +func (vs valueSorter) Swap(i, j int) { vs[i], vs[j] = vs[j], vs[i] } + +// isLess is a generic function for sorting arbitrary map keys. +// The inputs must be of the same type and must be comparable. +func isLess(x, y reflect.Value) bool { + switch x.Type().Kind() { + case reflect.Bool: + return !x.Bool() && y.Bool() + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return x.Int() < y.Int() + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr: + return x.Uint() < y.Uint() + case reflect.Float32, reflect.Float64: + fx, fy := x.Float(), y.Float() + return fx < fy || math.IsNaN(fx) && !math.IsNaN(fy) + case reflect.Complex64, reflect.Complex128: + cx, cy := x.Complex(), y.Complex() + rx, ix, ry, iy := real(cx), imag(cx), real(cy), imag(cy) + if rx == ry || (math.IsNaN(rx) && math.IsNaN(ry)) { + return ix < iy || math.IsNaN(ix) && !math.IsNaN(iy) + } + return rx < ry || math.IsNaN(rx) && !math.IsNaN(ry) + case reflect.Ptr, reflect.UnsafePointer, reflect.Chan: + return x.Pointer() < y.Pointer() + case reflect.String: + return x.String() < y.String() + case reflect.Array: + for i := 0; i < x.Len(); i++ { + if isLess(x.Index(i), y.Index(i)) { + return true + } + if isLess(y.Index(i), x.Index(i)) { + return false + } + } + return false + case reflect.Struct: + for i := 0; i < x.NumField(); i++ { + if isLess(x.Field(i), y.Field(i)) { + return true + } + if isLess(y.Field(i), x.Field(i)) { + return false + } + } + return false + case reflect.Interface: + vx, vy := x.Elem(), y.Elem() + if !vx.IsValid() || !vy.IsValid() { + return !vx.IsValid() && vy.IsValid() + } + tx, ty := vx.Type(), vy.Type() + if tx == ty { + return isLess(x.Elem(), y.Elem()) + } + if tx.Kind() != ty.Kind() { + return vx.Kind() < vy.Kind() + } + if tx.String() != 
ty.String() { + return tx.String() < ty.String() + } + if tx.PkgPath() != ty.PkgPath() { + return tx.PkgPath() < ty.PkgPath() + } + // This can happen in rare situations, so we fallback to just comparing + // the unique pointer for a reflect.Type. This guarantees deterministic + // ordering within a program, but it is obviously not stable. + return reflect.ValueOf(vx.Type()).Pointer() < reflect.ValueOf(vy.Type()).Pointer() + default: + // Must be Func, Map, or Slice; which are not comparable. + panic(fmt.Sprintf("%T is not comparable", x.Type())) + } +} diff --git a/vendor/github.com/google/go-cmp/cmp/options.go b/vendor/github.com/google/go-cmp/cmp/options.go new file mode 100644 index 000000000000..91d4b066e052 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/options.go @@ -0,0 +1,453 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +package cmp + +import ( + "fmt" + "reflect" + "runtime" + "strings" + + "github.com/google/go-cmp/cmp/internal/function" +) + +// Option configures for specific behavior of Equal and Diff. In particular, +// the fundamental Option functions (Ignore, Transformer, and Comparer), +// configure how equality is determined. +// +// The fundamental options may be composed with filters (FilterPath and +// FilterValues) to control the scope over which they are applied. +// +// The cmp/cmpopts package provides helper functions for creating options that +// may be used with Equal and Diff. +type Option interface { + // filter applies all filters and returns the option that remains. + // Each option may only read s.curPath and call s.callTTBFunc. + // + // An Options is returned only if multiple comparers or transformers + // can apply simultaneously and will only contain values of those types + // or sub-Options containing values of those types. 
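`SortKeys` and `isLess` above exist so that map entries print in a deterministic order regardless of Go's randomized map iteration. A reduced sketch of the idea, handling only int and string keys and using `sort.Slice` (the helper name `sortedKeys` is mine; the full `isLess` also orders floats, complex numbers, pointers, arrays, structs, and interfaces):

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// sortedKeys returns a map's keys in a deterministic order by sorting the
// reflect.Values with a type-specific less function, in the spirit of
// SortKeys/isLess in the diff (reduced here to int and string keys).
func sortedKeys(m interface{}) []reflect.Value {
	keys := reflect.ValueOf(m).MapKeys()
	sort.Slice(keys, func(i, j int) bool {
		x, y := keys[i], keys[j]
		switch x.Kind() {
		case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
			return x.Int() < y.Int()
		case reflect.String:
			return x.String() < y.String()
		}
		panic("unsupported key kind in this sketch")
	})
	return keys
}

func main() {
	m := map[string]int{"b": 2, "a": 1, "c": 3}
	for _, k := range sortedKeys(m) {
		fmt.Printf("%s=%d ", k.String(), m[k.String()])
	}
	fmt.Println()
}
```

This always prints `a=1 b=2 c=3`, whereas ranging over the map directly could yield any order.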
+ filter(s *state, vx, vy reflect.Value, t reflect.Type) applicableOption +} + +// applicableOption represents the following types: +// Fundamental: ignore | invalid | *comparer | *transformer +// Grouping: Options +type applicableOption interface { + Option + + // apply executes the option, which may mutate s or panic. + apply(s *state, vx, vy reflect.Value) +} + +// coreOption represents the following types: +// Fundamental: ignore | invalid | *comparer | *transformer +// Filters: *pathFilter | *valuesFilter +type coreOption interface { + Option + isCore() +} + +type core struct{} + +func (core) isCore() {} + +// Options is a list of Option values that also satisfies the Option interface. +// Helper comparison packages may return an Options value when packing multiple +// Option values into a single Option. When this package processes an Options, +// it will be implicitly expanded into a flat list. +// +// Applying a filter on an Options is equivalent to applying that same filter +// on all individual options held within. 
+type Options []Option + +func (opts Options) filter(s *state, vx, vy reflect.Value, t reflect.Type) (out applicableOption) { + for _, opt := range opts { + switch opt := opt.filter(s, vx, vy, t); opt.(type) { + case ignore: + return ignore{} // Only ignore can short-circuit evaluation + case invalid: + out = invalid{} // Takes precedence over comparer or transformer + case *comparer, *transformer, Options: + switch out.(type) { + case nil: + out = opt + case invalid: + // Keep invalid + case *comparer, *transformer, Options: + out = Options{out, opt} // Conflicting comparers or transformers + } + } + } + return out +} + +func (opts Options) apply(s *state, _, _ reflect.Value) { + const warning = "ambiguous set of applicable options" + const help = "consider using filters to ensure at most one Comparer or Transformer may apply" + var ss []string + for _, opt := range flattenOptions(nil, opts) { + ss = append(ss, fmt.Sprint(opt)) + } + set := strings.Join(ss, "\n\t") + panic(fmt.Sprintf("%s at %#v:\n\t%s\n%s", warning, s.curPath, set, help)) +} + +func (opts Options) String() string { + var ss []string + for _, opt := range opts { + ss = append(ss, fmt.Sprint(opt)) + } + return fmt.Sprintf("Options{%s}", strings.Join(ss, ", ")) +} + +// FilterPath returns a new Option where opt is only evaluated if filter f +// returns true for the current Path in the value tree. +// +// The option passed in may be an Ignore, Transformer, Comparer, Options, or +// a previously filtered Option. 
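The precedence logic in `Options.filter` above can be summarized as: `ignore` short-circuits everything, `invalid` beats comparers and transformers, and two simultaneously applicable comparers/transformers are an ambiguity (reported via `Options.apply`'s panic). A simplified model of that precedence, with the option kinds reduced to an enum (the type and function names here are my own, not from the diff):

```go
package main

import "fmt"

// kind models the categories Options.filter distinguishes.
type kind int

const (
	none kind = iota
	comparerKind // a *comparer or *transformer
	invalidKind  // unexported field encountered
	ignoreKind   // comparison ignored
	conflictKind // ambiguous set of applicable options
)

// merge folds a list of applicable options into one outcome using the same
// precedence as Options.filter in the diff.
func merge(opts []kind) kind {
	out := none
	for _, o := range opts {
		switch o {
		case ignoreKind:
			return ignoreKind // only ignore can short-circuit evaluation
		case invalidKind:
			out = invalidKind // takes precedence over comparer or transformer
		case comparerKind:
			switch out {
			case none:
				out = comparerKind
			case invalidKind:
				// keep invalid
			case comparerKind, conflictKind:
				out = conflictKind // conflicting comparers or transformers
			}
		}
	}
	return out
}

func main() {
	fmt.Println(merge([]kind{comparerKind, ignoreKind}) == ignoreKind)     // true
	fmt.Println(merge([]kind{comparerKind, comparerKind}) == conflictKind) // true
}
```

This is why the recommended fix for the "ambiguous set of applicable options" panic is to add `FilterPath`/`FilterValues` guards so at most one comparer or transformer applies at any node.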
+func FilterPath(f func(Path) bool, opt Option) Option { + if f == nil { + panic("invalid path filter function") + } + if opt := normalizeOption(opt); opt != nil { + return &pathFilter{fnc: f, opt: opt} + } + return nil +} + +type pathFilter struct { + core + fnc func(Path) bool + opt Option +} + +func (f pathFilter) filter(s *state, vx, vy reflect.Value, t reflect.Type) applicableOption { + if f.fnc(s.curPath) { + return f.opt.filter(s, vx, vy, t) + } + return nil +} + +func (f pathFilter) String() string { + fn := getFuncName(reflect.ValueOf(f.fnc).Pointer()) + return fmt.Sprintf("FilterPath(%s, %v)", fn, f.opt) +} + +// FilterValues returns a new Option where opt is only evaluated if filter f, +// which is a function of the form "func(T, T) bool", returns true for the +// current pair of values being compared. If the type of the values is not +// assignable to T, then this filter implicitly returns false. +// +// The filter function must be +// symmetric (i.e., agnostic to the order of the inputs) and +// deterministic (i.e., produces the same result when given the same inputs). +// If T is an interface, it is possible that f is called with two values with +// different concrete types that both implement T. +// +// The option passed in may be an Ignore, Transformer, Comparer, Options, or +// a previously filtered Option. 
+func FilterValues(f interface{}, opt Option) Option { + v := reflect.ValueOf(f) + if !function.IsType(v.Type(), function.ValueFilter) || v.IsNil() { + panic(fmt.Sprintf("invalid values filter function: %T", f)) + } + if opt := normalizeOption(opt); opt != nil { + vf := &valuesFilter{fnc: v, opt: opt} + if ti := v.Type().In(0); ti.Kind() != reflect.Interface || ti.NumMethod() > 0 { + vf.typ = ti + } + return vf + } + return nil +} + +type valuesFilter struct { + core + typ reflect.Type // T + fnc reflect.Value // func(T, T) bool + opt Option +} + +func (f valuesFilter) filter(s *state, vx, vy reflect.Value, t reflect.Type) applicableOption { + if !vx.IsValid() || !vy.IsValid() { + return invalid{} + } + if (f.typ == nil || t.AssignableTo(f.typ)) && s.callTTBFunc(f.fnc, vx, vy) { + return f.opt.filter(s, vx, vy, t) + } + return nil +} + +func (f valuesFilter) String() string { + fn := getFuncName(f.fnc.Pointer()) + return fmt.Sprintf("FilterValues(%s, %v)", fn, f.opt) +} + +// Ignore is an Option that causes all comparisons to be ignored. +// This value is intended to be combined with FilterPath or FilterValues. +// It is an error to pass an unfiltered Ignore option to Equal. +func Ignore() Option { return ignore{} } + +type ignore struct{ core } + +func (ignore) isFiltered() bool { return false } +func (ignore) filter(_ *state, _, _ reflect.Value, _ reflect.Type) applicableOption { return ignore{} } +func (ignore) apply(_ *state, _, _ reflect.Value) { return } +func (ignore) String() string { return "Ignore()" } + +// invalid is a sentinel Option type to indicate that some options could not +// be evaluated due to unexported fields. 
+type invalid struct{ core } + +func (invalid) filter(_ *state, _, _ reflect.Value, _ reflect.Type) applicableOption { return invalid{} } +func (invalid) apply(s *state, _, _ reflect.Value) { + const help = "consider using AllowUnexported or cmpopts.IgnoreUnexported" + panic(fmt.Sprintf("cannot handle unexported field: %#v\n%s", s.curPath, help)) +} + +// Transformer returns an Option that applies a transformation function that +// converts values of a certain type into that of another. +// +// The transformer f must be a function "func(T) R" that converts values of +// type T to those of type R and is implicitly filtered to input values +// assignable to T. The transformer must not mutate T in any way. +// +// To help prevent some cases of infinite recursive cycles applying the +// same transform to the output of itself (e.g., in the case where the +// input and output types are the same), an implicit filter is added such that +// a transformer is applicable only if that exact transformer is not already +// in the tail of the Path since the last non-Transform step. +// +// The name is a user provided label that is used as the Transform.Name in the +// transformation PathStep. If empty, an arbitrary name is used. 
+func Transformer(name string, f interface{}) Option { + v := reflect.ValueOf(f) + if !function.IsType(v.Type(), function.Transformer) || v.IsNil() { + panic(fmt.Sprintf("invalid transformer function: %T", f)) + } + if name == "" { + name = "λ" // Lambda-symbol as place-holder for anonymous transformer + } + if !isValid(name) { + panic(fmt.Sprintf("invalid name: %q", name)) + } + tr := &transformer{name: name, fnc: reflect.ValueOf(f)} + if ti := v.Type().In(0); ti.Kind() != reflect.Interface || ti.NumMethod() > 0 { + tr.typ = ti + } + return tr +} + +type transformer struct { + core + name string + typ reflect.Type // T + fnc reflect.Value // func(T) R +} + +func (tr *transformer) isFiltered() bool { return tr.typ != nil } + +func (tr *transformer) filter(s *state, _, _ reflect.Value, t reflect.Type) applicableOption { + for i := len(s.curPath) - 1; i >= 0; i-- { + if t, ok := s.curPath[i].(*transform); !ok { + break // Hit most recent non-Transform step + } else if tr == t.trans { + return nil // Cannot directly use same Transform + } + } + if tr.typ == nil || t.AssignableTo(tr.typ) { + return tr + } + return nil +} + +func (tr *transformer) apply(s *state, vx, vy reflect.Value) { + // Update path before calling the Transformer so that dynamic checks + // will use the updated path. + s.curPath.push(&transform{pathStep{tr.fnc.Type().Out(0)}, tr}) + defer s.curPath.pop() + + vx = s.callTRFunc(tr.fnc, vx) + vy = s.callTRFunc(tr.fnc, vy) + s.compareAny(vx, vy) +} + +func (tr transformer) String() string { + return fmt.Sprintf("Transformer(%s, %s)", tr.name, getFuncName(tr.fnc.Pointer())) +} + +// Comparer returns an Option that determines whether two values are equal +// to each other. +// +// The comparer f must be a function "func(T, T) bool" and is implicitly +// filtered to input values assignable to T. If T is an interface, it is +// possible that f is called with two values of different concrete types that +// both implement T. 
+// +// The equality function must be: +// • Symmetric: equal(x, y) == equal(y, x) +// • Deterministic: equal(x, y) == equal(x, y) +// • Pure: equal(x, y) does not modify x or y +func Comparer(f interface{}) Option { + v := reflect.ValueOf(f) + if !function.IsType(v.Type(), function.Equal) || v.IsNil() { + panic(fmt.Sprintf("invalid comparer function: %T", f)) + } + cm := &comparer{fnc: v} + if ti := v.Type().In(0); ti.Kind() != reflect.Interface || ti.NumMethod() > 0 { + cm.typ = ti + } + return cm +} + +type comparer struct { + core + typ reflect.Type // T + fnc reflect.Value // func(T, T) bool +} + +func (cm *comparer) isFiltered() bool { return cm.typ != nil } + +func (cm *comparer) filter(_ *state, _, _ reflect.Value, t reflect.Type) applicableOption { + if cm.typ == nil || t.AssignableTo(cm.typ) { + return cm + } + return nil +} + +func (cm *comparer) apply(s *state, vx, vy reflect.Value) { + eq := s.callTTBFunc(cm.fnc, vx, vy) + s.report(eq, vx, vy) +} + +func (cm comparer) String() string { + return fmt.Sprintf("Comparer(%s)", getFuncName(cm.fnc.Pointer())) +} + +// AllowUnexported returns an Option that forcibly allows operations on +// unexported fields in certain structs, which are specified by passing in a +// value of each struct type. +// +// Users of this option must understand that comparing on unexported fields +// from external packages is not safe since changes in the internal +// implementation of some external package may cause the result of Equal +// to unexpectedly change. However, it may be valid to use this option on types +// defined in an internal package where the semantic meaning of an unexported +// field is in the control of the user. +// +// For some cases, a custom Comparer should be used instead that defines +// equality as a function of the public API of a type rather than the underlying +// unexported implementation. 
+// +// For example, the reflect.Type documentation defines equality to be determined +// by the == operator on the interface (essentially performing a shallow pointer +// comparison) and most attempts to compare *regexp.Regexp types are interested +// in only checking that the regular expression strings are equal. +// Both of these are accomplished using Comparers: +// +// Comparer(func(x, y reflect.Type) bool { return x == y }) +// Comparer(func(x, y *regexp.Regexp) bool { return x.String() == y.String() }) +// +// In other cases, the cmpopts.IgnoreUnexported option can be used to ignore +// all unexported fields on specified struct types. +func AllowUnexported(types ...interface{}) Option { + if !supportAllowUnexported { + panic("AllowUnexported is not supported on purego builds, Google App Engine Standard, or GopherJS") + } + m := make(map[reflect.Type]bool) + for _, typ := range types { + t := reflect.TypeOf(typ) + if t.Kind() != reflect.Struct { + panic(fmt.Sprintf("invalid struct type: %T", typ)) + } + m[t] = true + } + return visibleStructs(m) +} + +type visibleStructs map[reflect.Type]bool + +func (visibleStructs) filter(_ *state, _, _ reflect.Value, _ reflect.Type) applicableOption { + panic("not implemented") +} + +// reporter is an Option that configures how differences are reported. +type reporter interface { + // TODO: Not exported yet. + // + // Perhaps add PushStep and PopStep and change Report to only accept + // a PathStep instead of the full-path? Adding a PushStep and PopStep makes + // it clear that we are traversing the value tree in a depth-first-search + // manner, which has an effect on how values are printed. + + Option + + // Report is called for every comparison made and will be provided with + // the two values being compared, the equality result, and the + // current path in the value tree. It is possible for x or y to be an + // invalid reflect.Value if one of the values is non-existent; + // which is possible with maps and slices. 
+ Report(x, y reflect.Value, eq bool, p Path) +} + +// normalizeOption normalizes the input options such that all Options groups +// are flattened and groups with a single element are reduced to that element. +// Only coreOptions and Options containing coreOptions are allowed. +func normalizeOption(src Option) Option { + switch opts := flattenOptions(nil, Options{src}); len(opts) { + case 0: + return nil + case 1: + return opts[0] + default: + return opts + } +} + +// flattenOptions copies all options in src to dst as a flat list. +// Only coreOptions and Options containing coreOptions are allowed. +func flattenOptions(dst, src Options) Options { + for _, opt := range src { + switch opt := opt.(type) { + case nil: + continue + case Options: + dst = flattenOptions(dst, opt) + case coreOption: + dst = append(dst, opt) + default: + panic(fmt.Sprintf("invalid option type: %T", opt)) + } + } + return dst +} + +// getFuncName returns a short function name from the pointer. +// The string parsing logic works up until Go1.9. +func getFuncName(p uintptr) string { + fnc := runtime.FuncForPC(p) + if fnc == nil { + return "" + } + name := fnc.Name() // E.g., "long/path/name/mypkg.(mytype).(long/path/name/mypkg.myfunc)-fm" + if strings.HasSuffix(name, ")-fm") || strings.HasSuffix(name, ")·fm") { + // Strip the package name from method name. + name = strings.TrimSuffix(name, ")-fm") + name = strings.TrimSuffix(name, ")·fm") + if i := strings.LastIndexByte(name, '('); i >= 0 { + methodName := name[i+1:] // E.g., "long/path/name/mypkg.myfunc" + if j := strings.LastIndexByte(methodName, '.'); j >= 0 { + methodName = methodName[j+1:] // E.g., "myfunc" + } + name = name[:i] + methodName // E.g., "long/path/name/mypkg.(mytype)." + "myfunc" + } + } + if i := strings.LastIndexByte(name, '/'); i >= 0 { + // Strip the package name. 
+ name = name[i+1:] // E.g., "mypkg.(mytype).myfunc" + } + return name +} diff --git a/vendor/github.com/google/go-cmp/cmp/path.go b/vendor/github.com/google/go-cmp/cmp/path.go new file mode 100644 index 000000000000..c08a3cf80d9b --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/path.go @@ -0,0 +1,309 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +package cmp + +import ( + "fmt" + "reflect" + "strings" + "unicode" + "unicode/utf8" +) + +type ( + // Path is a list of PathSteps describing the sequence of operations to get + // from some root type to the current position in the value tree. + // The first Path element is always an operation-less PathStep that exists + // simply to identify the initial type. + // + // When traversing structs with embedded structs, the embedded struct will + // always be accessed as a field before traversing the fields of the + // embedded struct themselves. That is, an exported field from the + // embedded struct will never be accessed directly from the parent struct. + Path []PathStep + + // PathStep is a union-type for specific operations to traverse + // a value's tree structure. Users of this package never need to implement + // these types as values of this type will be returned by this package. + PathStep interface { + String() string + Type() reflect.Type // Resulting type after performing the path step + isPathStep() + } + + // SliceIndex is an index operation on a slice or array at some index Key. + SliceIndex interface { + PathStep + Key() int // May return -1 if in a split state + + // SplitKeys returns the indexes for indexing into slices in the + // x and y values, respectively. These indexes may differ due to the + // insertion or removal of an element in one of the slices, causing + // all of the indexes to be shifted. 
If an index is -1, then that + // indicates that the element does not exist in the associated slice. + // + // Key is guaranteed to return -1 if and only if the indexes returned + // by SplitKeys are not the same. SplitKeys will never return -1 for + // both indexes. + SplitKeys() (x int, y int) + + isSliceIndex() + } + // MapIndex is an index operation on a map at some index Key. + MapIndex interface { + PathStep + Key() reflect.Value + isMapIndex() + } + // TypeAssertion represents a type assertion on an interface. + TypeAssertion interface { + PathStep + isTypeAssertion() + } + // StructField represents a struct field access on a field called Name. + StructField interface { + PathStep + Name() string + Index() int + isStructField() + } + // Indirect represents pointer indirection on the parent type. + Indirect interface { + PathStep + isIndirect() + } + // Transform is a transformation from the parent type to the current type. + Transform interface { + PathStep + Name() string + Func() reflect.Value + + // Option returns the originally constructed Transformer option. + // The == operator can be used to detect the exact option used. + Option() Option + + isTransform() + } +) + +func (pa *Path) push(s PathStep) { + *pa = append(*pa, s) +} + +func (pa *Path) pop() { + *pa = (*pa)[:len(*pa)-1] +} + +// Last returns the last PathStep in the Path. +// If the path is empty, this returns a non-nil PathStep that reports a nil Type. +func (pa Path) Last() PathStep { + return pa.Index(-1) +} + +// Index returns the ith step in the Path and supports negative indexing. +// A negative index starts counting from the tail of the Path such that -1 +// refers to the last step, -2 refers to the second-to-last step, and so on. +// If index is invalid, this returns a non-nil PathStep that reports a nil Type. 
+func (pa Path) Index(i int) PathStep { + if i < 0 { + i = len(pa) + i + } + if i < 0 || i >= len(pa) { + return pathStep{} + } + return pa[i] +} + +// String returns the simplified path to a node. +// The simplified path only contains struct field accesses. +// +// For example: +// MyMap.MySlices.MyField +func (pa Path) String() string { + var ss []string + for _, s := range pa { + if _, ok := s.(*structField); ok { + ss = append(ss, s.String()) + } + } + return strings.TrimPrefix(strings.Join(ss, ""), ".") +} + +// GoString returns the path to a specific node using Go syntax. +// +// For example: +// (*root.MyMap["key"].(*mypkg.MyStruct).MySlices)[2][3].MyField +func (pa Path) GoString() string { + var ssPre, ssPost []string + var numIndirect int + for i, s := range pa { + var nextStep PathStep + if i+1 < len(pa) { + nextStep = pa[i+1] + } + switch s := s.(type) { + case *indirect: + numIndirect++ + pPre, pPost := "(", ")" + switch nextStep.(type) { + case *indirect: + continue // Next step is indirection, so let them batch up + case *structField: + numIndirect-- // Automatic indirection on struct fields + case nil: + pPre, pPost = "", "" // Last step; no need for parenthesis + } + if numIndirect > 0 { + ssPre = append(ssPre, pPre+strings.Repeat("*", numIndirect)) + ssPost = append(ssPost, pPost) + } + numIndirect = 0 + continue + case *transform: + ssPre = append(ssPre, s.trans.name+"(") + ssPost = append(ssPost, ")") + continue + case *typeAssertion: + // As a special-case, elide type assertions on anonymous types + // since they are typically generated dynamically and can be very + // verbose. For example, some transforms return interface{} because + // of Go's lack of generics, but typically take in and return the + // exact same concrete type. 
+ if s.Type().PkgPath() == "" { + continue + } + } + ssPost = append(ssPost, s.String()) + } + for i, j := 0, len(ssPre)-1; i < j; i, j = i+1, j-1 { + ssPre[i], ssPre[j] = ssPre[j], ssPre[i] + } + return strings.Join(ssPre, "") + strings.Join(ssPost, "") +} + +type ( + pathStep struct { + typ reflect.Type + } + + sliceIndex struct { + pathStep + xkey, ykey int + } + mapIndex struct { + pathStep + key reflect.Value + } + typeAssertion struct { + pathStep + } + structField struct { + pathStep + name string + idx int + + // These fields are used for forcibly accessing an unexported field. + // pvx, pvy, and field are only valid if unexported is true. + unexported bool + force bool // Forcibly allow visibility + pvx, pvy reflect.Value // Parent values + field reflect.StructField // Field information + } + indirect struct { + pathStep + } + transform struct { + pathStep + trans *transformer + } +) + +func (ps pathStep) Type() reflect.Type { return ps.typ } +func (ps pathStep) String() string { + if ps.typ == nil { + return "" + } + s := ps.typ.String() + if s == "" || strings.ContainsAny(s, "{}\n") { + return "root" // Type too simple or complex to print + } + return fmt.Sprintf("{%s}", s) +} + +func (si sliceIndex) String() string { + switch { + case si.xkey == si.ykey: + return fmt.Sprintf("[%d]", si.xkey) + case si.ykey == -1: + // [5->?] 
means "I don't know where X[5] went" + return fmt.Sprintf("[%d->?]", si.xkey) + case si.xkey == -1: + // [?->3] means "I don't know where Y[3] came from" + return fmt.Sprintf("[?->%d]", si.ykey) + default: + // [5->3] means "X[5] moved to Y[3]" + return fmt.Sprintf("[%d->%d]", si.xkey, si.ykey) + } +} +func (mi mapIndex) String() string { return fmt.Sprintf("[%#v]", mi.key) } +func (ta typeAssertion) String() string { return fmt.Sprintf(".(%v)", ta.typ) } +func (sf structField) String() string { return fmt.Sprintf(".%s", sf.name) } +func (in indirect) String() string { return "*" } +func (tf transform) String() string { return fmt.Sprintf("%s()", tf.trans.name) } + +func (si sliceIndex) Key() int { + if si.xkey != si.ykey { + return -1 + } + return si.xkey +} +func (si sliceIndex) SplitKeys() (x, y int) { return si.xkey, si.ykey } +func (mi mapIndex) Key() reflect.Value { return mi.key } +func (sf structField) Name() string { return sf.name } +func (sf structField) Index() int { return sf.idx } +func (tf transform) Name() string { return tf.trans.name } +func (tf transform) Func() reflect.Value { return tf.trans.fnc } +func (tf transform) Option() Option { return tf.trans } + +func (pathStep) isPathStep() {} +func (sliceIndex) isSliceIndex() {} +func (mapIndex) isMapIndex() {} +func (typeAssertion) isTypeAssertion() {} +func (structField) isStructField() {} +func (indirect) isIndirect() {} +func (transform) isTransform() {} + +var ( + _ SliceIndex = sliceIndex{} + _ MapIndex = mapIndex{} + _ TypeAssertion = typeAssertion{} + _ StructField = structField{} + _ Indirect = indirect{} + _ Transform = transform{} + + _ PathStep = sliceIndex{} + _ PathStep = mapIndex{} + _ PathStep = typeAssertion{} + _ PathStep = structField{} + _ PathStep = indirect{} + _ PathStep = transform{} +) + +// isExported reports whether the identifier is exported. 
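The sliceIndex rendering rules above ([5], [5->?], [?->3], [5->3]) can be sketched independently of the cmp internals (the function name here is illustrative):

```go
package main

import "fmt"

// sliceStep reproduces the sliceIndex.String formatting rules: xkey and
// ykey are the indexes into the two compared slices, with -1 meaning the
// element is absent on that side.
func sliceStep(xkey, ykey int) string {
	switch {
	case xkey == ykey:
		return fmt.Sprintf("[%d]", xkey) // same position in both slices
	case ykey == -1:
		return fmt.Sprintf("[%d->?]", xkey) // X[xkey] was removed
	case xkey == -1:
		return fmt.Sprintf("[?->%d]", ykey) // Y[ykey] was inserted
	default:
		return fmt.Sprintf("[%d->%d]", xkey, ykey) // element shifted position
	}
}

func main() {
	fmt.Println(sliceStep(5, 5))  // [5]
	fmt.Println(sliceStep(5, -1)) // [5->?]
	fmt.Println(sliceStep(-1, 3)) // [?->3]
	fmt.Println(sliceStep(5, 3))  // [5->3]
}
```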
+func isExported(id string) bool { + r, _ := utf8.DecodeRuneInString(id) + return unicode.IsUpper(r) +} + +// isValid reports whether the identifier is valid. +// Empty and underscore-only strings are not valid. +func isValid(id string) bool { + ok := id != "" && id != "_" + for j, c := range id { + ok = ok && (j > 0 || !unicode.IsDigit(c)) + ok = ok && (c == '_' || unicode.IsLetter(c) || unicode.IsDigit(c)) + } + return ok +} diff --git a/vendor/github.com/google/go-cmp/cmp/reporter.go b/vendor/github.com/google/go-cmp/cmp/reporter.go new file mode 100644 index 000000000000..20e9f18e0dd5 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/reporter.go @@ -0,0 +1,53 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +package cmp + +import ( + "fmt" + "reflect" + "strings" + + "github.com/google/go-cmp/cmp/internal/value" +) + +type defaultReporter struct { + Option + diffs []string // List of differences, possibly truncated + ndiffs int // Total number of differences + nbytes int // Number of bytes in diffs + nlines int // Number of lines in diffs +} + +var _ reporter = (*defaultReporter)(nil) + +func (r *defaultReporter) Report(x, y reflect.Value, eq bool, p Path) { + if eq { + return // Ignore equal results + } + const maxBytes = 4096 + const maxLines = 256 + r.ndiffs++ + if r.nbytes < maxBytes && r.nlines < maxLines { + sx := value.Format(x, value.FormatConfig{UseStringer: true}) + sy := value.Format(y, value.FormatConfig{UseStringer: true}) + if sx == sy { + // Unhelpful output, so use more exact formatting. 
+ sx = value.Format(x, value.FormatConfig{PrintPrimitiveType: true}) + sy = value.Format(y, value.FormatConfig{PrintPrimitiveType: true}) + } + s := fmt.Sprintf("%#v:\n\t-: %s\n\t+: %s\n", p, sx, sy) + r.diffs = append(r.diffs, s) + r.nbytes += len(s) + r.nlines += strings.Count(s, "\n") + } +} + +func (r *defaultReporter) String() string { + s := strings.Join(r.diffs, "") + if r.ndiffs == len(r.diffs) { + return s + } + return fmt.Sprintf("%s... %d more differences ...", s, r.ndiffs-len(r.diffs)) +} diff --git a/vendor/github.com/google/go-cmp/cmp/unsafe_panic.go b/vendor/github.com/google/go-cmp/cmp/unsafe_panic.go new file mode 100644 index 000000000000..d1518eb3a8c7 --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/unsafe_panic.go @@ -0,0 +1,15 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// +build purego appengine js + +package cmp + +import "reflect" + +const supportAllowUnexported = false + +func unsafeRetrieveField(reflect.Value, reflect.StructField) reflect.Value { + panic("unsafeRetrieveField is not implemented") +} diff --git a/vendor/github.com/google/go-cmp/cmp/unsafe_reflect.go b/vendor/github.com/google/go-cmp/cmp/unsafe_reflect.go new file mode 100644 index 000000000000..579b65507f6b --- /dev/null +++ b/vendor/github.com/google/go-cmp/cmp/unsafe_reflect.go @@ -0,0 +1,23 @@ +// Copyright 2017, The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE.md file. + +// +build !purego,!appengine,!js + +package cmp + +import ( + "reflect" + "unsafe" +) + +const supportAllowUnexported = true + +// unsafeRetrieveField uses unsafe to forcibly retrieve any field from a struct +// such that the value has read-write permissions. +// +// The parent struct, v, must be addressable, while f must be a StructField +// describing the field to retrieve. 
+func unsafeRetrieveField(v reflect.Value, f reflect.StructField) reflect.Value { + return reflect.NewAt(f.Type, unsafe.Pointer(v.UnsafeAddr()+f.Offset)).Elem() +} diff --git a/vendor/github.com/opentracing/opentracing-go/mocktracer/mocklogrecord.go b/vendor/github.com/opentracing/opentracing-go/mocktracer/mocklogrecord.go new file mode 100644 index 000000000000..2ce96d9d3887 --- /dev/null +++ b/vendor/github.com/opentracing/opentracing-go/mocktracer/mocklogrecord.go @@ -0,0 +1,105 @@ +package mocktracer + +import ( + "fmt" + "reflect" + "time" + + "github.com/opentracing/opentracing-go/log" +) + +// MockLogRecord represents data logged to a Span via Span.LogFields or +// Span.LogKV. +type MockLogRecord struct { + Timestamp time.Time + Fields []MockKeyValue +} + +// MockKeyValue represents a single key:value pair. +type MockKeyValue struct { + Key string + + // All MockLogRecord values are coerced to strings via fmt.Sprint(), though + // we retain their type separately. + ValueKind reflect.Kind + ValueString string +} + +// EmitString belongs to the log.Encoder interface +func (m *MockKeyValue) EmitString(key, value string) { + m.Key = key + m.ValueKind = reflect.TypeOf(value).Kind() + m.ValueString = fmt.Sprint(value) +} + +// EmitBool belongs to the log.Encoder interface +func (m *MockKeyValue) EmitBool(key string, value bool) { + m.Key = key + m.ValueKind = reflect.TypeOf(value).Kind() + m.ValueString = fmt.Sprint(value) +} + +// EmitInt belongs to the log.Encoder interface +func (m *MockKeyValue) EmitInt(key string, value int) { + m.Key = key + m.ValueKind = reflect.TypeOf(value).Kind() + m.ValueString = fmt.Sprint(value) +} + +// EmitInt32 belongs to the log.Encoder interface +func (m *MockKeyValue) EmitInt32(key string, value int32) { + m.Key = key + m.ValueKind = reflect.TypeOf(value).Kind() + m.ValueString = fmt.Sprint(value) +} + +// EmitInt64 belongs to the log.Encoder interface +func (m *MockKeyValue) EmitInt64(key string, value int64) { + m.Key = key + 
m.ValueKind = reflect.TypeOf(value).Kind() + m.ValueString = fmt.Sprint(value) +} + +// EmitUint32 belongs to the log.Encoder interface +func (m *MockKeyValue) EmitUint32(key string, value uint32) { + m.Key = key + m.ValueKind = reflect.TypeOf(value).Kind() + m.ValueString = fmt.Sprint(value) +} + +// EmitUint64 belongs to the log.Encoder interface +func (m *MockKeyValue) EmitUint64(key string, value uint64) { + m.Key = key + m.ValueKind = reflect.TypeOf(value).Kind() + m.ValueString = fmt.Sprint(value) +} + +// EmitFloat32 belongs to the log.Encoder interface +func (m *MockKeyValue) EmitFloat32(key string, value float32) { + m.Key = key + m.ValueKind = reflect.TypeOf(value).Kind() + m.ValueString = fmt.Sprint(value) +} + +// EmitFloat64 belongs to the log.Encoder interface +func (m *MockKeyValue) EmitFloat64(key string, value float64) { + m.Key = key + m.ValueKind = reflect.TypeOf(value).Kind() + m.ValueString = fmt.Sprint(value) +} + +// EmitObject belongs to the log.Encoder interface +func (m *MockKeyValue) EmitObject(key string, value interface{}) { + m.Key = key + m.ValueKind = reflect.TypeOf(value).Kind() + m.ValueString = fmt.Sprint(value) +} + +// EmitLazyLogger belongs to the log.Encoder interface +func (m *MockKeyValue) EmitLazyLogger(value log.LazyLogger) { + var meta MockKeyValue + value(&meta) + m.Key = meta.Key + m.ValueKind = meta.ValueKind + m.ValueString = meta.ValueString +} diff --git a/vendor/github.com/opentracing/opentracing-go/mocktracer/mockspan.go b/vendor/github.com/opentracing/opentracing-go/mocktracer/mockspan.go new file mode 100644 index 000000000000..69defda23e51 --- /dev/null +++ b/vendor/github.com/opentracing/opentracing-go/mocktracer/mockspan.go @@ -0,0 +1,282 @@ +package mocktracer + +import ( + "fmt" + "sync" + "sync/atomic" + "time" + + "github.com/opentracing/opentracing-go" + "github.com/opentracing/opentracing-go/ext" + "github.com/opentracing/opentracing-go/log" +) + +// MockSpanContext is an opentracing.SpanContext 
implementation. +// +// It is entirely unsuitable for production use, but appropriate for tests +// that want to verify tracing behavior in other frameworks/applications. +// +// By default all spans have Sampled=true flag, unless {"sampling.priority": 0} +// tag is set. +type MockSpanContext struct { + TraceID int + SpanID int + Sampled bool + Baggage map[string]string +} + +var mockIDSource = uint32(42) + +func nextMockID() int { + return int(atomic.AddUint32(&mockIDSource, 1)) +} + +// ForeachBaggageItem belongs to the SpanContext interface +func (c MockSpanContext) ForeachBaggageItem(handler func(k, v string) bool) { + for k, v := range c.Baggage { + if !handler(k, v) { + break + } + } +} + +// WithBaggageItem creates a new context with an extra baggage item. +func (c MockSpanContext) WithBaggageItem(key, value string) MockSpanContext { + var newBaggage map[string]string + if c.Baggage == nil { + newBaggage = map[string]string{key: value} + } else { + newBaggage = make(map[string]string, len(c.Baggage)+1) + for k, v := range c.Baggage { + newBaggage[k] = v + } + newBaggage[key] = value + } + // Use positional parameters so the compiler will help catch new fields. + return MockSpanContext{c.TraceID, c.SpanID, c.Sampled, newBaggage} +} + +// MockSpan is an opentracing.Span implementation that exports its internal +// state for testing purposes. +type MockSpan struct { + sync.RWMutex + + ParentID int + + OperationName string + StartTime time.Time + FinishTime time.Time + + // All of the below are protected by the embedded RWMutex. 
+ SpanContext MockSpanContext + tags map[string]interface{} + logs []MockLogRecord + tracer *MockTracer +} + +func newMockSpan(t *MockTracer, name string, opts opentracing.StartSpanOptions) *MockSpan { + tags := opts.Tags + if tags == nil { + tags = map[string]interface{}{} + } + traceID := nextMockID() + parentID := int(0) + var baggage map[string]string + sampled := true + if len(opts.References) > 0 { + traceID = opts.References[0].ReferencedContext.(MockSpanContext).TraceID + parentID = opts.References[0].ReferencedContext.(MockSpanContext).SpanID + sampled = opts.References[0].ReferencedContext.(MockSpanContext).Sampled + baggage = opts.References[0].ReferencedContext.(MockSpanContext).Baggage + } + spanContext := MockSpanContext{traceID, nextMockID(), sampled, baggage} + startTime := opts.StartTime + if startTime.IsZero() { + startTime = time.Now() + } + return &MockSpan{ + ParentID: parentID, + OperationName: name, + StartTime: startTime, + tags: tags, + logs: []MockLogRecord{}, + SpanContext: spanContext, + + tracer: t, + } +} + +// Tags returns a copy of tags accumulated by the span so far +func (s *MockSpan) Tags() map[string]interface{} { + s.RLock() + defer s.RUnlock() + tags := make(map[string]interface{}) + for k, v := range s.tags { + tags[k] = v + } + return tags +} + +// Tag returns a single tag +func (s *MockSpan) Tag(k string) interface{} { + s.RLock() + defer s.RUnlock() + return s.tags[k] +} + +// Logs returns a copy of logs accumulated in the span so far +func (s *MockSpan) Logs() []MockLogRecord { + s.RLock() + defer s.RUnlock() + logs := make([]MockLogRecord, len(s.logs)) + copy(logs, s.logs) + return logs +} + +// Context belongs to the Span interface +func (s *MockSpan) Context() opentracing.SpanContext { + return s.SpanContext +} + +// SetTag belongs to the Span interface +func (s *MockSpan) SetTag(key string, value interface{}) opentracing.Span { + s.Lock() + defer s.Unlock() + if key == string(ext.SamplingPriority) { + if v, ok := 
value.(uint16); ok { + s.SpanContext.Sampled = v > 0 + return s + } + if v, ok := value.(int); ok { + s.SpanContext.Sampled = v > 0 + return s + } + } + s.tags[key] = value + return s +} + +// SetBaggageItem belongs to the Span interface +func (s *MockSpan) SetBaggageItem(key, val string) opentracing.Span { + s.Lock() + defer s.Unlock() + s.SpanContext = s.SpanContext.WithBaggageItem(key, val) + return s +} + +// BaggageItem belongs to the Span interface +func (s *MockSpan) BaggageItem(key string) string { + s.RLock() + defer s.RUnlock() + return s.SpanContext.Baggage[key] +} + +// Finish belongs to the Span interface +func (s *MockSpan) Finish() { + s.Lock() + s.FinishTime = time.Now() + s.Unlock() + s.tracer.recordSpan(s) +} + +// FinishWithOptions belongs to the Span interface +func (s *MockSpan) FinishWithOptions(opts opentracing.FinishOptions) { + s.Lock() + s.FinishTime = opts.FinishTime + s.Unlock() + + // Handle any late-bound LogRecords. + for _, lr := range opts.LogRecords { + s.logFieldsWithTimestamp(lr.Timestamp, lr.Fields...) + } + // Handle (deprecated) BulkLogData. + for _, ld := range opts.BulkLogData { + if ld.Payload != nil { + s.logFieldsWithTimestamp( + ld.Timestamp, + log.String("event", ld.Event), + log.Object("payload", ld.Payload)) + } else { + s.logFieldsWithTimestamp( + ld.Timestamp, + log.String("event", ld.Event)) + } + } + + s.tracer.recordSpan(s) +} + +// String allows printing span for debugging +func (s *MockSpan) String() string { + return fmt.Sprintf( + "traceId=%d, spanId=%d, parentId=%d, sampled=%t, name=%s", + s.SpanContext.TraceID, s.SpanContext.SpanID, s.ParentID, + s.SpanContext.Sampled, s.OperationName) +} + +// LogFields belongs to the Span interface +func (s *MockSpan) LogFields(fields ...log.Field) { + s.logFieldsWithTimestamp(time.Now(), fields...) 
+} + +// The caller MUST NOT hold s.Lock +func (s *MockSpan) logFieldsWithTimestamp(ts time.Time, fields ...log.Field) { + lr := MockLogRecord{ + Timestamp: ts, + Fields: make([]MockKeyValue, len(fields)), + } + for i, f := range fields { + outField := &(lr.Fields[i]) + f.Marshal(outField) + } + + s.Lock() + defer s.Unlock() + s.logs = append(s.logs, lr) +} + +// LogKV belongs to the Span interface. +// +// This implementation coerces all "values" to strings, though that is not +// something all implementations need to do. Indeed, a motivated person can and +// probably should have this do a typed switch on the values. +func (s *MockSpan) LogKV(keyValues ...interface{}) { + if len(keyValues)%2 != 0 { + s.LogFields(log.Error(fmt.Errorf("Non-even keyValues len: %v", len(keyValues)))) + return + } + fields, err := log.InterleavedKVToFields(keyValues...) + if err != nil { + s.LogFields(log.Error(err), log.String("function", "LogKV")) + return + } + s.LogFields(fields...) +} + +// LogEvent belongs to the Span interface +func (s *MockSpan) LogEvent(event string) { + s.LogFields(log.String("event", event)) +} + +// LogEventWithPayload belongs to the Span interface +func (s *MockSpan) LogEventWithPayload(event string, payload interface{}) { + s.LogFields(log.String("event", event), log.Object("payload", payload)) +} + +// Log belongs to the Span interface +func (s *MockSpan) Log(data opentracing.LogData) { + panic("MockSpan.Log() no longer supported") +} + +// SetOperationName belongs to the Span interface +func (s *MockSpan) SetOperationName(operationName string) opentracing.Span { + s.Lock() + defer s.Unlock() + s.OperationName = operationName + return s +} + +// Tracer belongs to the Span interface +func (s *MockSpan) Tracer() opentracing.Tracer { + return s.tracer +} diff --git a/vendor/github.com/opentracing/opentracing-go/mocktracer/mocktracer.go b/vendor/github.com/opentracing/opentracing-go/mocktracer/mocktracer.go new file mode 100644 index
000000000000..a74c1458abbf --- /dev/null +++ b/vendor/github.com/opentracing/opentracing-go/mocktracer/mocktracer.go @@ -0,0 +1,105 @@ +package mocktracer + +import ( + "sync" + + "github.com/opentracing/opentracing-go" +) + +// New returns a MockTracer opentracing.Tracer implementation that's intended +// to facilitate tests of OpenTracing instrumentation. +func New() *MockTracer { + t := &MockTracer{ + finishedSpans: []*MockSpan{}, + injectors: make(map[interface{}]Injector), + extractors: make(map[interface{}]Extractor), + } + + // register default injectors/extractors + textPropagator := new(TextMapPropagator) + t.RegisterInjector(opentracing.TextMap, textPropagator) + t.RegisterExtractor(opentracing.TextMap, textPropagator) + + httpPropagator := &TextMapPropagator{HTTPHeaders: true} + t.RegisterInjector(opentracing.HTTPHeaders, httpPropagator) + t.RegisterExtractor(opentracing.HTTPHeaders, httpPropagator) + + return t +} + +// MockTracer is only intended for testing OpenTracing instrumentation. +// +// It is entirely unsuitable for production use, but appropriate for tests +// that want to verify tracing behavior in other frameworks/applications. +type MockTracer struct { + sync.RWMutex + finishedSpans []*MockSpan + injectors map[interface{}]Injector + extractors map[interface{}]Extractor +} + +// FinishedSpans returns all spans that have been Finish()'ed since the +// MockTracer was constructed or since the last call to its Reset() method. +func (t *MockTracer) FinishedSpans() []*MockSpan { + t.RLock() + defer t.RUnlock() + spans := make([]*MockSpan, len(t.finishedSpans)) + copy(spans, t.finishedSpans) + return spans +} + +// Reset clears the internally accumulated finished spans. Note that any +// extant MockSpans will still append to finishedSpans when they Finish(), +// even after a call to Reset(). +func (t *MockTracer) Reset() { + t.Lock() + defer t.Unlock() + t.finishedSpans = []*MockSpan{} +} + +// StartSpan belongs to the Tracer interface. 
+func (t *MockTracer) StartSpan(operationName string, opts ...opentracing.StartSpanOption) opentracing.Span { + sso := opentracing.StartSpanOptions{} + for _, o := range opts { + o.Apply(&sso) + } + return newMockSpan(t, operationName, sso) +} + +// RegisterInjector registers injector for given format +func (t *MockTracer) RegisterInjector(format interface{}, injector Injector) { + t.injectors[format] = injector +} + +// RegisterExtractor registers extractor for given format +func (t *MockTracer) RegisterExtractor(format interface{}, extractor Extractor) { + t.extractors[format] = extractor +} + +// Inject belongs to the Tracer interface. +func (t *MockTracer) Inject(sm opentracing.SpanContext, format interface{}, carrier interface{}) error { + spanContext, ok := sm.(MockSpanContext) + if !ok { + return opentracing.ErrInvalidCarrier + } + injector, ok := t.injectors[format] + if !ok { + return opentracing.ErrUnsupportedFormat + } + return injector.Inject(spanContext, carrier) +} + +// Extract belongs to the Tracer interface. 
+func (t *MockTracer) Extract(format interface{}, carrier interface{}) (opentracing.SpanContext, error) { + extractor, ok := t.extractors[format] + if !ok { + return nil, opentracing.ErrUnsupportedFormat + } + return extractor.Extract(carrier) +} + +func (t *MockTracer) recordSpan(span *MockSpan) { + t.Lock() + defer t.Unlock() + t.finishedSpans = append(t.finishedSpans, span) +} diff --git a/vendor/github.com/opentracing/opentracing-go/mocktracer/propagation.go b/vendor/github.com/opentracing/opentracing-go/mocktracer/propagation.go new file mode 100644 index 000000000000..8364f1d18252 --- /dev/null +++ b/vendor/github.com/opentracing/opentracing-go/mocktracer/propagation.go @@ -0,0 +1,120 @@ +package mocktracer + +import ( + "fmt" + "net/url" + "strconv" + "strings" + + "github.com/opentracing/opentracing-go" +) + +const mockTextMapIdsPrefix = "mockpfx-ids-" +const mockTextMapBaggagePrefix = "mockpfx-baggage-" + +var emptyContext = MockSpanContext{} + +// Injector is responsible for injecting SpanContext instances in a manner suitable +// for propagation via a format-specific "carrier" object. Typically the +// injection will take place across an RPC boundary, but message queues and +// other IPC mechanisms are also reasonable places to use an Injector. +type Injector interface { + // Inject takes `SpanContext` and injects it into `carrier`. The actual type + // of `carrier` depends on the `format` passed to `Tracer.Inject()`. + // + // Implementations may return opentracing.ErrInvalidCarrier or any other + // implementation-specific error if injection fails. + Inject(ctx MockSpanContext, carrier interface{}) error +} + +// Extractor is responsible for extracting SpanContext instances from a +// format-specific "carrier" object. Typically the extraction will take place +// on the server side of an RPC boundary, but message queues and other IPC +// mechanisms are also reasonable places to use an Extractor. 
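The Injector/Extractor round-trip described above can be sketched without the opentracing types; all names below are illustrative stand-ins, not part of mocktracer:

```go
package main

import (
	"fmt"
	"strconv"
)

// spanCtx and mapCarrier are toy stand-ins for MockSpanContext and an
// opentracing TextMapWriter/TextMapReader carrier.
type spanCtx struct{ TraceID, SpanID int }

type mapCarrier map[string]string

// inject writes the span context into the carrier, as an Injector would.
func inject(c spanCtx, carrier mapCarrier) {
	carrier["traceid"] = strconv.Itoa(c.TraceID)
	carrier["spanid"] = strconv.Itoa(c.SpanID)
}

// extract rebuilds the span context from the carrier, as an Extractor
// would, failing if either id is missing or malformed.
func extract(carrier mapCarrier) (spanCtx, error) {
	t, err := strconv.Atoi(carrier["traceid"])
	if err != nil {
		return spanCtx{}, err
	}
	s, err := strconv.Atoi(carrier["spanid"])
	if err != nil {
		return spanCtx{}, err
	}
	return spanCtx{t, s}, nil
}

func main() {
	carrier := mapCarrier{}
	inject(spanCtx{TraceID: 42, SpanID: 7}, carrier)
	out, _ := extract(carrier)
	fmt.Println(out.TraceID, out.SpanID) // 42 7
}
```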
+type Extractor interface { + // Extract decodes a SpanContext instance from the given `carrier`, + // or (nil, opentracing.ErrSpanContextNotFound) if no context could + // be found in the `carrier`. + Extract(carrier interface{}) (MockSpanContext, error) +} + +// TextMapPropagator implements Injector/Extractor for TextMap and HTTPHeaders formats. +type TextMapPropagator struct { + HTTPHeaders bool +} + +// Inject implements the Injector interface +func (t *TextMapPropagator) Inject(spanContext MockSpanContext, carrier interface{}) error { + writer, ok := carrier.(opentracing.TextMapWriter) + if !ok { + return opentracing.ErrInvalidCarrier + } + // Ids: + writer.Set(mockTextMapIdsPrefix+"traceid", strconv.Itoa(spanContext.TraceID)) + writer.Set(mockTextMapIdsPrefix+"spanid", strconv.Itoa(spanContext.SpanID)) + writer.Set(mockTextMapIdsPrefix+"sampled", fmt.Sprint(spanContext.Sampled)) + // Baggage: + for baggageKey, baggageVal := range spanContext.Baggage { + safeVal := baggageVal + if t.HTTPHeaders { + safeVal = url.QueryEscape(baggageVal) + } + writer.Set(mockTextMapBaggagePrefix+baggageKey, safeVal) + } + return nil +} + +// Extract implements the Extractor interface +func (t *TextMapPropagator) Extract(carrier interface{}) (MockSpanContext, error) { + reader, ok := carrier.(opentracing.TextMapReader) + if !ok { + return emptyContext, opentracing.ErrInvalidCarrier + } + rval := MockSpanContext{0, 0, true, nil} + err := reader.ForeachKey(func(key, val string) error { + lowerKey := strings.ToLower(key) + switch { + case lowerKey == mockTextMapIdsPrefix+"traceid": + // Ids: + i, err := strconv.Atoi(val) + if err != nil { + return err + } + rval.TraceID = i + case lowerKey == mockTextMapIdsPrefix+"spanid": + // Ids: + i, err := strconv.Atoi(val) + if err != nil { + return err + } + rval.SpanID = i + case lowerKey == mockTextMapIdsPrefix+"sampled": + b, err := strconv.ParseBool(val) + if err != nil { + return err + } + rval.Sampled = b + case 
strings.HasPrefix(lowerKey, mockTextMapBaggagePrefix): + // Baggage: + if rval.Baggage == nil { + rval.Baggage = make(map[string]string) + } + safeVal := val + if t.HTTPHeaders { + // unescape errors are ignored, nothing can be done + if rawVal, err := url.QueryUnescape(val); err == nil { + safeVal = rawVal + } + } + rval.Baggage[lowerKey[len(mockTextMapBaggagePrefix):]] = safeVal + } + return nil + }) + if rval.TraceID == 0 || rval.SpanID == 0 { + return emptyContext, opentracing.ErrSpanContextNotFound + } + if err != nil { + return emptyContext, err + } + return rval, nil +} diff --git a/vendor/istio.io/api/.gitattributes b/vendor/istio.io/api/.gitattributes new file mode 100644 index 000000000000..1db2bfabfeb4 --- /dev/null +++ b/vendor/istio.io/api/.gitattributes @@ -0,0 +1,2 @@ +*.pb.go linguist-generated=true +*.pb.html linguist-generated=true diff --git a/vendor/istio.io/api/mcp/v1alpha1/istio.mcp.v1alpha1.pb.html b/vendor/istio.io/api/mcp/v1alpha1/istio.mcp.v1alpha1.pb.html index 48999418da22..bcee77a76a38 100644 --- a/vendor/istio.io/api/mcp/v1alpha1/istio.mcp.v1alpha1.pb.html +++ b/vendor/istio.io/api/mcp/v1alpha1/istio.mcp.v1alpha1.pb.html @@ -2,7 +2,7 @@ title: istio.mcp.v1alpha1 layout: protoc-gen-docs generator: protoc-gen-docs -number_of_entries: 9 +number_of_entries: 13 ---

This package defines the common, core types used by the Mesh Configuration Protocol.

@@ -27,76 +27,31 @@

AggregatedMeshConfigService

scalability of MCP resources.

-

Types

-

Client

+

ResourceSink

-

Identifies a specific MCP client instance. The client identifier is -presented to the management server, which may use this identifier -to distinguish per client configuration for serving. This -information is not authoriative. Authoritative identity should come -from the underlying transport layer (e.g. rpc credentials).

+

Service where the source is the gRPC client. The source is responsible for +initiating connections and opening streams.

-
Suported Type URLsSuported Collections
{{$value}}
TimeTypeVersionCollection Acked Nonce
{{$entry.Time.Format "2006-01-02T15:04:05Z07:00"}}{{$entry.Request.TypeUrl}}{{$entry.Request.VersionInfo}}{{$entry.Request.Collection}} {{$entry.Acked}} {{$entry.Request.ResponseNonce}}
- - - - - - - - - - - - - - - - - - - -
FieldTypeDescription
idstring -

An opaque identifier for the MCP client.

- -
metadatagoogle.protobuf.Struct -

Opaque metadata extending the client identifier.

+
rpc EstablishResourceStream(Resources) returns (RequestResources)
+
+

The source, acting as gRPC client, establishes a new resource stream +with the sink. The sink sends RequestResources message to and +receives Resources messages from the source.

-
-

Envelope

+

ResourceSource

-

Envelope for a configuration resource as transferred via the Mesh Configuration Protocol. -Each envelope is made up of common metadata, and a type-specific resource payload.

- - - - - - - - - - - - - - - - - - - - - -
FieldTypeDescription
metadataMetadata -

Common metadata describing the resource.

+

Service where the sink is the gRPC client. The sink is responsible for +initiating connections and opening streams.

-
resourcegoogle.protobuf.Any -

The resource itself.

+
rpc EstablishResourceStream(RequestResources) returns (Resources)
+
+

The sink, acting as gRPC client, establishes a new resource stream +with the source. The sink sends RequestResources message to +and receives Resources messages from the source.

-
+

Types

IncrementalMeshConfigRequest

IncrementalMeshConfigRequest are be sent in 2 situations:

@@ -118,11 +73,11 @@

IncrementalMeshConfigRequest

- -client -Client + +sinkNode +SinkNode -

The client making the request.

+

The sink node making the request.

@@ -203,11 +158,11 @@

IncrementalMeshConfigResponse

- -envelopes -Envelope[] + +resources +Resource[] -

The response resources wrapped in the common MCP Envelope +

The response resources wrapped in the common MCP Resource message. These are typed resources that match the type url in the IncrementalMeshConfigRequest.

@@ -265,11 +220,11 @@

MeshConfigRequest

- -client -Client + +sinkNode +SinkNode -

The client making the request.

+

The sink node making the request.

@@ -331,11 +286,11 @@

MeshConfigResponse

- -envelopes -Envelope[] + +resources +Resource[] -

The response resources wrapped in the common MCP Envelope +

The response resources wrapped in the common MCP Resource message.

@@ -344,9 +299,9 @@

MeshConfigResponse

typeUrl string -

Type URL for resources wrapped in the provided envelope(s). This +

Type URL for resources wrapped in the provided resources(s). This must be consistent with the type_url in the wrapper messages if -envelopes is non-empty.

+resources is non-empty.

@@ -385,10 +340,38 @@

Metadata

name string -

The name of the resource. It is unique within the context of a -resource type and the origin server of the resource. The resource -type is identified by the TypeUrl of the resource field of the -Envelope message.

+

Fully qualified name of the resource. Unique in context of a collection.

+ +

The fully qualified name consists of a directory and basename. The directory identifies the resource's location in a resource hierarchy. The basename identifies the specific resource name within the context of that directory.

+ +

The directory and basename are composed of one or more segments. Segments must be valid DNS labels. “/” is the delimiter between segments.
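A segment check along these lines can be written with a small regular expression. This is an illustrative sketch assuming the RFC 1123 label rules (lowercase alphanumerics and hyphens, 1-63 characters, starting and ending with an alphanumeric); the MCP implementation may validate differently.

```go
package main

import (
	"fmt"
	"regexp"
)

// labelRE matches an RFC 1123 DNS label: lowercase alphanumerics and
// '-', 1-63 chars, starting and ending with an alphanumeric.
var labelRE = regexp.MustCompile(`^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$`)

// isDNSLabel reports whether a single name segment is a valid label.
func isDNSLabel(s string) bool { return labelRE.MatchString(s) }

func main() {
	fmt.Println(isDNSLabel("istio-system")) // true
	fmt.Println(isDNSLabel("-bad"))         // false
}
```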

+ +

The rightmost segment is the basename. All segments to the left of the basename form the directory. Segments moving towards the left represent higher positions in the resource hierarchy, similar to reverse DNS notation, e.g.

+ +

/&lt;segment&gt;/&lt;segment&gt;/&lt;segment&gt;/&lt;name&gt;

+ +

An empty directory indicates a resource that is located at the root of the hierarchy, e.g.

+ +

/&lt;name&gt;

+ +

On Kubernetes the resource hierarchy has two levels: namespaces and cluster-scoped (i.e. global).

+ +

A namespaced resource's fully qualified name is of the form:

+ +

"/&lt;namespace&gt;/&lt;name&gt;"

+ +

Cluster-scoped resources are located at the root of the hierarchy and are of the form:

+ +

"/&lt;name&gt;"
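The directory/basename split described above can be sketched as a small helper. This is illustrative only, not part of the MCP API; the name forms and the leading-slash convention are assumptions based on the description above.

```go
package main

import (
	"fmt"
	"strings"
)

// splitName splits a fully qualified resource name into its directory
// and basename. The rightmost segment is the basename; everything to
// its left is the directory. An empty directory means the resource
// lives at the root of the hierarchy.
func splitName(fqn string) (dir, base string) {
	fqn = strings.TrimPrefix(fqn, "/")
	if i := strings.LastIndex(fqn, "/"); i >= 0 {
		return fqn[:i], fqn[i+1:]
	}
	return "", fqn
}

func main() {
	// Kubernetes-style namespaced resource: directory is the namespace.
	dir, base := splitName("/istio-system/route-1")
	fmt.Println(dir, base) // istio-system route-1

	// Cluster-scoped resource at the root: empty directory.
	dir, base = splitName("/cluster-wide")
	fmt.Printf("%q %q\n", dir, base) // "" "cluster-wide"
}
```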

@@ -404,8 +387,246 @@

Metadata

version string -

The resource level version. It allows MCP to track the state of -individual resources.

+

Resource version. This is used to determine when resources change across resource updates. It should be treated as opaque by consumers/sinks.

+ + + + +labels +map<string, string> + +

Map of string keys and values that can be used to organize and categorize +resources within a collection.

+ + + + +annotations +map<string, string> + +

Map of string keys and values that can be used by source and sink to communicate +arbitrary metadata about this resource.

+ + + + + +
+

RequestResources

+
+

A RequestResources can be sent in two situations:

+ +

Either as the initial message in an MCP bidirectional change stream, or as an ACK or NACK response to a previous Resources message. In the ACK/NACK case the response_nonce is set to the nonce value in the Resources message; ACK versus NACK is determined by the presence of error_detail.

+ +
  • ACK (nonce != “”, error_details == nil)
  • NACK (nonce != “”, error_details != nil)
  • New/Update request (nonce == “”, error_details ignored)
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldTypeDescription
sinkNodeSinkNode +

The sink node making the request.

+ +
collectionstring +

Type of resource collection that is being requested, e.g.

+ +

istio/networking/v1alpha3/VirtualService
k8s/&lt;apiVersion&gt;/&lt;kind&gt;

+ +
initialResourceVersionsmap<string, string> +

When the RequestResources is the first in a stream, initial_resource_versions must be populated. Otherwise, initial_resource_versions must be omitted. The keys are the resource names of the MCP resources known to the MCP client. The values in the map are the associated resource level version info.

+ +
responseNoncestring +

When the RequestResources is an ACK or NACK message in response to a previously received Resources message, the response_nonce must be the nonce of that Resources message. Otherwise response_nonce must be omitted.

+ +
errorDetailgoogle.rpc.Status +

This is populated when the previously received resources could not be applied. The message field in error_details provides the source internal error related to the failure.

+ +
+
+

Resource

+
+

Resource as transferred via the Mesh Configuration Protocol. Each resource is made up of common metadata and a type-specific resource payload.

+ + + + + + + + + + + + + + + + + + + + + +
FieldTypeDescription
metadataMetadata +

Common metadata describing the resource.

+ +
bodygoogle.protobuf.Any +

The primary payload for the resource.

+ +
+
+

Resources

+
+

Resources do not need to include a full snapshot of the tracked resources. Instead, they are a diff against the state of an MCP client. Per-resource versions allow sources and sinks to track state at the resource granularity. An MCP incremental session is always in the context of a gRPC bidirectional stream. This allows the MCP source to keep track of the state of each MCP sink connected to it.

+ +

In Incremental MCP the nonce field is required and is used to pair Resources to a RequestResources ACK or NACK.
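A sink-side update for one such diff can be sketched as follows. The types and names here are simplified stand-ins (tracking only per-resource versions rather than full payloads); `apply` is an illustrative helper, not part of the MCP API.

```go
package main

import "fmt"

// resource is a stand-in carrying just the name and version from the
// Resource metadata.
type resource struct{ Name, Version string }

// apply performs a sink-side state update for one Resources message:
// upsert the carried resources, then drop the removed names. After
// applying, the sink would echo the message's nonce as response_nonce
// in its ACK or NACK.
func apply(state map[string]string, updated []resource, removed []string) {
	for _, r := range updated {
		state[r.Name] = r.Version // add or update at resource granularity
	}
	for _, name := range removed {
		delete(state, name) // removals for unknown names are a no-op
	}
}

func main() {
	state := map[string]string{"default/a": "v1", "default/b": "v1"}
	apply(state,
		[]resource{{Name: "default/a", Version: "v2"}}, // changed
		[]string{"default/b", "default/missing"},       // deleted; missing is ignored
	)
	fmt.Println(state["default/a"], len(state)) // v2 1
}
```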

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
FieldTypeDescription
systemVersionInfostring +

The version of the response data (used for debugging).

+ +
collectionstring +

Type of resource collection that is being requested, e.g.

+ +

istio/networking/v1alpha3/VirtualService
k8s/&lt;apiVersion&gt;/&lt;kind&gt;

+ +
resourcesResource[] +

The response resources wrapped in the common MCP Resource message. +These are typed resources that match the type url in the +RequestResources message.

+ +
removedResourcesstring[] +

Names of resources that have been deleted and are to be removed from the MCP sink node. Removals referring to resources the sink does not know about can be ignored.

+ +
noncestring +

Required. The nonce provides a way for a subsequent RequestResources to uniquely reference this Resources message.

+ +
+
+

SinkNode

+
+

Identifies a specific MCP sink node instance. The node identifier is presented to the resource source, which may use this identifier to distinguish per-sink configuration for serving. This information is not authoritative. Authoritative identity should come from the underlying transport layer (e.g. rpc credentials).

+ + + + + + + + + + + + + + + + + + + diff --git a/vendor/istio.io/api/mcp/v1alpha1/mcp.pb.go b/vendor/istio.io/api/mcp/v1alpha1/mcp.pb.go index b8b77efd4a75..764783c4da05 100644 --- a/vendor/istio.io/api/mcp/v1alpha1/mcp.pb.go +++ b/vendor/istio.io/api/mcp/v1alpha1/mcp.pb.go @@ -1,12 +1,30 @@ // Code generated by protoc-gen-gogo. DO NOT EDIT. // source: mcp/v1alpha1/mcp.proto +/* + Package v1alpha1 is a generated protocol buffer package. + + It is generated from these files: + mcp/v1alpha1/mcp.proto + mcp/v1alpha1/metadata.proto + mcp/v1alpha1/resource.proto + + It has these top-level messages: + SinkNode + MeshConfigRequest + MeshConfigResponse + IncrementalMeshConfigRequest + IncrementalMeshConfigResponse + RequestResources + Resources + Metadata + Resource +*/ package v1alpha1 import proto "github.com/gogo/protobuf/proto" import fmt "fmt" import math "math" -import google_protobuf3 "github.com/gogo/protobuf/types" import google_rpc "github.com/gogo/googleapis/google/rpc" import _ "github.com/gogo/protobuf/gogoproto" @@ -20,33 +38,39 @@ var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf -// Identifies a specific MCP client instance. The client identifier is -// presented to the management server, which may use this identifier -// to distinguish per client configuration for serving. This -// information is not authoriative. Authoritative identity should come +// This is a compile-time assertion to ensure that this generated file +// is compatible with the proto package it is being compiled against. +// A compilation error at this line likely means your copy of the +// proto package needs to be updated. +const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package + +// Identifies a specific MCP sink node instance. The node identifier is +// presented to the resource source, which may use this identifier +// to distinguish per sink configuration for serving. This +// information is not authoritative. 
Authoritative identity should come // from the underlying transport layer (e.g. rpc credentials). -type Client struct { - // An opaque identifier for the MCP client. +type SinkNode struct { + // An opaque identifier for the MCP node. Id string `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"` - // Opaque metadata extending the client identifier. - Metadata *google_protobuf3.Struct `protobuf:"bytes,2,opt,name=metadata" json:"metadata,omitempty"` + // Opaque annotations extending the node identifier. + Annotations map[string]string `protobuf:"bytes,2,rep,name=annotations" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` } -func (m *Client) Reset() { *m = Client{} } -func (m *Client) String() string { return proto.CompactTextString(m) } -func (*Client) ProtoMessage() {} -func (*Client) Descriptor() ([]byte, []int) { return fileDescriptorMcp, []int{0} } +func (m *SinkNode) Reset() { *m = SinkNode{} } +func (m *SinkNode) String() string { return proto.CompactTextString(m) } +func (*SinkNode) ProtoMessage() {} +func (*SinkNode) Descriptor() ([]byte, []int) { return fileDescriptorMcp, []int{0} } -func (m *Client) GetId() string { +func (m *SinkNode) GetId() string { if m != nil { return m.Id } return "" } -func (m *Client) GetMetadata() *google_protobuf3.Struct { +func (m *SinkNode) GetAnnotations() map[string]string { if m != nil { - return m.Metadata + return m.Annotations } return nil } @@ -63,8 +87,8 @@ type MeshConfigRequest struct { // the previous API config version respectively. Each type_url (see // below) has an independent version associated with it. VersionInfo string `protobuf:"bytes,1,opt,name=version_info,json=versionInfo,proto3" json:"version_info,omitempty"` - // The client making the request. - Client *Client `protobuf:"bytes,2,opt,name=client" json:"client,omitempty"` + // The sink node making the request. 
+ SinkNode *SinkNode `protobuf:"bytes,2,opt,name=sink_node,json=sinkNode" json:"sink_node,omitempty"` // Type of the resource that is being requested, e.g. // "type.googleapis.com/istio.io.networking.v1alpha3.VirtualService". TypeUrl string `protobuf:"bytes,3,opt,name=type_url,json=typeUrl,proto3" json:"type_url,omitempty"` @@ -94,9 +118,9 @@ func (m *MeshConfigRequest) GetVersionInfo() string { return "" } -func (m *MeshConfigRequest) GetClient() *Client { +func (m *MeshConfigRequest) GetSinkNode() *SinkNode { if m != nil { - return m.Client + return m.SinkNode } return nil } @@ -127,12 +151,12 @@ func (m *MeshConfigRequest) GetErrorDetail() *google_rpc.Status { type MeshConfigResponse struct { // The version of the response data. VersionInfo string `protobuf:"bytes,1,opt,name=version_info,json=versionInfo,proto3" json:"version_info,omitempty"` - // The response resources wrapped in the common MCP *Envelope* + // The response resources wrapped in the common MCP *Resource* // message. - Envelopes []Envelope `protobuf:"bytes,2,rep,name=envelopes" json:"envelopes"` - // Type URL for resources wrapped in the provided envelope(s). This + Resources []Resource `protobuf:"bytes,2,rep,name=resources" json:"resources"` + // Type URL for resources wrapped in the provided resources(s). This // must be consistent with the type_url in the wrapper messages if - // envelopes is non-empty. + // resources is non-empty. TypeUrl string `protobuf:"bytes,3,opt,name=type_url,json=typeUrl,proto3" json:"type_url,omitempty"` // The nonce provides a way to explicitly ack a specific // MeshConfigResponse in a following MeshConfigRequest. 
Additional @@ -157,9 +181,9 @@ func (m *MeshConfigResponse) GetVersionInfo() string { return "" } -func (m *MeshConfigResponse) GetEnvelopes() []Envelope { +func (m *MeshConfigResponse) GetResources() []Resource { if m != nil { - return m.Envelopes + return m.Resources } return nil } @@ -186,8 +210,8 @@ func (m *MeshConfigResponse) GetNonce() string { // In this case the response_nonce is set to the nonce value in the Response. // ACK or NACK is determined by the absence or presence of error_detail. type IncrementalMeshConfigRequest struct { - // The client making the request. - Client *Client `protobuf:"bytes,1,opt,name=client" json:"client,omitempty"` + // The sink node making the request. + SinkNode *SinkNode `protobuf:"bytes,1,opt,name=sink_node,json=sinkNode" json:"sink_node,omitempty"` // Type of the resource that is being requested, e.g. // "type.googleapis.com/istio.io.networking.v1alpha3.VirtualService". TypeUrl string `protobuf:"bytes,2,opt,name=type_url,json=typeUrl,proto3" json:"type_url,omitempty"` @@ -213,9 +237,9 @@ func (m *IncrementalMeshConfigRequest) String() string { return proto func (*IncrementalMeshConfigRequest) ProtoMessage() {} func (*IncrementalMeshConfigRequest) Descriptor() ([]byte, []int) { return fileDescriptorMcp, []int{3} } -func (m *IncrementalMeshConfigRequest) GetClient() *Client { +func (m *IncrementalMeshConfigRequest) GetSinkNode() *SinkNode { if m != nil { - return m.Client + return m.SinkNode } return nil } @@ -263,10 +287,10 @@ func (m *IncrementalMeshConfigRequest) GetErrorDetail() *google_rpc.Status { type IncrementalMeshConfigResponse struct { // The version of the response data (used for debugging). SystemVersionInfo string `protobuf:"bytes,1,opt,name=system_version_info,json=systemVersionInfo,proto3" json:"system_version_info,omitempty"` - // The response resources wrapped in the common MCP *Envelope* + // The response resources wrapped in the common MCP *Resource* // message. 
These are typed resources that match the type url in the // IncrementalMeshConfigRequest. - Envelopes []Envelope `protobuf:"bytes,2,rep,name=envelopes" json:"envelopes"` + Resources []Resource `protobuf:"bytes,2,rep,name=resources" json:"resources"` // Resources names of resources that have be deleted and to be // removed from the MCP Client. Removed resources for missing // resources can be ignored. @@ -289,9 +313,9 @@ func (m *IncrementalMeshConfigResponse) GetSystemVersionInfo() string { return "" } -func (m *IncrementalMeshConfigResponse) GetEnvelopes() []Envelope { +func (m *IncrementalMeshConfigResponse) GetResources() []Resource { if m != nil { - return m.Envelopes + return m.Resources } return nil } @@ -310,21 +334,169 @@ func (m *IncrementalMeshConfigResponse) GetNonce() string { return "" } +// A RequestResource can be sent in two situations: +// +// Initial message in an MCP bidirectional change stream +// as an ACK or NACK response to a previous Resources. In +// this case the response_nonce is set to the nonce value +// in the Resources. ACK/NACK is determined by the presence +// of error_detail. +// +// * ACK (nonce!="",error_details==nil) +// * NACK (nonce!="",error_details!=nil) +// * New/Update request (nonce=="",error_details ignored) +// +type RequestResources struct { + // The sink node making the request. + SinkNode *SinkNode `protobuf:"bytes,1,opt,name=sink_node,json=sinkNode" json:"sink_node,omitempty"` + // Type of resource collection that is being requested, e.g. + // + // istio/networking/v1alpha3/VirtualService + // k8s// + Collection string `protobuf:"bytes,2,opt,name=collection,proto3" json:"collection,omitempty"` + // When the RequestResources is the first in a stream, the initial_resource_versions must + // be populated. Otherwise, initial_resource_versions must be omitted. The keys are the + // resources names of the MCP resources known to the MCP client. The values in the map + // are the associated resource level version info. 
+ InitialResourceVersions map[string]string `protobuf:"bytes,3,rep,name=initial_resource_versions,json=initialResourceVersions" json:"initial_resource_versions,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + // When the RequestResources is an ACK or NACK message in response to a previous RequestResources, + // the response_nonce must be the nonce in the RequestResources. Otherwise response_nonce must + // be omitted. + ResponseNonce string `protobuf:"bytes,4,opt,name=response_nonce,json=responseNonce,proto3" json:"response_nonce,omitempty"` + // This is populated when the previously received resources could not be applied + // The *message* field in *error_details* provides the source internal error + // related to the failure. + ErrorDetail *google_rpc.Status `protobuf:"bytes,5,opt,name=error_detail,json=errorDetail" json:"error_detail,omitempty"` +} + +func (m *RequestResources) Reset() { *m = RequestResources{} } +func (m *RequestResources) String() string { return proto.CompactTextString(m) } +func (*RequestResources) ProtoMessage() {} +func (*RequestResources) Descriptor() ([]byte, []int) { return fileDescriptorMcp, []int{5} } + +func (m *RequestResources) GetSinkNode() *SinkNode { + if m != nil { + return m.SinkNode + } + return nil +} + +func (m *RequestResources) GetCollection() string { + if m != nil { + return m.Collection + } + return "" +} + +func (m *RequestResources) GetInitialResourceVersions() map[string]string { + if m != nil { + return m.InitialResourceVersions + } + return nil +} + +func (m *RequestResources) GetResponseNonce() string { + if m != nil { + return m.ResponseNonce + } + return "" +} + +func (m *RequestResources) GetErrorDetail() *google_rpc.Status { + if m != nil { + return m.ErrorDetail + } + return nil +} + +// Resources do not need to include a full snapshot of the tracked +// resources. Instead they are a diff to the state of a MCP client. 
+// Per resource versions allow sources and sinks to track state at +// the resource granularity. An MCP incremental session is always +// in the context of a gRPC bidirectional stream. This allows the +// MCP source to keep track of the state of MCP sink connected to +// it. +// +// In Incremental MCP the nonce field is required and used to pair +// Resources to an RequestResources ACK or NACK. +type Resources struct { + // The version of the response data (used for debugging). + SystemVersionInfo string `protobuf:"bytes,1,opt,name=system_version_info,json=systemVersionInfo,proto3" json:"system_version_info,omitempty"` + // Type of resource collection that is being requested, e.g. + // + // istio/networking/v1alpha3/VirtualService + // k8s// + Collection string `protobuf:"bytes,2,opt,name=collection,proto3" json:"collection,omitempty"` + // The response resources wrapped in the common MCP *Resource* message. + // These are typed resources that match the type url in the + // RequestResources message. + Resources []Resource `protobuf:"bytes,3,rep,name=resources" json:"resources"` + // Names of resources that have been deleted and to be + // removed from the MCP sink node. Removed resources for missing + // resources can be ignored. + RemovedResources []string `protobuf:"bytes,4,rep,name=removed_resources,json=removedResources" json:"removed_resources,omitempty"` + // Required. The nonce provides a way for RequestChange to uniquely + // reference a RequestResources. 
+ Nonce string `protobuf:"bytes,5,opt,name=nonce,proto3" json:"nonce,omitempty"` +} + +func (m *Resources) Reset() { *m = Resources{} } +func (m *Resources) String() string { return proto.CompactTextString(m) } +func (*Resources) ProtoMessage() {} +func (*Resources) Descriptor() ([]byte, []int) { return fileDescriptorMcp, []int{6} } + +func (m *Resources) GetSystemVersionInfo() string { + if m != nil { + return m.SystemVersionInfo + } + return "" +} + +func (m *Resources) GetCollection() string { + if m != nil { + return m.Collection + } + return "" +} + +func (m *Resources) GetResources() []Resource { + if m != nil { + return m.Resources + } + return nil +} + +func (m *Resources) GetRemovedResources() []string { + if m != nil { + return m.RemovedResources + } + return nil +} + +func (m *Resources) GetNonce() string { + if m != nil { + return m.Nonce + } + return "" +} + func init() { - proto.RegisterType((*Client)(nil), "istio.mcp.v1alpha1.Client") + proto.RegisterType((*SinkNode)(nil), "istio.mcp.v1alpha1.SinkNode") proto.RegisterType((*MeshConfigRequest)(nil), "istio.mcp.v1alpha1.MeshConfigRequest") proto.RegisterType((*MeshConfigResponse)(nil), "istio.mcp.v1alpha1.MeshConfigResponse") proto.RegisterType((*IncrementalMeshConfigRequest)(nil), "istio.mcp.v1alpha1.IncrementalMeshConfigRequest") proto.RegisterType((*IncrementalMeshConfigResponse)(nil), "istio.mcp.v1alpha1.IncrementalMeshConfigResponse") + proto.RegisterType((*RequestResources)(nil), "istio.mcp.v1alpha1.RequestResources") + proto.RegisterType((*Resources)(nil), "istio.mcp.v1alpha1.Resources") } -func (this *Client) Equal(that interface{}) bool { +func (this *SinkNode) Equal(that interface{}) bool { if that == nil { return this == nil } - that1, ok := that.(*Client) + that1, ok := that.(*SinkNode) if !ok { - that2, ok := that.(Client) + that2, ok := that.(SinkNode) if ok { that1 = &that2 } else { @@ -339,9 +511,14 @@ func (this *Client) Equal(that interface{}) bool { if this.Id != that1.Id { return 
false } - if !this.Metadata.Equal(that1.Metadata) { + if len(this.Annotations) != len(that1.Annotations) { return false } + for i := range this.Annotations { + if this.Annotations[i] != that1.Annotations[i] { + return false + } + } return true } func (this *MeshConfigRequest) Equal(that interface{}) bool { @@ -366,7 +543,7 @@ func (this *MeshConfigRequest) Equal(that interface{}) bool { if this.VersionInfo != that1.VersionInfo { return false } - if !this.Client.Equal(that1.Client) { + if !this.SinkNode.Equal(that1.SinkNode) { return false } if this.TypeUrl != that1.TypeUrl { @@ -402,11 +579,11 @@ func (this *MeshConfigResponse) Equal(that interface{}) bool { if this.VersionInfo != that1.VersionInfo { return false } - if len(this.Envelopes) != len(that1.Envelopes) { + if len(this.Resources) != len(that1.Resources) { return false } - for i := range this.Envelopes { - if !this.Envelopes[i].Equal(&that1.Envelopes[i]) { + for i := range this.Resources { + if !this.Resources[i].Equal(&that1.Resources[i]) { return false } } @@ -437,7 +614,7 @@ func (this *IncrementalMeshConfigRequest) Equal(that interface{}) bool { } else if this == nil { return false } - if !this.Client.Equal(that1.Client) { + if !this.SinkNode.Equal(that1.SinkNode) { return false } if this.TypeUrl != that1.TypeUrl { @@ -481,11 +658,98 @@ func (this *IncrementalMeshConfigResponse) Equal(that interface{}) bool { if this.SystemVersionInfo != that1.SystemVersionInfo { return false } - if len(this.Envelopes) != len(that1.Envelopes) { + if len(this.Resources) != len(that1.Resources) { + return false + } + for i := range this.Resources { + if !this.Resources[i].Equal(&that1.Resources[i]) { + return false + } + } + if len(this.RemovedResources) != len(that1.RemovedResources) { + return false + } + for i := range this.RemovedResources { + if this.RemovedResources[i] != that1.RemovedResources[i] { + return false + } + } + if this.Nonce != that1.Nonce { + return false + } + return true +} +func (this 
*RequestResources) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*RequestResources) + if !ok { + that2, ok := that.(RequestResources) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if !this.SinkNode.Equal(that1.SinkNode) { + return false + } + if this.Collection != that1.Collection { + return false + } + if len(this.InitialResourceVersions) != len(that1.InitialResourceVersions) { + return false + } + for i := range this.InitialResourceVersions { + if this.InitialResourceVersions[i] != that1.InitialResourceVersions[i] { + return false + } + } + if this.ResponseNonce != that1.ResponseNonce { + return false + } + if !this.ErrorDetail.Equal(that1.ErrorDetail) { + return false + } + return true +} +func (this *Resources) Equal(that interface{}) bool { + if that == nil { + return this == nil + } + + that1, ok := that.(*Resources) + if !ok { + that2, ok := that.(Resources) + if ok { + that1 = &that2 + } else { + return false + } + } + if that1 == nil { + return this == nil + } else if this == nil { + return false + } + if this.SystemVersionInfo != that1.SystemVersionInfo { + return false + } + if this.Collection != that1.Collection { + return false + } + if len(this.Resources) != len(that1.Resources) { return false } - for i := range this.Envelopes { - if !this.Envelopes[i].Equal(&that1.Envelopes[i]) { + for i := range this.Resources { + if !this.Resources[i].Equal(&that1.Resources[i]) { return false } } @@ -686,122 +950,333 @@ var _AggregatedMeshConfigService_serviceDesc = grpc.ServiceDesc{ Metadata: "mcp/v1alpha1/mcp.proto", } -func (m *Client) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalTo(dAtA) - if err != nil { - return nil, err - } - return dAtA[:n], nil +// Client API for ResourceSource service + +type ResourceSourceClient interface { + // The sink, 
acting as gRPC client, establishes a new resource stream + // with the source. The sink sends RequestResources message to + // and receives Resources messages from the source. + EstablishResourceStream(ctx context.Context, opts ...grpc.CallOption) (ResourceSource_EstablishResourceStreamClient, error) } -func (m *Client) MarshalTo(dAtA []byte) (int, error) { - var i int - _ = i - var l int - _ = l - if len(m.Id) > 0 { - dAtA[i] = 0xa - i++ - i = encodeVarintMcp(dAtA, i, uint64(len(m.Id))) - i += copy(dAtA[i:], m.Id) - } - if m.Metadata != nil { - dAtA[i] = 0x12 - i++ - i = encodeVarintMcp(dAtA, i, uint64(m.Metadata.Size())) - n1, err := m.Metadata.MarshalTo(dAtA[i:]) - if err != nil { - return 0, err - } - i += n1 - } - return i, nil +type resourceSourceClient struct { + cc *grpc.ClientConn } -func (m *MeshConfigRequest) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalTo(dAtA) +func NewResourceSourceClient(cc *grpc.ClientConn) ResourceSourceClient { + return &resourceSourceClient{cc} +} + +func (c *resourceSourceClient) EstablishResourceStream(ctx context.Context, opts ...grpc.CallOption) (ResourceSource_EstablishResourceStreamClient, error) { + stream, err := grpc.NewClientStream(ctx, &_ResourceSource_serviceDesc.Streams[0], c.cc, "/istio.mcp.v1alpha1.ResourceSource/EstablishResourceStream", opts...) 
if err != nil { return nil, err } - return dAtA[:n], nil + x := &resourceSourceEstablishResourceStreamClient{stream} + return x, nil } -func (m *MeshConfigRequest) MarshalTo(dAtA []byte) (int, error) { - var i int - _ = i - var l int - _ = l - if len(m.VersionInfo) > 0 { - dAtA[i] = 0xa - i++ - i = encodeVarintMcp(dAtA, i, uint64(len(m.VersionInfo))) - i += copy(dAtA[i:], m.VersionInfo) - } - if m.Client != nil { - dAtA[i] = 0x12 - i++ - i = encodeVarintMcp(dAtA, i, uint64(m.Client.Size())) - n2, err := m.Client.MarshalTo(dAtA[i:]) - if err != nil { - return 0, err - } - i += n2 - } - if len(m.TypeUrl) > 0 { - dAtA[i] = 0x1a - i++ - i = encodeVarintMcp(dAtA, i, uint64(len(m.TypeUrl))) - i += copy(dAtA[i:], m.TypeUrl) - } - if len(m.ResponseNonce) > 0 { - dAtA[i] = 0x22 - i++ - i = encodeVarintMcp(dAtA, i, uint64(len(m.ResponseNonce))) - i += copy(dAtA[i:], m.ResponseNonce) - } - if m.ErrorDetail != nil { - dAtA[i] = 0x2a - i++ - i = encodeVarintMcp(dAtA, i, uint64(m.ErrorDetail.Size())) - n3, err := m.ErrorDetail.MarshalTo(dAtA[i:]) - if err != nil { - return 0, err - } - i += n3 - } - return i, nil +type ResourceSource_EstablishResourceStreamClient interface { + Send(*RequestResources) error + Recv() (*Resources, error) + grpc.ClientStream } -func (m *MeshConfigResponse) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalTo(dAtA) - if err != nil { +type resourceSourceEstablishResourceStreamClient struct { + grpc.ClientStream +} + +func (x *resourceSourceEstablishResourceStreamClient) Send(m *RequestResources) error { + return x.ClientStream.SendMsg(m) +} + +func (x *resourceSourceEstablishResourceStreamClient) Recv() (*Resources, error) { + m := new(Resources) + if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err } - return dAtA[:n], nil + return m, nil } -func (m *MeshConfigResponse) MarshalTo(dAtA []byte) (int, error) { - var i int - _ = i - var l int - _ = l - if len(m.VersionInfo) > 0 { - 
dAtA[i] = 0xa - i++ - i = encodeVarintMcp(dAtA, i, uint64(len(m.VersionInfo))) - i += copy(dAtA[i:], m.VersionInfo) - } - if len(m.Envelopes) > 0 { - for _, msg := range m.Envelopes { - dAtA[i] = 0x12 - i++ - i = encodeVarintMcp(dAtA, i, uint64(msg.Size())) +// Server API for ResourceSource service + +type ResourceSourceServer interface { + // The sink, acting as gRPC client, establishes a new resource stream + // with the source. The sink sends RequestResources message to + // and receives Resources messages from the source. + EstablishResourceStream(ResourceSource_EstablishResourceStreamServer) error +} + +func RegisterResourceSourceServer(s *grpc.Server, srv ResourceSourceServer) { + s.RegisterService(&_ResourceSource_serviceDesc, srv) +} + +func _ResourceSource_EstablishResourceStream_Handler(srv interface{}, stream grpc.ServerStream) error { + return srv.(ResourceSourceServer).EstablishResourceStream(&resourceSourceEstablishResourceStreamServer{stream}) +} + +type ResourceSource_EstablishResourceStreamServer interface { + Send(*Resources) error + Recv() (*RequestResources, error) + grpc.ServerStream +} + +type resourceSourceEstablishResourceStreamServer struct { + grpc.ServerStream +} + +func (x *resourceSourceEstablishResourceStreamServer) Send(m *Resources) error { + return x.ServerStream.SendMsg(m) +} + +func (x *resourceSourceEstablishResourceStreamServer) Recv() (*RequestResources, error) { + m := new(RequestResources) + if err := x.ServerStream.RecvMsg(m); err != nil { + return nil, err + } + return m, nil +} + +var _ResourceSource_serviceDesc = grpc.ServiceDesc{ + ServiceName: "istio.mcp.v1alpha1.ResourceSource", + HandlerType: (*ResourceSourceServer)(nil), + Methods: []grpc.MethodDesc{}, + Streams: []grpc.StreamDesc{ + { + StreamName: "EstablishResourceStream", + Handler: _ResourceSource_EstablishResourceStream_Handler, + ServerStreams: true, + ClientStreams: true, + }, + }, + Metadata: "mcp/v1alpha1/mcp.proto", +} + +// Client API for ResourceSink 
service + +type ResourceSinkClient interface { + // The source, acting as gRPC client, establishes a new resource stream + // with the sink. The sink sends RequestResources message to and + // receives Resources messages from the source. + EstablishResourceStream(ctx context.Context, opts ...grpc.CallOption) (ResourceSink_EstablishResourceStreamClient, error) +} + +type resourceSinkClient struct { + cc *grpc.ClientConn +} + +func NewResourceSinkClient(cc *grpc.ClientConn) ResourceSinkClient { + return &resourceSinkClient{cc} +} + +func (c *resourceSinkClient) EstablishResourceStream(ctx context.Context, opts ...grpc.CallOption) (ResourceSink_EstablishResourceStreamClient, error) { + stream, err := grpc.NewClientStream(ctx, &_ResourceSink_serviceDesc.Streams[0], c.cc, "/istio.mcp.v1alpha1.ResourceSink/EstablishResourceStream", opts...) + if err != nil { + return nil, err + } + x := &resourceSinkEstablishResourceStreamClient{stream} + return x, nil +} + +type ResourceSink_EstablishResourceStreamClient interface { + Send(*Resources) error + Recv() (*RequestResources, error) + grpc.ClientStream +} + +type resourceSinkEstablishResourceStreamClient struct { + grpc.ClientStream +} + +func (x *resourceSinkEstablishResourceStreamClient) Send(m *Resources) error { + return x.ClientStream.SendMsg(m) +} + +func (x *resourceSinkEstablishResourceStreamClient) Recv() (*RequestResources, error) { + m := new(RequestResources) + if err := x.ClientStream.RecvMsg(m); err != nil { + return nil, err + } + return m, nil +} + +// Server API for ResourceSink service + +type ResourceSinkServer interface { + // The source, acting as gRPC client, establishes a new resource stream + // with the sink. The sink sends RequestResources message to and + // receives Resources messages from the source. 
+ EstablishResourceStream(ResourceSink_EstablishResourceStreamServer) error +} + +func RegisterResourceSinkServer(s *grpc.Server, srv ResourceSinkServer) { + s.RegisterService(&_ResourceSink_serviceDesc, srv) +} + +func _ResourceSink_EstablishResourceStream_Handler(srv interface{}, stream grpc.ServerStream) error { + return srv.(ResourceSinkServer).EstablishResourceStream(&resourceSinkEstablishResourceStreamServer{stream}) +} + +type ResourceSink_EstablishResourceStreamServer interface { + Send(*RequestResources) error + Recv() (*Resources, error) + grpc.ServerStream +} + +type resourceSinkEstablishResourceStreamServer struct { + grpc.ServerStream +} + +func (x *resourceSinkEstablishResourceStreamServer) Send(m *RequestResources) error { + return x.ServerStream.SendMsg(m) +} + +func (x *resourceSinkEstablishResourceStreamServer) Recv() (*Resources, error) { + m := new(Resources) + if err := x.ServerStream.RecvMsg(m); err != nil { + return nil, err + } + return m, nil +} + +var _ResourceSink_serviceDesc = grpc.ServiceDesc{ + ServiceName: "istio.mcp.v1alpha1.ResourceSink", + HandlerType: (*ResourceSinkServer)(nil), + Methods: []grpc.MethodDesc{}, + Streams: []grpc.StreamDesc{ + { + StreamName: "EstablishResourceStream", + Handler: _ResourceSink_EstablishResourceStream_Handler, + ServerStreams: true, + ClientStreams: true, + }, + }, + Metadata: "mcp/v1alpha1/mcp.proto", +} + +func (m *SinkNode) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalTo(dAtA) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *SinkNode) MarshalTo(dAtA []byte) (int, error) { + var i int + _ = i + var l int + _ = l + if len(m.Id) > 0 { + dAtA[i] = 0xa + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(m.Id))) + i += copy(dAtA[i:], m.Id) + } + if len(m.Annotations) > 0 { + for k, _ := range m.Annotations { + dAtA[i] = 0x12 + i++ + v := m.Annotations[k] + mapSize := 1 + len(k) + sovMcp(uint64(len(k))) + 1 + len(v) + 
sovMcp(uint64(len(v))) + i = encodeVarintMcp(dAtA, i, uint64(mapSize)) + dAtA[i] = 0xa + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(k))) + i += copy(dAtA[i:], k) + dAtA[i] = 0x12 + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(v))) + i += copy(dAtA[i:], v) + } + } + return i, nil +} + +func (m *MeshConfigRequest) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalTo(dAtA) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *MeshConfigRequest) MarshalTo(dAtA []byte) (int, error) { + var i int + _ = i + var l int + _ = l + if len(m.VersionInfo) > 0 { + dAtA[i] = 0xa + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(m.VersionInfo))) + i += copy(dAtA[i:], m.VersionInfo) + } + if m.SinkNode != nil { + dAtA[i] = 0x12 + i++ + i = encodeVarintMcp(dAtA, i, uint64(m.SinkNode.Size())) + n1, err := m.SinkNode.MarshalTo(dAtA[i:]) + if err != nil { + return 0, err + } + i += n1 + } + if len(m.TypeUrl) > 0 { + dAtA[i] = 0x1a + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(m.TypeUrl))) + i += copy(dAtA[i:], m.TypeUrl) + } + if len(m.ResponseNonce) > 0 { + dAtA[i] = 0x22 + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(m.ResponseNonce))) + i += copy(dAtA[i:], m.ResponseNonce) + } + if m.ErrorDetail != nil { + dAtA[i] = 0x2a + i++ + i = encodeVarintMcp(dAtA, i, uint64(m.ErrorDetail.Size())) + n2, err := m.ErrorDetail.MarshalTo(dAtA[i:]) + if err != nil { + return 0, err + } + i += n2 + } + return i, nil +} + +func (m *MeshConfigResponse) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalTo(dAtA) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *MeshConfigResponse) MarshalTo(dAtA []byte) (int, error) { + var i int + _ = i + var l int + _ = l + if len(m.VersionInfo) > 0 { + dAtA[i] = 0xa + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(m.VersionInfo))) + i += copy(dAtA[i:], m.VersionInfo) + } + if len(m.Resources) > 0 { + for 
_, msg := range m.Resources { + dAtA[i] = 0x12 + i++ + i = encodeVarintMcp(dAtA, i, uint64(msg.Size())) n, err := msg.MarshalTo(dAtA[i:]) if err != nil { return 0, err @@ -839,15 +1314,15 @@ func (m *IncrementalMeshConfigRequest) MarshalTo(dAtA []byte) (int, error) { _ = i var l int _ = l - if m.Client != nil { + if m.SinkNode != nil { dAtA[i] = 0xa i++ - i = encodeVarintMcp(dAtA, i, uint64(m.Client.Size())) - n4, err := m.Client.MarshalTo(dAtA[i:]) + i = encodeVarintMcp(dAtA, i, uint64(m.SinkNode.Size())) + n3, err := m.SinkNode.MarshalTo(dAtA[i:]) if err != nil { return 0, err } - i += n4 + i += n3 } if len(m.TypeUrl) > 0 { dAtA[i] = 0x12 @@ -882,11 +1357,11 @@ func (m *IncrementalMeshConfigRequest) MarshalTo(dAtA []byte) (int, error) { dAtA[i] = 0x2a i++ i = encodeVarintMcp(dAtA, i, uint64(m.ErrorDetail.Size())) - n5, err := m.ErrorDetail.MarshalTo(dAtA[i:]) + n4, err := m.ErrorDetail.MarshalTo(dAtA[i:]) if err != nil { return 0, err } - i += n5 + i += n4 } return i, nil } @@ -912,8 +1387,8 @@ func (m *IncrementalMeshConfigResponse) MarshalTo(dAtA []byte) (int, error) { i = encodeVarintMcp(dAtA, i, uint64(len(m.SystemVersionInfo))) i += copy(dAtA[i:], m.SystemVersionInfo) } - if len(m.Envelopes) > 0 { - for _, msg := range m.Envelopes { + if len(m.Resources) > 0 { + for _, msg := range m.Resources { dAtA[i] = 0x12 i++ i = encodeVarintMcp(dAtA, i, uint64(msg.Size())) @@ -948,49 +1423,183 @@ func (m *IncrementalMeshConfigResponse) MarshalTo(dAtA []byte) (int, error) { return i, nil } -func encodeVarintMcp(dAtA []byte, offset int, v uint64) int { - for v >= 1<<7 { - dAtA[offset] = uint8(v&0x7f | 0x80) - v >>= 7 - offset++ - } - dAtA[offset] = uint8(v) - return offset + 1 -} -func (m *Client) Size() (n int) { - var l int - _ = l - l = len(m.Id) - if l > 0 { - n += 1 + l + sovMcp(uint64(l)) - } - if m.Metadata != nil { - l = m.Metadata.Size() - n += 1 + l + sovMcp(uint64(l)) +func (m *RequestResources) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = 
make([]byte, size) + n, err := m.MarshalTo(dAtA) + if err != nil { + return nil, err } - return n + return dAtA[:n], nil } -func (m *MeshConfigRequest) Size() (n int) { +func (m *RequestResources) MarshalTo(dAtA []byte) (int, error) { + var i int + _ = i var l int _ = l - l = len(m.VersionInfo) - if l > 0 { - n += 1 + l + sovMcp(uint64(l)) - } - if m.Client != nil { - l = m.Client.Size() - n += 1 + l + sovMcp(uint64(l)) - } - l = len(m.TypeUrl) - if l > 0 { - n += 1 + l + sovMcp(uint64(l)) + if m.SinkNode != nil { + dAtA[i] = 0xa + i++ + i = encodeVarintMcp(dAtA, i, uint64(m.SinkNode.Size())) + n5, err := m.SinkNode.MarshalTo(dAtA[i:]) + if err != nil { + return 0, err + } + i += n5 } - l = len(m.ResponseNonce) - if l > 0 { - n += 1 + l + sovMcp(uint64(l)) + if len(m.Collection) > 0 { + dAtA[i] = 0x12 + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(m.Collection))) + i += copy(dAtA[i:], m.Collection) } - if m.ErrorDetail != nil { + if len(m.InitialResourceVersions) > 0 { + for k, _ := range m.InitialResourceVersions { + dAtA[i] = 0x1a + i++ + v := m.InitialResourceVersions[k] + mapSize := 1 + len(k) + sovMcp(uint64(len(k))) + 1 + len(v) + sovMcp(uint64(len(v))) + i = encodeVarintMcp(dAtA, i, uint64(mapSize)) + dAtA[i] = 0xa + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(k))) + i += copy(dAtA[i:], k) + dAtA[i] = 0x12 + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(v))) + i += copy(dAtA[i:], v) + } + } + if len(m.ResponseNonce) > 0 { + dAtA[i] = 0x22 + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(m.ResponseNonce))) + i += copy(dAtA[i:], m.ResponseNonce) + } + if m.ErrorDetail != nil { + dAtA[i] = 0x2a + i++ + i = encodeVarintMcp(dAtA, i, uint64(m.ErrorDetail.Size())) + n6, err := m.ErrorDetail.MarshalTo(dAtA[i:]) + if err != nil { + return 0, err + } + i += n6 + } + return i, nil +} + +func (m *Resources) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalTo(dAtA) + if err != nil { + return nil, err + } + return 
dAtA[:n], nil +} + +func (m *Resources) MarshalTo(dAtA []byte) (int, error) { + var i int + _ = i + var l int + _ = l + if len(m.SystemVersionInfo) > 0 { + dAtA[i] = 0xa + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(m.SystemVersionInfo))) + i += copy(dAtA[i:], m.SystemVersionInfo) + } + if len(m.Collection) > 0 { + dAtA[i] = 0x12 + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(m.Collection))) + i += copy(dAtA[i:], m.Collection) + } + if len(m.Resources) > 0 { + for _, msg := range m.Resources { + dAtA[i] = 0x1a + i++ + i = encodeVarintMcp(dAtA, i, uint64(msg.Size())) + n, err := msg.MarshalTo(dAtA[i:]) + if err != nil { + return 0, err + } + i += n + } + } + if len(m.RemovedResources) > 0 { + for _, s := range m.RemovedResources { + dAtA[i] = 0x22 + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.Nonce) > 0 { + dAtA[i] = 0x2a + i++ + i = encodeVarintMcp(dAtA, i, uint64(len(m.Nonce))) + i += copy(dAtA[i:], m.Nonce) + } + return i, nil +} + +func encodeVarintMcp(dAtA []byte, offset int, v uint64) int { + for v >= 1<<7 { + dAtA[offset] = uint8(v&0x7f | 0x80) + v >>= 7 + offset++ + } + dAtA[offset] = uint8(v) + return offset + 1 +} +func (m *SinkNode) Size() (n int) { + var l int + _ = l + l = len(m.Id) + if l > 0 { + n += 1 + l + sovMcp(uint64(l)) + } + if len(m.Annotations) > 0 { + for k, v := range m.Annotations { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovMcp(uint64(len(k))) + 1 + len(v) + sovMcp(uint64(len(v))) + n += mapEntrySize + 1 + sovMcp(uint64(mapEntrySize)) + } + } + return n +} + +func (m *MeshConfigRequest) Size() (n int) { + var l int + _ = l + l = len(m.VersionInfo) + if l > 0 { + n += 1 + l + sovMcp(uint64(l)) + } + if m.SinkNode != nil { + l = m.SinkNode.Size() + n += 1 + l + sovMcp(uint64(l)) + } + l = len(m.TypeUrl) + if l > 0 { + n += 1 + l + sovMcp(uint64(l)) + } + l = len(m.ResponseNonce) + if l > 0 { + n += 1 + l + 
sovMcp(uint64(l)) + } + if m.ErrorDetail != nil { l = m.ErrorDetail.Size() n += 1 + l + sovMcp(uint64(l)) } @@ -1004,8 +1613,8 @@ func (m *MeshConfigResponse) Size() (n int) { if l > 0 { n += 1 + l + sovMcp(uint64(l)) } - if len(m.Envelopes) > 0 { - for _, e := range m.Envelopes { + if len(m.Resources) > 0 { + for _, e := range m.Resources { l = e.Size() n += 1 + l + sovMcp(uint64(l)) } @@ -1024,8 +1633,8 @@ func (m *MeshConfigResponse) Size() (n int) { func (m *IncrementalMeshConfigRequest) Size() (n int) { var l int _ = l - if m.Client != nil { - l = m.Client.Size() + if m.SinkNode != nil { + l = m.SinkNode.Size() n += 1 + l + sovMcp(uint64(l)) } l = len(m.TypeUrl) @@ -1058,8 +1667,68 @@ func (m *IncrementalMeshConfigResponse) Size() (n int) { if l > 0 { n += 1 + l + sovMcp(uint64(l)) } - if len(m.Envelopes) > 0 { - for _, e := range m.Envelopes { + if len(m.Resources) > 0 { + for _, e := range m.Resources { + l = e.Size() + n += 1 + l + sovMcp(uint64(l)) + } + } + if len(m.RemovedResources) > 0 { + for _, s := range m.RemovedResources { + l = len(s) + n += 1 + l + sovMcp(uint64(l)) + } + } + l = len(m.Nonce) + if l > 0 { + n += 1 + l + sovMcp(uint64(l)) + } + return n +} + +func (m *RequestResources) Size() (n int) { + var l int + _ = l + if m.SinkNode != nil { + l = m.SinkNode.Size() + n += 1 + l + sovMcp(uint64(l)) + } + l = len(m.Collection) + if l > 0 { + n += 1 + l + sovMcp(uint64(l)) + } + if len(m.InitialResourceVersions) > 0 { + for k, v := range m.InitialResourceVersions { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovMcp(uint64(len(k))) + 1 + len(v) + sovMcp(uint64(len(v))) + n += mapEntrySize + 1 + sovMcp(uint64(mapEntrySize)) + } + } + l = len(m.ResponseNonce) + if l > 0 { + n += 1 + l + sovMcp(uint64(l)) + } + if m.ErrorDetail != nil { + l = m.ErrorDetail.Size() + n += 1 + l + sovMcp(uint64(l)) + } + return n +} + +func (m *Resources) Size() (n int) { + var l int + _ = l + l = len(m.SystemVersionInfo) + if l > 0 { + n += 1 + l + 
sovMcp(uint64(l)) + } + l = len(m.Collection) + if l > 0 { + n += 1 + l + sovMcp(uint64(l)) + } + if len(m.Resources) > 0 { + for _, e := range m.Resources { l = e.Size() n += 1 + l + sovMcp(uint64(l)) } @@ -1090,7 +1759,407 @@ func sovMcp(x uint64) (n int) { func sozMcp(x uint64) (n int) { return sovMcp(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } -func (m *Client) Unmarshal(dAtA []byte) error { +func (m *SinkNode) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: SinkNode: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: SinkNode: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Id", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthMcp + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Id = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Annotations", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= 
(int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthMcp + } + postIndex := iNdEx + msglen + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Annotations == nil { + m.Annotations = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthMcp + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthMcp + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipMcp(dAtA[iNdEx:]) + if err != nil { + 
return err + } + if skippy < 0 { + return ErrInvalidLengthMcp + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Annotations[mapkey] = mapvalue + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipMcp(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthMcp + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MeshConfigRequest) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: MeshConfigRequest: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: MeshConfigRequest: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field VersionInfo", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthMcp + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.VersionInfo = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field SinkNode", wireType) + } + var msglen int + for shift := 
uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthMcp + } + postIndex := iNdEx + msglen + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.SinkNode == nil { + m.SinkNode = &SinkNode{} + } + if err := m.SinkNode.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TypeUrl", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthMcp + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.TypeUrl = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ResponseNonce", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthMcp + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.ResponseNonce = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field ErrorDetail", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp 
+ } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthMcp + } + postIndex := iNdEx + msglen + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.ErrorDetail == nil { + m.ErrorDetail = &google_rpc.Status{} + } + if err := m.ErrorDetail.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipMcp(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthMcp + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *MeshConfigResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -1113,15 +2182,15 @@ func (m *Client) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: Client: wiretype end group for non-group") + return fmt.Errorf("proto: MeshConfigResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: Client: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: MeshConfigResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Id", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field VersionInfo", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -1146,11 +2215,11 @@ func (m *Client) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Id = string(dAtA[iNdEx:postIndex]) + m.VersionInfo = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Metadata", wireType) + return 
fmt.Errorf("proto: wrong wireType = %d for field Resources", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -1174,13 +2243,69 @@ func (m *Client) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Metadata == nil { - m.Metadata = &google_protobuf3.Struct{} - } - if err := m.Metadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Resources = append(m.Resources, Resource{}) + if err := m.Resources[len(m.Resources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field TypeUrl", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthMcp + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.TypeUrl = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Nonce", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthMcp + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Nonce = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipMcp(dAtA[iNdEx:]) @@ -1202,7 +2327,7 @@ func (m *Client) Unmarshal(dAtA []byte) error { } return nil } -func (m *MeshConfigRequest) Unmarshal(dAtA []byte) 
error { +func (m *IncrementalMeshConfigRequest) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -1225,17 +2350,17 @@ func (m *MeshConfigRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MeshConfigRequest: wiretype end group for non-group") + return fmt.Errorf("proto: IncrementalMeshConfigRequest: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MeshConfigRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: IncrementalMeshConfigRequest: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field VersionInfo", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SinkNode", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowMcp @@ -1245,26 +2370,30 @@ func (m *MeshConfigRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= (uint64(b) & 0x7F) << shift + msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthMcp } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } - m.VersionInfo = string(dAtA[iNdEx:postIndex]) + if m.SinkNode == nil { + m.SinkNode = &SinkNode{} + } + if err := m.SinkNode.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Client", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field TypeUrl", wireType) } - var msglen int + var stringLen uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowMcp @@ -1274,30 +2403,26 @@ func (m 
*MeshConfigRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - msglen |= (int(b) & 0x7F) << shift + stringLen |= (uint64(b) & 0x7F) << shift if b < 0x80 { break } } - if msglen < 0 { + intStringLen := int(stringLen) + if intStringLen < 0 { return ErrInvalidLengthMcp } - postIndex := iNdEx + msglen + postIndex := iNdEx + intStringLen if postIndex > l { return io.ErrUnexpectedEOF } - if m.Client == nil { - m.Client = &Client{} - } - if err := m.Client.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } + m.TypeUrl = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TypeUrl", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field InitialResourceVersions", wireType) } - var stringLen uint64 + var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { return ErrIntOverflowMcp @@ -1307,20 +2432,109 @@ func (m *MeshConfigRequest) Unmarshal(dAtA []byte) error { } b := dAtA[iNdEx] iNdEx++ - stringLen |= (uint64(b) & 0x7F) << shift + msglen |= (int(b) & 0x7F) << shift if b < 0x80 { break } } - intStringLen := int(stringLen) - if intStringLen < 0 { + if msglen < 0 { return ErrInvalidLengthMcp } - postIndex := iNdEx + intStringLen + postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } - m.TypeUrl = string(dAtA[iNdEx:postIndex]) + if m.InitialResourceVersions == nil { + m.InitialResourceVersions = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return 
ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthMcp + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthMcp + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipMcp(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthMcp + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.InitialResourceVersions[mapkey] = mapvalue iNdEx = postIndex case 4: if wireType != 2 { @@ -1405,7 +2619,7 @@ func (m *MeshConfigRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *MeshConfigResponse) Unmarshal(dAtA []byte) error { +func (m *IncrementalMeshConfigResponse) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -1428,15 +2642,15 @@ func (m *MeshConfigResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: MeshConfigResponse: wiretype end group for non-group") 
+ return fmt.Errorf("proto: IncrementalMeshConfigResponse: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: MeshConfigResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: IncrementalMeshConfigResponse: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field VersionInfo", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SystemVersionInfo", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -1461,11 +2675,11 @@ func (m *MeshConfigResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.VersionInfo = string(dAtA[iNdEx:postIndex]) + m.SystemVersionInfo = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Envelopes", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Resources", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -1489,14 +2703,14 @@ func (m *MeshConfigResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Envelopes = append(m.Envelopes, Envelope{}) - if err := m.Envelopes[len(m.Envelopes)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Resources = append(m.Resources, Resource{}) + if err := m.Resources[len(m.Resources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 3: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TypeUrl", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field RemovedResources", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -1521,7 +2735,7 @@ func (m *MeshConfigResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.TypeUrl = string(dAtA[iNdEx:postIndex]) + m.RemovedResources = 
append(m.RemovedResources, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex case 4: if wireType != 2 { @@ -1573,7 +2787,7 @@ func (m *MeshConfigResponse) Unmarshal(dAtA []byte) error { } return nil } -func (m *IncrementalMeshConfigRequest) Unmarshal(dAtA []byte) error { +func (m *RequestResources) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -1596,15 +2810,15 @@ func (m *IncrementalMeshConfigRequest) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: IncrementalMeshConfigRequest: wiretype end group for non-group") + return fmt.Errorf("proto: RequestResources: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: IncrementalMeshConfigRequest: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: RequestResources: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Client", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field SinkNode", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -1628,16 +2842,16 @@ func (m *IncrementalMeshConfigRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - if m.Client == nil { - m.Client = &Client{} + if m.SinkNode == nil { + m.SinkNode = &SinkNode{} } - if err := m.Client.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.SinkNode.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field TypeUrl", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Collection", wireType) } var stringLen uint64 for shift := uint(0); ; shift += 7 { @@ -1662,7 +2876,7 @@ func (m *IncrementalMeshConfigRequest) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - 
m.TypeUrl = string(dAtA[iNdEx:postIndex]) + m.Collection = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex case 3: if wireType != 2 { @@ -1865,7 +3079,7 @@ func (m *IncrementalMeshConfigRequest) Unmarshal(dAtA []byte) error { } return nil } -func (m *IncrementalMeshConfigResponse) Unmarshal(dAtA []byte) error { +func (m *Resources) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -1888,10 +3102,10 @@ func (m *IncrementalMeshConfigResponse) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: IncrementalMeshConfigResponse: wiretype end group for non-group") + return fmt.Errorf("proto: Resources: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: IncrementalMeshConfigResponse: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: Resources: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -1925,7 +3139,36 @@ func (m *IncrementalMeshConfigResponse) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Envelopes", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Collection", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMcp + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthMcp + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Collection = string(dAtA[iNdEx:postIndex]) + iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Resources", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -1949,12 +3192,12 @@ 
func (m *IncrementalMeshConfigResponse) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Envelopes = append(m.Envelopes, Envelope{}) - if err := m.Envelopes[len(m.Envelopes)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + m.Resources = append(m.Resources, Resource{}) + if err := m.Resources[len(m.Resources)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex - case 3: + case 4: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field RemovedResources", wireType) } @@ -1983,7 +3226,7 @@ func (m *IncrementalMeshConfigResponse) Unmarshal(dAtA []byte) error { } m.RemovedResources = append(m.RemovedResources, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex - case 4: + case 5: if wireType != 2 { return fmt.Errorf("proto: wrong wireType = %d for field Nonce", wireType) } @@ -2141,45 +3384,53 @@ var ( func init() { proto.RegisterFile("mcp/v1alpha1/mcp.proto", fileDescriptorMcp) } var fileDescriptorMcp = []byte{ - // 635 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xb4, 0x54, 0x41, 0x4f, 0x13, 0x4f, - 0x14, 0x67, 0xb6, 0xc0, 0x1f, 0xa6, 0xfc, 0x09, 0x8c, 0x44, 0xb6, 0x05, 0x2b, 0x36, 0xc1, 0x34, - 0x21, 0xd9, 0x85, 0x12, 0x13, 0xe3, 0x49, 0x41, 0x0e, 0x98, 0xe0, 0x61, 0x1b, 0x39, 0x78, 0xd9, - 0x0c, 0xdb, 0xd7, 0x65, 0xe2, 0xee, 0xcc, 0x3a, 0x33, 0xdb, 0xa4, 0x1f, 0xc1, 0xf8, 0x19, 0x3c, - 0x79, 0x31, 0x7e, 0x12, 0x8e, 0x1e, 0x3c, 0x1b, 0x53, 0xbf, 0x88, 0xd9, 0xdd, 0x29, 0x2d, 0xe9, - 0x82, 0x62, 0xf4, 0x36, 0xf3, 0xde, 0x6f, 0xde, 0xfb, 0xfd, 0x7e, 0xef, 0x65, 0xf0, 0xdd, 0x38, - 0x48, 0xdc, 0xfe, 0x1e, 0x8d, 0x92, 0x73, 0xba, 0xe7, 0xc6, 0x41, 0xe2, 0x24, 0x52, 0x68, 0x41, - 0x08, 0x53, 0x9a, 0x09, 0x27, 0x0b, 0x8c, 0xb2, 0xf5, 0xcd, 0x50, 0x88, 0x30, 0x02, 0x37, 0x47, - 0x9c, 0xa5, 0x3d, 0x57, 0x69, 0x99, 0x06, 0xba, 0x78, 0x51, 0x5f, 0x37, 0x59, 0x99, 0x04, 0xae, - 0xd2, 0x54, 0xa7, 0xca, 0x24, 0xd6, 0x42, 0x11, 0x8a, 0xfc, 
0xe8, 0x66, 0x27, 0x13, 0xdd, 0xb8, - 0xd2, 0x18, 0x78, 0x1f, 0x22, 0x91, 0x40, 0x91, 0x6c, 0x9e, 0xe0, 0xf9, 0xc3, 0x88, 0x01, 0xd7, - 0x64, 0x19, 0x5b, 0xac, 0x6b, 0xa3, 0x2d, 0xd4, 0x5a, 0xf4, 0x2c, 0xd6, 0x25, 0xfb, 0x78, 0x21, - 0x06, 0x4d, 0xbb, 0x54, 0x53, 0xdb, 0xda, 0x42, 0xad, 0x6a, 0x7b, 0xdd, 0x29, 0x1a, 0x3b, 0x23, - 0x5a, 0x4e, 0x27, 0xa7, 0xe5, 0x5d, 0x02, 0x9b, 0x3f, 0x10, 0x5e, 0x3d, 0x01, 0x75, 0x7e, 0x28, - 0x78, 0x8f, 0x85, 0x1e, 0xbc, 0x4d, 0x41, 0x69, 0xf2, 0x00, 0x2f, 0xf5, 0x41, 0x2a, 0x26, 0xb8, - 0xcf, 0x78, 0x4f, 0x98, 0x26, 0x55, 0x13, 0x3b, 0xe6, 0x3d, 0x41, 0xda, 0x78, 0x3e, 0xc8, 0x79, - 0x98, 0x5e, 0x75, 0x67, 0xda, 0x16, 0xa7, 0x60, 0xea, 0x19, 0x24, 0xa9, 0xe1, 0x05, 0x3d, 0x48, - 0xc0, 0x4f, 0x65, 0x64, 0x57, 0xf2, 0x92, 0xff, 0x65, 0xf7, 0x57, 0x32, 0x22, 0xdb, 0x78, 0x59, - 0x82, 0x4a, 0x04, 0x57, 0xe0, 0x73, 0xc1, 0x03, 0xb0, 0x67, 0x73, 0xc0, 0xff, 0xa3, 0xe8, 0xcb, - 0x2c, 0x48, 0x1e, 0xe1, 0x25, 0x90, 0x52, 0x48, 0xbf, 0x0b, 0x9a, 0xb2, 0xc8, 0x9e, 0xcb, 0x7b, - 0x93, 0x91, 0x4e, 0x99, 0x04, 0x4e, 0x27, 0x37, 0xd8, 0xab, 0xe6, 0xb8, 0xe7, 0x39, 0xac, 0xf9, - 0x19, 0x61, 0x32, 0xa9, 0xb2, 0x28, 0xf9, 0x3b, 0x32, 0x9f, 0xe2, 0xc5, 0xd1, 0x00, 0x94, 0x6d, - 0x6d, 0x55, 0x5a, 0xd5, 0xf6, 0x66, 0x99, 0xd2, 0x23, 0x03, 0x3a, 0x98, 0xbd, 0xf8, 0x76, 0x7f, - 0xc6, 0x1b, 0x3f, 0xba, 0x49, 0xf4, 0x1a, 0x9e, 0x9b, 0xd4, 0x5a, 0x5c, 0x9a, 0x1f, 0x2b, 0x78, - 0xf3, 0x98, 0x07, 0x12, 0x62, 0xe0, 0x9a, 0x46, 0xd3, 0xd3, 0x19, 0x5b, 0x8f, 0xfe, 0xc8, 0x7a, - 0xeb, 0x2a, 0x8b, 0x77, 0x08, 0xd7, 0x18, 0x67, 0x9a, 0xd1, 0xc8, 0x97, 0xa0, 0x44, 0x2a, 0x03, - 0xf0, 0x8d, 0x07, 0xca, 0xae, 0xe4, 0x9a, 0x4f, 0xca, 0x5a, 0xdc, 0x44, 0xd2, 0x39, 0x2e, 0x2a, - 0x7a, 0xa6, 0xe0, 0xa9, 0xa9, 0x77, 0xc4, 0xb5, 0x1c, 0x78, 0xeb, 0xac, 0x3c, 0xfb, 0x6f, 0xd7, - 0xa0, 0xfe, 0x22, 0x33, 0xf6, 0x7a, 0x5a, 0x64, 0x05, 0x57, 0xde, 0xc0, 0xc0, 0xac, 0x41, 0x76, - 0xcc, 0x26, 0xd4, 0xa7, 0x51, 0x0a, 0xc6, 0xb3, 0xe2, 0xf2, 0xc4, 0x7a, 0x8c, 0x9a, 0x5f, 0x11, - 
0xbe, 0x77, 0x8d, 0x01, 0x66, 0xbb, 0x1c, 0x7c, 0x47, 0x0d, 0x94, 0x86, 0xd8, 0x2f, 0x59, 0xb2, - 0xd5, 0x22, 0x75, 0xfa, 0x57, 0x57, 0x6d, 0x07, 0xaf, 0x4a, 0x88, 0x45, 0x1f, 0xba, 0x97, 0x83, - 0x2c, 0x06, 0xb8, 0xe8, 0xad, 0x98, 0xc4, 0x48, 0xb8, 0x2a, 0x5f, 0xbe, 0xf6, 0x07, 0x0b, 0x6f, - 0x3c, 0x0b, 0x43, 0x09, 0x21, 0xd5, 0xd0, 0x1d, 0xab, 0xea, 0x80, 0xec, 0xb3, 0x00, 0x48, 0x82, - 0x6b, 0x1d, 0x2d, 0x81, 0xc6, 0x63, 0xd0, 0xb8, 0xe4, 0x76, 0x19, 0xdd, 0xa9, 0xd5, 0xa8, 0x3f, - 0xfc, 0x15, 0xac, 0x30, 0xb0, 0x39, 0xd3, 0x42, 0xbb, 0x88, 0xbc, 0x47, 0xb8, 0x31, 0x61, 0x74, - 0x59, 0xdf, 0xdd, 0xdb, 0x6e, 0x67, 0x7d, 0xef, 0x16, 0x2f, 0x26, 0xd9, 0x1c, 0xec, 0x7c, 0x1a, - 0x36, 0xd0, 0xc5, 0xb0, 0x81, 0xbe, 0x0c, 0x1b, 0xe8, 0xfb, 0xb0, 0x81, 0x5e, 0xd7, 0x8a, 0x4a, - 0x4c, 0xb8, 0x34, 0x61, 0xee, 0xe4, 0xd7, 0x7d, 0x36, 0x9f, 0xff, 0xbb, 0xfb, 0x3f, 0x03, 0x00, - 0x00, 0xff, 0xff, 0x7b, 0x9d, 0xce, 0x14, 0x4a, 0x06, 0x00, 0x00, + // 755 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x56, 0x4f, 0x4f, 0x1b, 0x47, + 0x14, 0x67, 0x6c, 0xdc, 0xe2, 0x67, 0x8a, 0xcc, 0x14, 0x15, 0x7b, 0x01, 0x97, 0x5a, 0xa5, 0x42, + 0x42, 0x5d, 0x83, 0xab, 0x4a, 0x6d, 0x0f, 0x55, 0xa1, 0xe5, 0x40, 0x25, 0xa8, 0xb4, 0x56, 0x39, + 0xe4, 0xb2, 0x5a, 0x76, 0x87, 0x65, 0xe4, 0xf5, 0xcc, 0x66, 0x66, 0x6c, 0xc9, 0x87, 0x7c, 0x80, + 0x28, 0xf7, 0xdc, 0x72, 0x8f, 0xc8, 0x17, 0xe1, 0x98, 0x43, 0xce, 0x11, 0xf2, 0x31, 0x9f, 0x22, + 0xda, 0x7f, 0xec, 0x3a, 0x2c, 0x36, 0x4e, 0xc8, 0x25, 0x97, 0xd5, 0xcc, 0x7b, 0x6f, 0x7e, 0xef, + 0xcf, 0xef, 0x37, 0xa3, 0x85, 0xef, 0x7a, 0xb6, 0xdf, 0x1a, 0xec, 0x59, 0x9e, 0x7f, 0x61, 0xed, + 0xb5, 0x7a, 0xb6, 0xaf, 0xfb, 0x82, 0x2b, 0x8e, 0x31, 0x95, 0x8a, 0x72, 0x3d, 0x30, 0x24, 0x5e, + 0x6d, 0xd5, 0xe5, 0xdc, 0xf5, 0x48, 0x4b, 0xf8, 0x76, 0x4b, 0x2a, 0x4b, 0xf5, 0x65, 0x14, 0xac, + 0xad, 0xb8, 0xdc, 0xe5, 0xe1, 0xb2, 0x15, 0xac, 0x62, 0xeb, 0xda, 0x18, 0xb4, 0x20, 0x92, 0xf7, + 0x85, 0x4d, 0x22, 
0x67, 0xf3, 0x15, 0x82, 0x85, 0x0e, 0x65, 0xdd, 0x13, 0xee, 0x10, 0xbc, 0x04, + 0x05, 0xea, 0xd4, 0xd0, 0x26, 0xda, 0x2e, 0x1b, 0x05, 0xea, 0xe0, 0xff, 0xa0, 0x62, 0x31, 0xc6, + 0x95, 0xa5, 0x28, 0x67, 0xb2, 0x56, 0xd8, 0x2c, 0x6e, 0x57, 0xda, 0x3f, 0xeb, 0xb7, 0x4b, 0xd2, + 0x13, 0x08, 0x7d, 0x3f, 0x8d, 0x3f, 0x64, 0x4a, 0x0c, 0x8d, 0x2c, 0x82, 0xf6, 0x27, 0x54, 0x3f, + 0x0c, 0xc0, 0x55, 0x28, 0x76, 0xc9, 0x30, 0xce, 0x1a, 0x2c, 0xf1, 0x0a, 0x94, 0x06, 0x96, 0xd7, + 0x27, 0xb5, 0x42, 0x68, 0x8b, 0x36, 0x7f, 0x14, 0x7e, 0x43, 0xcd, 0x77, 0x08, 0x96, 0x8f, 0x89, + 0xbc, 0xf8, 0x9b, 0xb3, 0x73, 0xea, 0x1a, 0xe4, 0x71, 0x9f, 0x48, 0x85, 0x7f, 0x80, 0xc5, 0x01, + 0x11, 0x92, 0x72, 0x66, 0x52, 0x76, 0xce, 0x63, 0xa8, 0x4a, 0x6c, 0x3b, 0x62, 0xe7, 0x1c, 0xff, + 0x0e, 0x65, 0x49, 0x59, 0xd7, 0x64, 0xdc, 0x89, 0x60, 0x2b, 0xed, 0xf5, 0x49, 0x7d, 0x18, 0x0b, + 0x32, 0x19, 0x4a, 0x1d, 0x16, 0xd4, 0xd0, 0x27, 0x66, 0x5f, 0x78, 0xb5, 0x62, 0x88, 0xfc, 0x75, + 0xb0, 0xff, 0x5f, 0x78, 0x78, 0x0b, 0x96, 0x04, 0x91, 0x3e, 0x67, 0x92, 0x98, 0x8c, 0x33, 0x9b, + 0xd4, 0xe6, 0xc3, 0x80, 0x6f, 0x12, 0xeb, 0x49, 0x60, 0xc4, 0xbf, 0xc2, 0x22, 0x11, 0x82, 0x0b, + 0xd3, 0x21, 0xca, 0xa2, 0x5e, 0xad, 0x14, 0xe6, 0xc7, 0x7a, 0x44, 0xa3, 0x2e, 0x7c, 0x5b, 0xef, + 0x84, 0x34, 0x1a, 0x95, 0x30, 0xee, 0x9f, 0x30, 0xac, 0x79, 0x89, 0x00, 0x67, 0x9b, 0x8d, 0x20, + 0xef, 0xd3, 0xed, 0x5f, 0x50, 0x4e, 0x68, 0x4e, 0x58, 0xcb, 0xed, 0xd6, 0x88, 0x83, 0x0e, 0xe6, + 0xaf, 0xde, 0x7e, 0x3f, 0x67, 0xa4, 0x87, 0x26, 0x35, 0xbd, 0x02, 0xa5, 0x6c, 0xaf, 0xd1, 0xa6, + 0x79, 0x59, 0x84, 0xf5, 0x23, 0x66, 0x0b, 0xd2, 0x23, 0x4c, 0x59, 0xde, 0x6d, 0x92, 0xc6, 0x18, + 0x40, 0x1f, 0xcd, 0x40, 0x61, 0xbc, 0x98, 0xa7, 0x08, 0xea, 0x94, 0x51, 0x45, 0x2d, 0xcf, 0x4c, + 0xaa, 0x37, 0xe3, 0x51, 0xc8, 0x5a, 0x31, 0x6c, 0xfd, 0x38, 0x2f, 0xcd, 0xa4, 0x5a, 0xf5, 0xa3, + 0x08, 0x31, 0x19, 0xcf, 0x69, 0x8c, 0x17, 0x09, 0x7a, 0x95, 0xe6, 0x7b, 0x3f, 0xaf, 0x1a, 0xb4, + 0x7f, 0x83, 0xf9, 0xde, 0x5d, 0xd6, 0x4c, 0xd7, 0xe8, 
0x0d, 0x82, 0x8d, 0x3b, 0x06, 0x10, 0x8b, + 0x4c, 0x87, 0x6f, 0xe5, 0x50, 0x2a, 0xd2, 0x33, 0x73, 0xb4, 0xb6, 0x1c, 0xb9, 0x4e, 0x1f, 0x54, + 0x71, 0x3b, 0xb0, 0x2c, 0x48, 0x8f, 0x0f, 0x88, 0x63, 0xa6, 0x48, 0x01, 0x81, 0x65, 0xa3, 0x1a, + 0x3b, 0x8c, 0x9b, 0xe0, 0x7c, 0x0d, 0x3e, 0x2f, 0x42, 0x35, 0xa6, 0x30, 0x0d, 0xfd, 0x04, 0xdd, + 0x35, 0x00, 0x6c, 0xee, 0x79, 0xc4, 0x0e, 0x5e, 0xab, 0x78, 0x8a, 0x19, 0x0b, 0x7e, 0x32, 0x5d, + 0x7b, 0xfb, 0xf9, 0x43, 0x18, 0xaf, 0xf1, 0x8b, 0xd7, 0xdb, 0x35, 0x82, 0x72, 0xca, 0xc8, 0xac, + 0xda, 0x9a, 0x46, 0xc3, 0x98, 0xf6, 0x8a, 0x0f, 0xa6, 0xbd, 0xf9, 0x69, 0xda, 0x2b, 0x65, 0xb4, + 0xd7, 0x7e, 0x51, 0x80, 0xb5, 0x7d, 0xd7, 0x15, 0xc4, 0xb5, 0x14, 0x71, 0xd2, 0x1b, 0xd5, 0x21, + 0x62, 0x40, 0x6d, 0x82, 0x7d, 0xa8, 0x77, 0x94, 0x20, 0x56, 0x2f, 0x0d, 0x4a, 0x21, 0xb7, 0xf2, + 0xca, 0xbd, 0xf5, 0x2c, 0x69, 0x3f, 0x4d, 0x0b, 0x8b, 0x68, 0x6f, 0xce, 0x6d, 0xa3, 0x5d, 0x84, + 0x9f, 0x21, 0x68, 0x64, 0x2e, 0x79, 0x5e, 0xde, 0xdd, 0x59, 0x5f, 0x46, 0x6d, 0x6f, 0x86, 0x13, + 0xd9, 0x6a, 0xda, 0x03, 0x58, 0x4a, 0xf2, 0x76, 0xc2, 0x2f, 0x76, 0x60, 0xf5, 0x50, 0x2a, 0xeb, + 0xcc, 0xa3, 0xf2, 0xe2, 0xc6, 0x15, 0x8e, 0x08, 0xff, 0x78, 0x9f, 0x5b, 0xa3, 0x6d, 0x4c, 0x22, + 0x59, 0xc6, 0x79, 0x15, 0x2c, 0xde, 0x80, 0x53, 0xd6, 0x9d, 0x94, 0x75, 0x32, 0x9e, 0x76, 0xaf, + 0xa2, 0xa2, 0xac, 0x07, 0x3b, 0x2f, 0x47, 0x0d, 0x74, 0x35, 0x6a, 0xa0, 0xd7, 0xa3, 0x06, 0xba, + 0x1e, 0x35, 0xd0, 0xa3, 0x7a, 0x74, 0x98, 0xf2, 0x96, 0xe5, 0xd3, 0x56, 0xf6, 0x8f, 0xec, 0xec, + 0xab, 0xf0, 0x4f, 0xec, 0x97, 0xf7, 0x01, 0x00, 0x00, 0xff, 0xff, 0xa7, 0xf2, 0x04, 0xf7, 0x03, + 0x0a, 0x00, 0x00, } diff --git a/vendor/istio.io/api/mcp/v1alpha1/mcp.proto b/vendor/istio.io/api/mcp/v1alpha1/mcp.proto index 9730430c9cc8..777f6ae0c656 100644 --- a/vendor/istio.io/api/mcp/v1alpha1/mcp.proto +++ b/vendor/istio.io/api/mcp/v1alpha1/mcp.proto @@ -16,25 +16,24 @@ syntax = "proto3"; package istio.mcp.v1alpha1; -import "google/protobuf/struct.proto"; import 
"google/rpc/status.proto"; import "gogoproto/gogo.proto"; -import "mcp/v1alpha1/envelope.proto"; +import "mcp/v1alpha1/resource.proto"; option go_package="istio.io/api/mcp/v1alpha1"; option (gogoproto.equal_all) = true; -// Identifies a specific MCP client instance. The client identifier is -// presented to the management server, which may use this identifier -// to distinguish per client configuration for serving. This -// information is not authoriative. Authoritative identity should come +// Identifies a specific MCP sink node instance. The node identifier is +// presented to the resource source, which may use this identifier +// to distinguish per sink configuration for serving. This +// information is not authoritative. Authoritative identity should come // from the underlying transport layer (e.g. rpc credentials). -message Client { - // An opaque identifier for the MCP client. +message SinkNode { + // An opaque identifier for the MCP node. string id = 1; - // Opaque metadata extending the client identifier. - google.protobuf.Struct metadata = 2; + // Opaque annotations extending the node identifier. + map annotations = 2; } // A MeshConfigRequest requests a set of versioned resources of the @@ -50,8 +49,8 @@ message MeshConfigRequest { // below) has an independent version associated with it. string version_info = 1; - // The client making the request. - Client client = 2; + // The sink node making the request. + SinkNode sink_node = 2; // Type of the resource that is being requested, e.g. // "type.googleapis.com/istio.io.networking.v1alpha3.VirtualService". @@ -78,13 +77,13 @@ message MeshConfigResponse { // The version of the response data. string version_info = 1; - // The response resources wrapped in the common MCP *Envelope* + // The response resources wrapped in the common MCP *Resource* // message. 
- repeated Envelope envelopes = 2 [(gogoproto.nullable) = false]; + repeated Resource resources = 2 [(gogoproto.nullable) = false]; - // Type URL for resources wrapped in the provided envelope(s). This + // Type URL for resources wrapped in the provided resources(s). This // must be consistent with the type_url in the wrapper messages if - // envelopes is non-empty. + // resources is non-empty. string type_url = 3; // The nonce provides a way to explicitly ack a specific @@ -106,8 +105,8 @@ message MeshConfigResponse { // In this case the response_nonce is set to the nonce value in the Response. // ACK or NACK is determined by the absence or presence of error_detail. message IncrementalMeshConfigRequest { - // The client making the request. - Client client = 1; + // The sink node making the request. + SinkNode sink_node = 1; // Type of the resource that is being requested, e.g. // "type.googleapis.com/istio.io.networking.v1alpha3.VirtualService". @@ -149,10 +148,10 @@ message IncrementalMeshConfigResponse { // The version of the response data (used for debugging). string system_version_info = 1; - // The response resources wrapped in the common MCP *Envelope* + // The response resources wrapped in the common MCP *Resource* // message. These are typed resources that match the type url in the // IncrementalMeshConfigRequest. - repeated Envelope envelopes = 2 [(gogoproto.nullable) = false]; + repeated Resource resources = 2 [(gogoproto.nullable) = false]; // Resources names of resources that have be deleted and to be // removed from the MCP Client. Removed resources for missing @@ -184,3 +183,106 @@ service AggregatedMeshConfigService { returns (stream IncrementalMeshConfigResponse) { } } + +// A RequestResource can be sent in two situations: +// +// Initial message in an MCP bidirectional change stream +// as an ACK or NACK response to a previous Resources. In +// this case the response_nonce is set to the nonce value +// in the Resources. 
ACK/NACK is determined by the presence +// of error_detail. +// +// * ACK (nonce!="",error_details==nil) +// * NACK (nonce!="",error_details!=nil) +// * New/Update request (nonce=="",error_details ignored) +// +message RequestResources { + // The sink node making the request. + SinkNode sink_node = 1; + + // Type of resource collection that is being requested, e.g. + // + // istio/networking/v1alpha3/VirtualService + // k8s// + string collection = 2; + + // When the RequestResources is the first in a stream, the initial_resource_versions must + // be populated. Otherwise, initial_resource_versions must be omitted. The keys are the + // resources names of the MCP resources known to the MCP client. The values in the map + // are the associated resource level version info. + map initial_resource_versions = 3; + + // When the RequestResources is an ACK or NACK message in response to a previous RequestResources, + // the response_nonce must be the nonce in the RequestResources. Otherwise response_nonce must + // be omitted. + string response_nonce = 4; + + // This is populated when the previously received resources could not be applied + // The *message* field in *error_details* provides the source internal error + // related to the failure. + google.rpc.Status error_detail = 5; +} + +// Resources do not need to include a full snapshot of the tracked +// resources. Instead they are a diff to the state of a MCP client. +// Per resource versions allow sources and sinks to track state at +// the resource granularity. An MCP incremental session is always +// in the context of a gRPC bidirectional stream. This allows the +// MCP source to keep track of the state of MCP sink connected to +// it. +// +// In Incremental MCP the nonce field is required and used to pair +// Resources to an RequestResources ACK or NACK. +message Resources { + // The version of the response data (used for debugging). 
+ string system_version_info = 1; + + // Type of resource collection that is being requested, e.g. + // + // istio/networking/v1alpha3/VirtualService + // k8s// + string collection = 2; + + // The response resources wrapped in the common MCP *Resource* message. + // These are typed resources that match the type url in the + // RequestResources message. + repeated Resource resources = 3 [(gogoproto.nullable) = false]; + + // Names of resources that have been deleted and to be + // removed from the MCP sink node. Removed resources for missing + // resources can be ignored. + repeated string removed_resources = 4; + + // Required. The nonce provides a way for RequestChange to uniquely + // reference a RequestResources. + string nonce = 5; +} + +// ResourceSource and ResourceSink services are semantically +// equivalent with regards to the message exchange. The only meaningful +// difference is who initiates the connection and opens the stream. The +// following high-level overview applies to both service variants. +// +// After the connection and streams have been established, the sink sends +// a RequestResource messages to request the initial set of resources. The +// source sends a Resource message when new resources are available for the +// requested type. In response, the sink sends another RequestResource +// to ACK/NACK the received resources and request the next set of resources. + +// Service where the sink is the gRPC client. The sink is responsible for +// initiating connections and opening streams. +service ResourceSource { + // The sink, acting as gRPC client, establishes a new resource stream + // with the source. The sink sends RequestResources message to + // and receives Resources messages from the source. + rpc EstablishResourceStream(stream RequestResources) returns (stream Resources) {} +} + +// Service where the source is the gRPC client. The source is responsible for +// initiating connections and opening streams. 
+service ResourceSink { + // The source, acting as gRPC client, establishes a new resource stream + // with the sink. The sink sends RequestResources message to and + // receives Resources messages from the source. + rpc EstablishResourceStream(stream Resources) returns (stream RequestResources) {} +} diff --git a/vendor/istio.io/api/mcp/v1alpha1/metadata.pb.go b/vendor/istio.io/api/mcp/v1alpha1/metadata.pb.go index 97555f1cf261..bdd4632a26ee 100644 --- a/vendor/istio.io/api/mcp/v1alpha1/metadata.pb.go +++ b/vendor/istio.io/api/mcp/v1alpha1/metadata.pb.go @@ -8,6 +8,7 @@ import fmt "fmt" import math "math" import _ "github.com/gogo/protobuf/gogoproto" import google_protobuf2 "github.com/gogo/protobuf/types" +import _ "github.com/gogo/protobuf/types" import io "io" @@ -18,16 +19,50 @@ var _ = math.Inf // Metadata information that all resources within the Mesh Configuration Protocol must have. type Metadata struct { - // The name of the resource. It is unique within the context of a - // resource type and the origin server of the resource. The resource - // type is identified by the TypeUrl of the resource field of the - // Envelope message. + // Fully qualified name of the resource. Unique in context of a collection. + // + // The fully qualified name consists of a directory and basename. The directory identifies + // the resources location in a resource hierarchy. The basename identifies the specific + // resource name within the context of that directory. + // + // The directory and basename are composed of one or more segments. Segments must be + // valid [DNS labels](https://tools.ietf.org/html/rfc1123). “/” is the delimiter between + // segments + // + // The rightmost segment is the basename. All segments to the + // left of the basename form the directory. Segments moving towards the left + // represent higher positions in the resource hierarchy, similar to reverse + // DNS notation. e.g. 
+ // + // //// + // + // An empty directory indicates a resource that is located at the root of the + // hierarchy, e.g. + // + // / + // + // On Kubernetes the resource hierarchy is two-levels: namespaces and + // cluster-scoped (i.e. global). + // + // Namespace resources fully qualified name is of the form: + // + // "//" + // + // Cluster scoped resources are located at the root of the hierarchy and are of the form: + // + // "/" Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` // The creation timestamp of the resource. CreateTime *google_protobuf2.Timestamp `protobuf:"bytes,2,opt,name=create_time,json=createTime" json:"create_time,omitempty"` - // The resource level version. It allows MCP to track the state of - // individual resources. + // Resource version. This is used to determine when resources change across + // resource updates. It should be treated as opaque by consumers/sinks. Version string `protobuf:"bytes,3,opt,name=version,proto3" json:"version,omitempty"` + // Map of string keys and values that can be used to organize and categorize + // resources within a collection. + Labels map[string]string `protobuf:"bytes,4,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` + // Map of string keys and values that can be used by source and sink to communicate + // arbitrary metadata about this resource. 
+ Annotations map[string]string `protobuf:"bytes,5,rep,name=annotations" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` } func (m *Metadata) Reset() { *m = Metadata{} } @@ -56,6 +91,20 @@ func (m *Metadata) GetVersion() string { return "" } +func (m *Metadata) GetLabels() map[string]string { + if m != nil { + return m.Labels + } + return nil +} + +func (m *Metadata) GetAnnotations() map[string]string { + if m != nil { + return m.Annotations + } + return nil +} + func init() { proto.RegisterType((*Metadata)(nil), "istio.mcp.v1alpha1.Metadata") } @@ -87,6 +136,22 @@ func (this *Metadata) Equal(that interface{}) bool { if this.Version != that1.Version { return false } + if len(this.Labels) != len(that1.Labels) { + return false + } + for i := range this.Labels { + if this.Labels[i] != that1.Labels[i] { + return false + } + } + if len(this.Annotations) != len(that1.Annotations) { + return false + } + for i := range this.Annotations { + if this.Annotations[i] != that1.Annotations[i] { + return false + } + } return true } func (m *Metadata) Marshal() (dAtA []byte, err error) { @@ -126,6 +191,40 @@ func (m *Metadata) MarshalTo(dAtA []byte) (int, error) { i = encodeVarintMetadata(dAtA, i, uint64(len(m.Version))) i += copy(dAtA[i:], m.Version) } + if len(m.Labels) > 0 { + for k, _ := range m.Labels { + dAtA[i] = 0x22 + i++ + v := m.Labels[k] + mapSize := 1 + len(k) + sovMetadata(uint64(len(k))) + 1 + len(v) + sovMetadata(uint64(len(v))) + i = encodeVarintMetadata(dAtA, i, uint64(mapSize)) + dAtA[i] = 0xa + i++ + i = encodeVarintMetadata(dAtA, i, uint64(len(k))) + i += copy(dAtA[i:], k) + dAtA[i] = 0x12 + i++ + i = encodeVarintMetadata(dAtA, i, uint64(len(v))) + i += copy(dAtA[i:], v) + } + } + if len(m.Annotations) > 0 { + for k, _ := range m.Annotations { + dAtA[i] = 0x2a + i++ + v := m.Annotations[k] + mapSize := 1 + len(k) + sovMetadata(uint64(len(k))) + 1 + len(v) + sovMetadata(uint64(len(v))) + i = 
encodeVarintMetadata(dAtA, i, uint64(mapSize)) + dAtA[i] = 0xa + i++ + i = encodeVarintMetadata(dAtA, i, uint64(len(k))) + i += copy(dAtA[i:], k) + dAtA[i] = 0x12 + i++ + i = encodeVarintMetadata(dAtA, i, uint64(len(v))) + i += copy(dAtA[i:], v) + } + } return i, nil } @@ -153,6 +252,22 @@ func (m *Metadata) Size() (n int) { if l > 0 { n += 1 + l + sovMetadata(uint64(l)) } + if len(m.Labels) > 0 { + for k, v := range m.Labels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovMetadata(uint64(len(k))) + 1 + len(v) + sovMetadata(uint64(len(v))) + n += mapEntrySize + 1 + sovMetadata(uint64(mapEntrySize)) + } + } + if len(m.Annotations) > 0 { + for k, v := range m.Annotations { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovMetadata(uint64(len(k))) + 1 + len(v) + sovMetadata(uint64(len(v))) + n += mapEntrySize + 1 + sovMetadata(uint64(mapEntrySize)) + } + } return n } @@ -289,6 +404,242 @@ func (m *Metadata) Unmarshal(dAtA []byte) error { } m.Version = string(dAtA[iNdEx:postIndex]) iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthMetadata + } + postIndex := iNdEx + msglen + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Labels == nil { + m.Labels = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + 
var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthMetadata + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthMetadata + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipMetadata(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthMetadata + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Labels[mapkey] = mapvalue + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Annotations", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthMetadata + } 
+ postIndex := iNdEx + msglen + if postIndex > l { + return io.ErrUnexpectedEOF + } + if m.Annotations == nil { + m.Annotations = make(map[string]string) + } + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthMetadata + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowMetadata + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthMetadata + } + postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipMetadata(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthMetadata + } + if 
(iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Annotations[mapkey] = mapvalue + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipMetadata(dAtA[iNdEx:]) @@ -418,19 +769,26 @@ var ( func init() { proto.RegisterFile("mcp/v1alpha1/metadata.proto", fileDescriptorMetadata) } var fileDescriptorMetadata = []byte{ - // 220 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x4c, 0x8f, 0xbf, 0x4e, 0xc5, 0x20, - 0x18, 0xc5, 0x83, 0x1a, 0xff, 0x70, 0x37, 0xe2, 0x80, 0x35, 0xc1, 0x1b, 0xa7, 0x26, 0x26, 0x90, - 0xea, 0xe8, 0xe6, 0xee, 0xd2, 0x38, 0xb9, 0x98, 0xaf, 0x15, 0x91, 0xa4, 0xf4, 0x23, 0x2d, 0xed, - 0x33, 0xf9, 0x28, 0x8e, 0x3e, 0x82, 0xe1, 0x49, 0x4c, 0x41, 0x92, 0xbb, 0x9d, 0x03, 0x3f, 0x7e, - 0xe4, 0xd0, 0x6b, 0xd7, 0x7b, 0xb5, 0x36, 0x30, 0xf8, 0x4f, 0x68, 0x94, 0xd3, 0x01, 0xde, 0x21, - 0x80, 0xf4, 0x13, 0x06, 0x64, 0xcc, 0xce, 0xc1, 0xa2, 0x74, 0xbd, 0x97, 0x05, 0xa9, 0x2e, 0x0d, - 0x1a, 0x4c, 0xd7, 0x6a, 0x4b, 0x99, 0xac, 0x6e, 0x0c, 0xa2, 0x19, 0xb4, 0x4a, 0xad, 0x5b, 0x3e, - 0x54, 0xb0, 0x4e, 0xcf, 0x01, 0x9c, 0xcf, 0xc0, 0xed, 0x42, 0xcf, 0x9f, 0xff, 0xe5, 0x8c, 0xd1, - 0x93, 0x11, 0x9c, 0xe6, 0x64, 0x4f, 0xea, 0x8b, 0x36, 0x65, 0xf6, 0x48, 0x77, 0xfd, 0xa4, 0x21, - 0xe8, 0xb7, 0xed, 0x25, 0x3f, 0xda, 0x93, 0x7a, 0x77, 0x5f, 0xc9, 0xac, 0x95, 0x45, 0x2b, 0x5f, - 0x8a, 0xb6, 0xa5, 0x19, 0xdf, 0x0e, 0x18, 0xa7, 0x67, 0xab, 0x9e, 0x66, 0x8b, 0x23, 0x3f, 0x4e, - 0xce, 0x52, 0x9f, 0xee, 0xbe, 0xa2, 0x20, 0xdf, 0x51, 0x90, 0x9f, 0x28, 0xc8, 0x6f, 0x14, 0xe4, - 0xf5, 0x2a, 0x6f, 0xb2, 0xa8, 0xc0, 0x5b, 0x75, 0xb8, 0xbe, 0x3b, 0x4d, 0xdf, 0x3c, 0xfc, 0x05, - 0x00, 0x00, 0xff, 0xff, 0x35, 0xb5, 0xd4, 0xb9, 0x14, 0x01, 0x00, 0x00, + // 335 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x92, 0xcf, 0x4a, 0xf3, 0x40, + 0x10, 0xc0, 0xd9, 0xa6, 0xed, 0xf7, 0x75, 0x73, 0x29, 0x4b, 0x0f, 0x31, 0x4a, 0x2c, 
0x9e, 0x02, + 0xe2, 0x2e, 0xad, 0x17, 0xff, 0x80, 0xa8, 0xe0, 0x4d, 0x11, 0x82, 0x27, 0x2f, 0x32, 0x8d, 0x6b, + 0x5c, 0x4c, 0xb2, 0x21, 0xbb, 0x2d, 0xf4, 0xec, 0xcb, 0xf8, 0x28, 0x1e, 0x7d, 0x04, 0xc9, 0x93, + 0x48, 0x76, 0x13, 0x0c, 0x15, 0x04, 0x6f, 0x33, 0x99, 0xdf, 0xfc, 0x26, 0x33, 0x2c, 0xde, 0xce, + 0xe2, 0x82, 0xad, 0x66, 0x90, 0x16, 0xcf, 0x30, 0x63, 0x19, 0xd7, 0xf0, 0x08, 0x1a, 0x68, 0x51, + 0x4a, 0x2d, 0x09, 0x11, 0x4a, 0x0b, 0x49, 0xb3, 0xb8, 0xa0, 0x2d, 0xe2, 0x4f, 0x12, 0x99, 0x48, + 0x53, 0x66, 0x75, 0x64, 0x49, 0x7f, 0x37, 0x91, 0x32, 0x49, 0x39, 0x33, 0xd9, 0x62, 0xf9, 0xc4, + 0xb4, 0xc8, 0xb8, 0xd2, 0x90, 0x15, 0x0d, 0xb0, 0xb3, 0x09, 0x28, 0x5d, 0x2e, 0x63, 0x6d, 0xab, + 0x7b, 0xaf, 0x0e, 0xfe, 0x7f, 0xd3, 0xcc, 0x26, 0x04, 0xf7, 0x73, 0xc8, 0xb8, 0x87, 0xa6, 0x28, + 0x1c, 0x45, 0x26, 0x26, 0xa7, 0xd8, 0x8d, 0x4b, 0x0e, 0x9a, 0x3f, 0xd4, 0x62, 0xaf, 0x37, 0x45, + 0xa1, 0x3b, 0xf7, 0xa9, 0x95, 0xd2, 0x56, 0x4a, 0xef, 0xda, 0xa9, 0x11, 0xb6, 0x78, 0xfd, 0x81, + 0x78, 0xf8, 0xdf, 0x8a, 0x97, 0x4a, 0xc8, 0xdc, 0x73, 0x8c, 0xb3, 0x4d, 0xc9, 0x39, 0x1e, 0xa6, + 0xb0, 0xe0, 0xa9, 0xf2, 0xfa, 0x53, 0x27, 0x74, 0xe7, 0x21, 0xfd, 0xb9, 0x31, 0x6d, 0x7f, 0x8c, + 0x5e, 0x1b, 0xf4, 0x2a, 0xd7, 0xe5, 0x3a, 0x6a, 0xfa, 0xc8, 0x2d, 0x76, 0x21, 0xcf, 0xa5, 0x06, + 0x2d, 0x64, 0xae, 0xbc, 0x81, 0xd1, 0x1c, 0xfc, 0xaa, 0xb9, 0xf8, 0xe6, 0xad, 0xab, 0x6b, 0xf0, + 0x8f, 0xb1, 0xdb, 0x99, 0x43, 0xc6, 0xd8, 0x79, 0xe1, 0xeb, 0xe6, 0x16, 0x75, 0x48, 0x26, 0x78, + 0xb0, 0x82, 0x74, 0x69, 0x8f, 0x30, 0x8a, 0x6c, 0x72, 0xd2, 0x3b, 0x42, 0xfe, 0x19, 0x1e, 0x6f, + 0xba, 0xff, 0xd2, 0x7f, 0xb9, 0xff, 0x56, 0x05, 0xe8, 0xbd, 0x0a, 0xd0, 0x47, 0x15, 0xa0, 0xcf, + 0x2a, 0x40, 0xf7, 0x5b, 0x76, 0x0f, 0x21, 0x19, 0x14, 0x82, 0x75, 0x9f, 0xca, 0x62, 0x68, 0x8e, + 0x7e, 0xf8, 0x15, 0x00, 0x00, 0xff, 0xff, 0xdf, 0x5a, 0x3a, 0xeb, 0x41, 0x02, 0x00, 0x00, } diff --git a/vendor/istio.io/api/mcp/v1alpha1/metadata.proto b/vendor/istio.io/api/mcp/v1alpha1/metadata.proto index 
823d99177410..e02c5da3bc3c 100644 --- a/vendor/istio.io/api/mcp/v1alpha1/metadata.proto +++ b/vendor/istio.io/api/mcp/v1alpha1/metadata.proto @@ -18,22 +18,59 @@ package istio.mcp.v1alpha1; import "gogoproto/gogo.proto"; import "google/protobuf/timestamp.proto"; +import "google/protobuf/struct.proto"; option go_package="istio.io/api/mcp/v1alpha1"; option (gogoproto.equal_all) = true; // Metadata information that all resources within the Mesh Configuration Protocol must have. message Metadata { - // The name of the resource. It is unique within the context of a - // resource type and the origin server of the resource. The resource - // type is identified by the TypeUrl of the resource field of the - // Envelope message. + // Fully qualified name of the resource. Unique in context of a collection. + // + // The fully qualified name consists of a directory and basename. The directory identifies + // the resource's location in a resource hierarchy. The basename identifies the specific + // resource name within the context of that directory. + // + // The directory and basename are composed of one or more segments. Segments must be + // valid [DNS labels](https://tools.ietf.org/html/rfc1123). "/" is the delimiter between + // segments. + // + // The rightmost segment is the basename. All segments to the + // left of the basename form the directory. Segments moving towards the left + // represent higher positions in the resource hierarchy, similar to reverse + // DNS notation. e.g. + // + //    /<segment>/<segment>/<segment>/<name> + // + // An empty directory indicates a resource that is located at the root of the + // hierarchy, e.g. + // + //    /<name> + // + // On Kubernetes the resource hierarchy is two levels: namespaces and + // cluster-scoped (i.e. global). + // + // A namespace resource's fully qualified name is of the form: + // + //    "/<namespace>/<name>" + // + // Cluster-scoped resources are located at the root of the hierarchy and are of the form: + // + //    "/<name>" string name = 1; // The creation timestamp of the resource.
google.protobuf.Timestamp create_time = 2; - // The resource level version. It allows MCP to track the state of - // individual resources. + // Resource version. This is used to determine when resources change across + // resource updates. It should be treated as opaque by consumers/sinks. string version = 3; + + // Map of string keys and values that can be used to organize and categorize + // resources within a collection. + map<string, string> labels = 4; + + // Map of string keys and values that can be used by source and sink to communicate + // arbitrary metadata about this resource. + map<string, string> annotations = 5; } diff --git a/vendor/istio.io/api/mcp/v1alpha1/envelope.pb.go b/vendor/istio.io/api/mcp/v1alpha1/resource.pb.go similarity index 56% rename from vendor/istio.io/api/mcp/v1alpha1/envelope.pb.go rename to vendor/istio.io/api/mcp/v1alpha1/resource.pb.go index 3eae73c78742..9ea11cf6d3e4 100644 --- a/vendor/istio.io/api/mcp/v1alpha1/envelope.pb.go +++ b/vendor/istio.io/api/mcp/v1alpha1/resource.pb.go @@ -1,25 +1,6 @@ // Code generated by protoc-gen-gogo. DO NOT EDIT. -// source: mcp/v1alpha1/envelope.proto +// source: mcp/v1alpha1/resource.proto -/* - Package v1alpha1 is a generated protocol buffer package. - - This package defines the common, core types used by the Mesh Configuration Protocol. - - It is generated from these files: - mcp/v1alpha1/envelope.proto - mcp/v1alpha1/mcp.proto - mcp/v1alpha1/metadata.proto - - It has these top-level messages: - Envelope - Client - MeshConfigRequest - MeshConfigResponse - IncrementalMeshConfigRequest - IncrementalMeshConfigResponse - Metadata -*/ package v1alpha1 import proto "github.com/gogo/protobuf/proto" @@ -35,51 +16,45 @@ var _ = proto.Marshal var _ = fmt.Errorf var _ = math.Inf -// This is a compile-time assertion to ensure that this generated file -// is compatible with the proto package it is being compiled against. -// A compilation error at this line likely means your copy of the -// proto package needs to be updated.
-const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package - -// Envelope for a configuration resource as transferred via the Mesh Configuration Protocol. -// Each envelope is made up of common metadata, and a type-specific resource payload. -type Envelope struct { +// Resource as transferred via the Mesh Configuration Protocol. Each +// resource is made up of common metadata, and a type-specific resource payload. +type Resource struct { // Common metadata describing the resource. Metadata *Metadata `protobuf:"bytes,1,opt,name=metadata" json:"metadata,omitempty"` - // The resource itself. - Resource *google_protobuf.Any `protobuf:"bytes,2,opt,name=resource" json:"resource,omitempty"` + // The primary payload for the resource. + Body *google_protobuf.Any `protobuf:"bytes,2,opt,name=body" json:"body,omitempty"` } -func (m *Envelope) Reset() { *m = Envelope{} } -func (m *Envelope) String() string { return proto.CompactTextString(m) } -func (*Envelope) ProtoMessage() {} -func (*Envelope) Descriptor() ([]byte, []int) { return fileDescriptorEnvelope, []int{0} } +func (m *Resource) Reset() { *m = Resource{} } +func (m *Resource) String() string { return proto.CompactTextString(m) } +func (*Resource) ProtoMessage() {} +func (*Resource) Descriptor() ([]byte, []int) { return fileDescriptorResource, []int{0} } -func (m *Envelope) GetMetadata() *Metadata { +func (m *Resource) GetMetadata() *Metadata { if m != nil { return m.Metadata } return nil } -func (m *Envelope) GetResource() *google_protobuf.Any { +func (m *Resource) GetBody() *google_protobuf.Any { if m != nil { - return m.Resource + return m.Body } return nil } func init() { - proto.RegisterType((*Envelope)(nil), "istio.mcp.v1alpha1.Envelope") + proto.RegisterType((*Resource)(nil), "istio.mcp.v1alpha1.Resource") } -func (this *Envelope) Equal(that interface{}) bool { +func (this *Resource) Equal(that interface{}) bool { if that == nil { return this == nil } - that1, ok := that.(*Envelope) + that1, 
ok := that.(*Resource) if !ok { - that2, ok := that.(Envelope) + that2, ok := that.(Resource) if ok { that1 = &that2 } else { @@ -94,12 +69,12 @@ func (this *Envelope) Equal(that interface{}) bool { if !this.Metadata.Equal(that1.Metadata) { return false } - if !this.Resource.Equal(that1.Resource) { + if !this.Body.Equal(that1.Body) { return false } return true } -func (m *Envelope) Marshal() (dAtA []byte, err error) { +func (m *Resource) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) n, err := m.MarshalTo(dAtA) @@ -109,7 +84,7 @@ func (m *Envelope) Marshal() (dAtA []byte, err error) { return dAtA[:n], nil } -func (m *Envelope) MarshalTo(dAtA []byte) (int, error) { +func (m *Resource) MarshalTo(dAtA []byte) (int, error) { var i int _ = i var l int @@ -117,18 +92,18 @@ func (m *Envelope) MarshalTo(dAtA []byte) (int, error) { if m.Metadata != nil { dAtA[i] = 0xa i++ - i = encodeVarintEnvelope(dAtA, i, uint64(m.Metadata.Size())) + i = encodeVarintResource(dAtA, i, uint64(m.Metadata.Size())) n1, err := m.Metadata.MarshalTo(dAtA[i:]) if err != nil { return 0, err } i += n1 } - if m.Resource != nil { + if m.Body != nil { dAtA[i] = 0x12 i++ - i = encodeVarintEnvelope(dAtA, i, uint64(m.Resource.Size())) - n2, err := m.Resource.MarshalTo(dAtA[i:]) + i = encodeVarintResource(dAtA, i, uint64(m.Body.Size())) + n2, err := m.Body.MarshalTo(dAtA[i:]) if err != nil { return 0, err } @@ -137,7 +112,7 @@ func (m *Envelope) MarshalTo(dAtA []byte) (int, error) { return i, nil } -func encodeVarintEnvelope(dAtA []byte, offset int, v uint64) int { +func encodeVarintResource(dAtA []byte, offset int, v uint64) int { for v >= 1<<7 { dAtA[offset] = uint8(v&0x7f | 0x80) v >>= 7 @@ -146,21 +121,21 @@ func encodeVarintEnvelope(dAtA []byte, offset int, v uint64) int { dAtA[offset] = uint8(v) return offset + 1 } -func (m *Envelope) Size() (n int) { +func (m *Resource) Size() (n int) { var l int _ = l if m.Metadata != nil { l = m.Metadata.Size() - n += 1 + l + 
sovEnvelope(uint64(l)) + n += 1 + l + sovResource(uint64(l)) } - if m.Resource != nil { - l = m.Resource.Size() - n += 1 + l + sovEnvelope(uint64(l)) + if m.Body != nil { + l = m.Body.Size() + n += 1 + l + sovResource(uint64(l)) } return n } -func sovEnvelope(x uint64) (n int) { +func sovResource(x uint64) (n int) { for { n++ x >>= 7 @@ -170,10 +145,10 @@ func sovEnvelope(x uint64) (n int) { } return n } -func sozEnvelope(x uint64) (n int) { - return sovEnvelope(uint64((x << 1) ^ uint64((int64(x) >> 63)))) +func sozResource(x uint64) (n int) { + return sovResource(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } -func (m *Envelope) Unmarshal(dAtA []byte) error { +func (m *Resource) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -181,7 +156,7 @@ func (m *Envelope) Unmarshal(dAtA []byte) error { var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { - return ErrIntOverflowEnvelope + return ErrIntOverflowResource } if iNdEx >= l { return io.ErrUnexpectedEOF @@ -196,10 +171,10 @@ func (m *Envelope) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: Envelope: wiretype end group for non-group") + return fmt.Errorf("proto: Resource: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: Envelope: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: Resource: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: @@ -209,7 +184,7 @@ func (m *Envelope) Unmarshal(dAtA []byte) error { var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { - return ErrIntOverflowEnvelope + return ErrIntOverflowResource } if iNdEx >= l { return io.ErrUnexpectedEOF @@ -222,7 +197,7 @@ func (m *Envelope) Unmarshal(dAtA []byte) error { } } if msglen < 0 { - return ErrInvalidLengthEnvelope + return ErrInvalidLengthResource } postIndex := iNdEx + msglen if postIndex > l { @@ -237,12 +212,12 
@@ func (m *Envelope) Unmarshal(dAtA []byte) error { iNdEx = postIndex case 2: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Resource", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Body", wireType) } var msglen int for shift := uint(0); ; shift += 7 { if shift >= 64 { - return ErrIntOverflowEnvelope + return ErrIntOverflowResource } if iNdEx >= l { return io.ErrUnexpectedEOF @@ -255,27 +230,27 @@ func (m *Envelope) Unmarshal(dAtA []byte) error { } } if msglen < 0 { - return ErrInvalidLengthEnvelope + return ErrInvalidLengthResource } postIndex := iNdEx + msglen if postIndex > l { return io.ErrUnexpectedEOF } - if m.Resource == nil { - m.Resource = &google_protobuf.Any{} + if m.Body == nil { + m.Body = &google_protobuf.Any{} } - if err := m.Resource.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + if err := m.Body.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { return err } iNdEx = postIndex default: iNdEx = preIndex - skippy, err := skipEnvelope(dAtA[iNdEx:]) + skippy, err := skipResource(dAtA[iNdEx:]) if err != nil { return err } if skippy < 0 { - return ErrInvalidLengthEnvelope + return ErrInvalidLengthResource } if (iNdEx + skippy) > l { return io.ErrUnexpectedEOF @@ -289,14 +264,14 @@ func (m *Envelope) Unmarshal(dAtA []byte) error { } return nil } -func skipEnvelope(dAtA []byte) (n int, err error) { +func skipResource(dAtA []byte) (n int, err error) { l := len(dAtA) iNdEx := 0 for iNdEx < l { var wire uint64 for shift := uint(0); ; shift += 7 { if shift >= 64 { - return 0, ErrIntOverflowEnvelope + return 0, ErrIntOverflowResource } if iNdEx >= l { return 0, io.ErrUnexpectedEOF @@ -313,7 +288,7 @@ func skipEnvelope(dAtA []byte) (n int, err error) { case 0: for shift := uint(0); ; shift += 7 { if shift >= 64 { - return 0, ErrIntOverflowEnvelope + return 0, ErrIntOverflowResource } if iNdEx >= l { return 0, io.ErrUnexpectedEOF @@ -331,7 +306,7 @@ func skipEnvelope(dAtA []byte) (n int, err error) { var length 
int for shift := uint(0); ; shift += 7 { if shift >= 64 { - return 0, ErrIntOverflowEnvelope + return 0, ErrIntOverflowResource } if iNdEx >= l { return 0, io.ErrUnexpectedEOF @@ -345,7 +320,7 @@ func skipEnvelope(dAtA []byte) (n int, err error) { } iNdEx += length if length < 0 { - return 0, ErrInvalidLengthEnvelope + return 0, ErrInvalidLengthResource } return iNdEx, nil case 3: @@ -354,7 +329,7 @@ func skipEnvelope(dAtA []byte) (n int, err error) { var start int = iNdEx for shift := uint(0); ; shift += 7 { if shift >= 64 { - return 0, ErrIntOverflowEnvelope + return 0, ErrIntOverflowResource } if iNdEx >= l { return 0, io.ErrUnexpectedEOF @@ -370,7 +345,7 @@ func skipEnvelope(dAtA []byte) (n int, err error) { if innerWireType == 4 { break } - next, err := skipEnvelope(dAtA[start:]) + next, err := skipResource(dAtA[start:]) if err != nil { return 0, err } @@ -390,26 +365,25 @@ func skipEnvelope(dAtA []byte) (n int, err error) { } var ( - ErrInvalidLengthEnvelope = fmt.Errorf("proto: negative length found during unmarshaling") - ErrIntOverflowEnvelope = fmt.Errorf("proto: integer overflow") + ErrInvalidLengthResource = fmt.Errorf("proto: negative length found during unmarshaling") + ErrIntOverflowResource = fmt.Errorf("proto: integer overflow") ) -func init() { proto.RegisterFile("mcp/v1alpha1/envelope.proto", fileDescriptorEnvelope) } +func init() { proto.RegisterFile("mcp/v1alpha1/resource.proto", fileDescriptorResource) } -var fileDescriptorEnvelope = []byte{ - // 211 bytes of a gzipped FileDescriptorProto +var fileDescriptorResource = []byte{ + // 207 bytes of a gzipped FileDescriptorProto 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0xce, 0x4d, 0x2e, 0xd0, - 0x2f, 0x33, 0x4c, 0xcc, 0x29, 0xc8, 0x48, 0x34, 0xd4, 0x4f, 0xcd, 0x2b, 0x4b, 0xcd, 0xc9, 0x2f, - 0x48, 0xd5, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x12, 0xca, 0x2c, 0x2e, 0xc9, 0xcc, 0xd7, 0xcb, + 0x2f, 0x33, 0x4c, 0xcc, 0x29, 0xc8, 0x48, 0x34, 0xd4, 0x2f, 0x4a, 0x2d, 0xce, 0x2f, 
0x2d, 0x4a, + 0x4e, 0xd5, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x12, 0xca, 0x2c, 0x2e, 0xc9, 0xcc, 0xd7, 0xcb, 0x4d, 0x2e, 0xd0, 0x83, 0x29, 0x91, 0x92, 0x4c, 0xcf, 0xcf, 0x4f, 0xcf, 0x49, 0xd5, 0x07, 0xab, 0x48, 0x2a, 0x4d, 0xd3, 0x4f, 0xcc, 0xab, 0x84, 0x28, 0x97, 0x12, 0x49, 0xcf, 0x4f, 0xcf, 0x07, 0x33, 0xf5, 0x41, 0x2c, 0xa8, 0x28, 0xaa, 0x0d, 0xb9, 0xa9, 0x25, 0x89, 0x29, 0x89, 0x25, 0x89, - 0x10, 0x49, 0xa5, 0x32, 0x2e, 0x0e, 0x57, 0xa8, 0x9d, 0x42, 0x16, 0x5c, 0x1c, 0x30, 0x59, 0x09, + 0x10, 0x49, 0xa5, 0x3c, 0x2e, 0x8e, 0x20, 0xa8, 0x9d, 0x42, 0x16, 0x5c, 0x1c, 0x30, 0x59, 0x09, 0x46, 0x05, 0x46, 0x0d, 0x6e, 0x23, 0x19, 0x3d, 0x4c, 0x07, 0xe8, 0xf9, 0x42, 0xd5, 0x04, 0xc1, - 0x55, 0x0b, 0x19, 0x70, 0x71, 0x14, 0xa5, 0x16, 0xe7, 0x97, 0x16, 0x25, 0xa7, 0x4a, 0x30, 0x81, - 0x75, 0x8a, 0xe8, 0x41, 0x9c, 0xa9, 0x07, 0x73, 0xa6, 0x9e, 0x63, 0x5e, 0x65, 0x10, 0x5c, 0x95, - 0x93, 0xf6, 0x8a, 0x47, 0x72, 0x8c, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78, 0x24, 0xc7, 0xf8, 0xe0, - 0x91, 0x1c, 0x63, 0x94, 0x24, 0xc4, 0xaa, 0xcc, 0x7c, 0xfd, 0xc4, 0x82, 0x4c, 0x7d, 0x64, 0x37, - 0x27, 0xb1, 0x81, 0x0d, 0x31, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0x02, 0xf2, 0x13, 0xb4, 0x2c, - 0x01, 0x00, 0x00, + 0x55, 0x0b, 0x69, 0x70, 0xb1, 0x24, 0xe5, 0xa7, 0x54, 0x4a, 0x30, 0x81, 0x75, 0x89, 0xe8, 0x41, + 0x9c, 0xa8, 0x07, 0x73, 0xa2, 0x9e, 0x63, 0x5e, 0x65, 0x10, 0x58, 0x85, 0x93, 0xf6, 0x8a, 0x47, + 0x72, 0x8c, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78, 0x24, 0xc7, 0xf8, 0xe0, 0x91, 0x1c, 0x63, 0x94, + 0x24, 0xc4, 0x8a, 0xcc, 0x7c, 0xfd, 0xc4, 0x82, 0x4c, 0x7d, 0x64, 0xb7, 0x26, 0xb1, 0x81, 0x0d, + 0x30, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0xd3, 0xae, 0x63, 0x31, 0x24, 0x01, 0x00, 0x00, } diff --git a/vendor/istio.io/api/mcp/v1alpha1/envelope.proto b/vendor/istio.io/api/mcp/v1alpha1/resource.proto similarity index 79% rename from vendor/istio.io/api/mcp/v1alpha1/envelope.proto rename to vendor/istio.io/api/mcp/v1alpha1/resource.proto index 28ba4ca8efcd..0698bb32bf6b 100644 --- 
a/vendor/istio.io/api/mcp/v1alpha1/envelope.proto +++ b/vendor/istio.io/api/mcp/v1alpha1/resource.proto @@ -24,12 +24,12 @@ import "mcp/v1alpha1/metadata.proto"; option go_package="istio.io/api/mcp/v1alpha1"; option (gogoproto.equal_all) = true; -// Envelope for a configuration resource as transferred via the Mesh Configuration Protocol. -// Each envelope is made up of common metadata, and a type-specific resource payload. -message Envelope { +// Resource as transferred via the Mesh Configuration Protocol. Each +// resource is made up of common metadata, and a type-specific resource payload. +message Resource { // Common metadata describing the resource. istio.mcp.v1alpha1.Metadata metadata = 1; - // The resource itself. - google.protobuf.Any resource = 2; + // The primary payload for the resource. + google.protobuf.Any body = 2; } diff --git a/vendor/istio.io/api/policy/v1beta1/cfg.pb.go b/vendor/istio.io/api/policy/v1beta1/cfg.pb.go index bc942cc41ff7..c20dc5f6604a 100644 --- a/vendor/istio.io/api/policy/v1beta1/cfg.pb.go +++ b/vendor/istio.io/api/policy/v1beta1/cfg.pb.go @@ -346,12 +346,17 @@ func (m *Rule) GetSampling() *Sampling { // that may reference action outputs by name. For example, if an action `x` produces an output // with a field `f`, then the header value expressions may use attribute `x.output.f` to reference // the field value: +// // ```yaml // request_header_operations: // - name: x-istio-header // values: // - x.output.f // ``` +// +// If the header value expression evaluates to an empty string, and the operation is to either replace +// or append a header, then the operation is not applied. This permits conditional behavior on behalf of the +// adapter to optionally modify the headers. type Rule_HeaderOperationTemplate struct { // Required. Header name literal value. 
Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"` diff --git a/vendor/istio.io/api/policy/v1beta1/cfg.proto b/vendor/istio.io/api/policy/v1beta1/cfg.proto index 795b3be2a1da..aba475269228 100644 --- a/vendor/istio.io/api/policy/v1beta1/cfg.proto +++ b/vendor/istio.io/api/policy/v1beta1/cfg.proto @@ -131,12 +131,17 @@ message Rule { // that may reference action outputs by name. For example, if an action `x` produces an output // with a field `f`, then the header value expressions may use attribute `x.output.f` to reference // the field value: + // // ```yaml // request_header_operations: // - name: x-istio-header // values: // - x.output.f // ``` + // + // If the header value expression evaluates to an empty string, and the operation is to either replace + // or append a header, then the operation is not applied. This permits conditional behavior on behalf of the + // adapter to optionally modify the headers. message HeaderOperationTemplate { // Required. Header name literal value. diff --git a/vendor/istio.io/api/policy/v1beta1/istio.policy.v1beta1.pb.html b/vendor/istio.io/api/policy/v1beta1/istio.policy.v1beta1.pb.html index 0ca8ab344ce7..39c716a87e90 100644 --- a/vendor/istio.io/api/policy/v1beta1/istio.policy.v1beta1.pb.html +++ b/vendor/istio.io/api/policy/v1beta1/istio.policy.v1beta1.pb.html @@ -809,6 +809,10 @@

Rule.HeaderOperationTemplate

- x.output.f +

If the header value expression evaluates to an empty string, and the operation is to either replace +or append a header, then the operation is not applied. This permits conditional behavior on behalf of the +adapter to optionally modify the headers.

+
FieldTypeDescription
idstring +

An opaque identifier for the MCP node.

+ +
annotationsmap<string, string> +

Opaque annotations extending the node identifier.

diff --git a/vendor/istio.io/api/proto.lock b/vendor/istio.io/api/proto.lock index 95cf510647fa..36b370e7cdf7 100644 --- a/vendor/istio.io/api/proto.lock +++ b/vendor/istio.io/api/proto.lock @@ -417,44 +417,27 @@ ] } }, - { - "protopath": "mcp:/:v1alpha1:/:envelope.proto", - "def": { - "messages": [ - { - "name": "Envelope", - "fields": [ - { - "id": 1, - "name": "metadata", - "type": "istio.mcp.v1alpha1.Metadata" - }, - { - "id": 2, - "name": "resource", - "type": "google.protobuf.Any" - } - ] - } - ] - } - }, { "protopath": "mcp:/:v1alpha1:/:mcp.proto", "def": { "messages": [ { - "name": "Client", + "name": "SinkNode", "fields": [ { "id": 1, "name": "id", "type": "string" - }, + } + ], + "maps": [ { - "id": 2, - "name": "metadata", - "type": "google.protobuf.Struct" + "key_type": "string", + "field": { + "id": 2, + "name": "annotations", + "type": "string" + } } ] }, @@ -468,8 +451,8 @@ }, { "id": 2, - "name": "client", - "type": "Client" + "name": "sink_node", + "type": "SinkNode" }, { "id": 3, @@ -498,8 +481,8 @@ }, { "id": 2, - "name": "envelopes", - "type": "Envelope", + "name": "resources", + "type": "Resource", "is_repeated": true }, { @@ -519,8 +502,8 @@ "fields": [ { "id": 1, - "name": "client", - "type": "Client" + "name": "sink_node", + "type": "SinkNode" }, { "id": 2, @@ -559,8 +542,8 @@ }, { "id": 2, - "name": "envelopes", - "type": "Envelope", + "name": "resources", + "type": "Resource", "is_repeated": true }, { @@ -575,6 +558,73 @@ "type": "string" } ] + }, + { + "name": "RequestResources", + "fields": [ + { + "id": 1, + "name": "sink_node", + "type": "SinkNode" + }, + { + "id": 2, + "name": "collection", + "type": "string" + }, + { + "id": 4, + "name": "response_nonce", + "type": "string" + }, + { + "id": 5, + "name": "error_detail", + "type": "google.rpc.Status" + } + ], + "maps": [ + { + "key_type": "string", + "field": { + "id": 3, + "name": "initial_resource_versions", + "type": "string" + } + } + ] + }, + { + "name": "Resources", + "fields": [ 
+ { + "id": 1, + "name": "system_version_info", + "type": "string" + }, + { + "id": 2, + "name": "collection", + "type": "string" + }, + { + "id": 3, + "name": "resources", + "type": "Resource", + "is_repeated": true + }, + { + "id": 4, + "name": "removed_resources", + "type": "string", + "is_repeated": true + }, + { + "id": 5, + "name": "nonce", + "type": "string" + } + ] } ], "services": [ @@ -596,6 +646,30 @@ "out_streamed": true } ] + }, + { + "name": "ResourceSource", + "rpcs": [ + { + "name": "EstablishResourceStream", + "in_type": "RequestResources", + "out_type": "Resources", + "in_streamed": true, + "out_streamed": true + } + ] + }, + { + "name": "ResourceSink", + "rpcs": [ + { + "name": "EstablishResourceStream", + "in_type": "Resources", + "out_type": "RequestResources", + "in_streamed": true, + "out_streamed": true + } + ] } ] } @@ -622,6 +696,46 @@ "name": "version", "type": "string" } + ], + "maps": [ + { + "key_type": "string", + "field": { + "id": 4, + "name": "labels", + "type": "string" + } + }, + { + "key_type": "string", + "field": { + "id": 5, + "name": "annotations", + "type": "string" + } + } + ] + } + ] + } + }, + { + "protopath": "mcp:/:v1alpha1:/:resource.proto", + "def": { + "messages": [ + { + "name": "Resource", + "fields": [ + { + "id": 1, + "name": "metadata", + "type": "istio.mcp.v1alpha1.Metadata" + }, + { + "id": 2, + "name": "body", + "type": "google.protobuf.Any" + } ] } ] @@ -4717,6 +4831,35 @@ } ], "messages": [ + { + "name": "WorkloadSelector", + "maps": [ + { + "key_type": "string", + "field": { + "id": 1, + "name": "labels", + "type": "string" + } + } + ] + }, + { + "name": "AuthorizationPolicy", + "fields": [ + { + "id": 1, + "name": "workload_selector", + "type": "WorkloadSelector" + }, + { + "id": 2, + "name": "allow", + "type": "ServiceRoleBinding", + "is_repeated": true + } + ] + }, { "name": "ServiceRole", "fields": [ @@ -4737,18 +4880,54 @@ "type": "string", "is_repeated": true }, + { + "id": 5, + "name": "hosts", + 
"type": "string", + "is_repeated": true + }, + { + "id": 6, + "name": "not_hosts", + "type": "string", + "is_repeated": true + }, { "id": 2, "name": "paths", "type": "string", "is_repeated": true }, + { + "id": 7, + "name": "not_paths", + "type": "string", + "is_repeated": true + }, { "id": 3, "name": "methods", "type": "string", "is_repeated": true }, + { + "id": 8, + "name": "not_methods", + "type": "string", + "is_repeated": true + }, + { + "id": 9, + "name": "ports", + "type": "int32", + "is_repeated": true + }, + { + "id": 10, + "name": "not_ports", + "type": "int32", + "is_repeated": true + }, { "id": 4, "name": "constraints", @@ -4804,10 +4983,58 @@ "name": "user", "type": "string" }, + { + "id": 4, + "name": "principals", + "type": "string", + "is_repeated": true + }, + { + "id": 5, + "name": "not_principals", + "type": "string", + "is_repeated": true + }, { "id": 2, "name": "group", "type": "string" + }, + { + "id": 6, + "name": "groups", + "type": "string", + "is_repeated": true + }, + { + "id": 7, + "name": "not_groups", + "type": "string", + "is_repeated": true + }, + { + "id": 8, + "name": "namespaces", + "type": "string", + "is_repeated": true + }, + { + "id": 9, + "name": "not_namespaces", + "type": "string", + "is_repeated": true + }, + { + "id": 10, + "name": "ips", + "type": "string", + "is_repeated": true + }, + { + "id": 11, + "name": "not_ips", + "type": "string", + "is_repeated": true } ], "maps": [ @@ -4870,6 +5097,12 @@ "type": "string", "is_repeated": true }, + { + "id": 3, + "name": "workload_selectors", + "type": "WorkloadSelector", + "is_repeated": true + }, { "id": 2, "name": "namespaces", diff --git a/vendor/istio.io/api/prototool.yaml b/vendor/istio.io/api/prototool.yaml index 57e67aac3565..ebf22bd55bb6 100644 --- a/vendor/istio.io/api/prototool.yaml +++ b/vendor/istio.io/api/prototool.yaml @@ -13,6 +13,9 @@ lint: - id: ENUM_FIELD_NAMES_UPPER_SNAKE_CASE files: - networking/v1alpha3/gateway.proto + - id: REQUEST_RESPONSE_TYPES_UNIQUE + 
files: + - mcp/v1alpha1/mcp.proto # Linter rules. rules: diff --git a/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/mcp_pb2.py b/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/mcp_pb2.py index f49155f7a6b4..c380d4d828cb 100644 --- a/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/mcp_pb2.py +++ b/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/mcp_pb2.py @@ -13,41 +13,40 @@ _sym_db = _symbol_database.Default() -from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2 from google.rpc import status_pb2 as google_dot_rpc_dot_status__pb2 from gogoproto import gogo_pb2 as gogoproto_dot_gogo__pb2 -from mcp.v1alpha1 import envelope_pb2 as mcp_dot_v1alpha1_dot_envelope__pb2 +from mcp.v1alpha1 import resource_pb2 as mcp_dot_v1alpha1_dot_resource__pb2 DESCRIPTOR = _descriptor.FileDescriptor( name='mcp/v1alpha1/mcp.proto', package='istio.mcp.v1alpha1', syntax='proto3', - serialized_pb=_b('\n\x16mcp/v1alpha1/mcp.proto\x12\x12istio.mcp.v1alpha1\x1a\x1cgoogle/protobuf/struct.proto\x1a\x17google/rpc/status.proto\x1a\x14gogoproto/gogo.proto\x1a\x1bmcp/v1alpha1/envelope.proto\"?\n\x06\x43lient\x12\n\n\x02id\x18\x01 \x01(\t\x12)\n\x08metadata\x18\x02 \x01(\x0b\x32\x17.google.protobuf.Struct\"\xa9\x01\n\x11MeshConfigRequest\x12\x14\n\x0cversion_info\x18\x01 \x01(\t\x12*\n\x06\x63lient\x18\x02 \x01(\x0b\x32\x1a.istio.mcp.v1alpha1.Client\x12\x10\n\x08type_url\x18\x03 \x01(\t\x12\x16\n\x0eresponse_nonce\x18\x04 \x01(\t\x12(\n\x0c\x65rror_detail\x18\x05 \x01(\x0b\x32\x12.google.rpc.Status\"\x82\x01\n\x12MeshConfigResponse\x12\x14\n\x0cversion_info\x18\x01 \x01(\t\x12\x35\n\tenvelopes\x18\x02 \x03(\x0b\x32\x1c.istio.mcp.v1alpha1.EnvelopeB\x04\xc8\xde\x1f\x00\x12\x10\n\x08type_url\x18\x03 \x01(\t\x12\r\n\x05nonce\x18\x04 \x01(\t\"\xd0\x02\n\x1cIncrementalMeshConfigRequest\x12*\n\x06\x63lient\x18\x01 \x01(\x0b\x32\x1a.istio.mcp.v1alpha1.Client\x12\x10\n\x08type_url\x18\x02 \x01(\t\x12p\n\x19initial_resource_versions\x18\x03 
\x03(\x0b\x32M.istio.mcp.v1alpha1.IncrementalMeshConfigRequest.InitialResourceVersionsEntry\x12\x16\n\x0eresponse_nonce\x18\x04 \x01(\t\x12(\n\x0c\x65rror_detail\x18\x05 \x01(\x0b\x32\x12.google.rpc.Status\x1a>\n\x1cInitialResourceVersionsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"\x9d\x01\n\x1dIncrementalMeshConfigResponse\x12\x1b\n\x13system_version_info\x18\x01 \x01(\t\x12\x35\n\tenvelopes\x18\x02 \x03(\x0b\x32\x1c.istio.mcp.v1alpha1.EnvelopeB\x04\xc8\xde\x1f\x00\x12\x19\n\x11removed_resources\x18\x03 \x03(\t\x12\r\n\x05nonce\x18\x04 \x01(\t2\x9d\x02\n\x1b\x41ggregatedMeshConfigService\x12p\n\x19StreamAggregatedResources\x12%.istio.mcp.v1alpha1.MeshConfigRequest\x1a&.istio.mcp.v1alpha1.MeshConfigResponse\"\x00(\x01\x30\x01\x12\x8b\x01\n\x1eIncrementalAggregatedResources\x12\x30.istio.mcp.v1alpha1.IncrementalMeshConfigRequest\x1a\x31.istio.mcp.v1alpha1.IncrementalMeshConfigResponse\"\x00(\x01\x30\x01\x42\x1fZ\x19istio.io/api/mcp/v1alpha1\xa8\xe2\x1e\x01\x62\x06proto3') + serialized_pb=_b('\n\x16mcp/v1alpha1/mcp.proto\x12\x12istio.mcp.v1alpha1\x1a\x17google/rpc/status.proto\x1a\x14gogoproto/gogo.proto\x1a\x1bmcp/v1alpha1/resource.proto\"\x8e\x01\n\x08SinkNode\x12\n\n\x02id\x18\x01 \x01(\t\x12\x42\n\x0b\x61nnotations\x18\x02 \x03(\x0b\x32-.istio.mcp.v1alpha1.SinkNode.AnnotationsEntry\x1a\x32\n\x10\x41nnotationsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"\xae\x01\n\x11MeshConfigRequest\x12\x14\n\x0cversion_info\x18\x01 \x01(\t\x12/\n\tsink_node\x18\x02 \x01(\x0b\x32\x1c.istio.mcp.v1alpha1.SinkNode\x12\x10\n\x08type_url\x18\x03 \x01(\t\x12\x16\n\x0eresponse_nonce\x18\x04 \x01(\t\x12(\n\x0c\x65rror_detail\x18\x05 \x01(\x0b\x32\x12.google.rpc.Status\"\x82\x01\n\x12MeshConfigResponse\x12\x14\n\x0cversion_info\x18\x01 \x01(\t\x12\x35\n\tresources\x18\x02 \x03(\x0b\x32\x1c.istio.mcp.v1alpha1.ResourceB\x04\xc8\xde\x1f\x00\x12\x10\n\x08type_url\x18\x03 \x01(\t\x12\r\n\x05nonce\x18\x04 
\x01(\t\"\xd5\x02\n\x1cIncrementalMeshConfigRequest\x12/\n\tsink_node\x18\x01 \x01(\x0b\x32\x1c.istio.mcp.v1alpha1.SinkNode\x12\x10\n\x08type_url\x18\x02 \x01(\t\x12p\n\x19initial_resource_versions\x18\x03 \x03(\x0b\x32M.istio.mcp.v1alpha1.IncrementalMeshConfigRequest.InitialResourceVersionsEntry\x12\x16\n\x0eresponse_nonce\x18\x04 \x01(\t\x12(\n\x0c\x65rror_detail\x18\x05 \x01(\x0b\x32\x12.google.rpc.Status\x1a>\n\x1cInitialResourceVersionsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"\x9d\x01\n\x1dIncrementalMeshConfigResponse\x12\x1b\n\x13system_version_info\x18\x01 \x01(\t\x12\x35\n\tresources\x18\x02 \x03(\x0b\x32\x1c.istio.mcp.v1alpha1.ResourceB\x04\xc8\xde\x1f\x00\x12\x19\n\x11removed_resources\x18\x03 \x03(\t\x12\r\n\x05nonce\x18\x04 \x01(\t\"\xbf\x02\n\x10RequestResources\x12/\n\tsink_node\x18\x01 \x01(\x0b\x32\x1c.istio.mcp.v1alpha1.SinkNode\x12\x12\n\ncollection\x18\x02 \x01(\t\x12\x64\n\x19initial_resource_versions\x18\x03 \x03(\x0b\x32\x41.istio.mcp.v1alpha1.RequestResources.InitialResourceVersionsEntry\x12\x16\n\x0eresponse_nonce\x18\x04 \x01(\t\x12(\n\x0c\x65rror_detail\x18\x05 \x01(\x0b\x32\x12.google.rpc.Status\x1a>\n\x1cInitialResourceVersionsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\"\x9d\x01\n\tResources\x12\x1b\n\x13system_version_info\x18\x01 \x01(\t\x12\x12\n\ncollection\x18\x02 \x01(\t\x12\x35\n\tresources\x18\x03 \x03(\x0b\x32\x1c.istio.mcp.v1alpha1.ResourceB\x04\xc8\xde\x1f\x00\x12\x19\n\x11removed_resources\x18\x04 \x03(\t\x12\r\n\x05nonce\x18\x05 
\x01(\t2\x9d\x02\n\x1b\x41ggregatedMeshConfigService\x12p\n\x19StreamAggregatedResources\x12%.istio.mcp.v1alpha1.MeshConfigRequest\x1a&.istio.mcp.v1alpha1.MeshConfigResponse\"\x00(\x01\x30\x01\x12\x8b\x01\n\x1eIncrementalAggregatedResources\x12\x30.istio.mcp.v1alpha1.IncrementalMeshConfigRequest\x1a\x31.istio.mcp.v1alpha1.IncrementalMeshConfigResponse\"\x00(\x01\x30\x01\x32v\n\x0eResourceSource\x12\x64\n\x17\x45stablishResourceStream\x12$.istio.mcp.v1alpha1.RequestResources\x1a\x1d.istio.mcp.v1alpha1.Resources\"\x00(\x01\x30\x01\x32t\n\x0cResourceSink\x12\x64\n\x17\x45stablishResourceStream\x12\x1d.istio.mcp.v1alpha1.Resources\x1a$.istio.mcp.v1alpha1.RequestResources\"\x00(\x01\x30\x01\x42\x1fZ\x19istio.io/api/mcp/v1alpha1\xa8\xe2\x1e\x01\x62\x06proto3') , - dependencies=[google_dot_protobuf_dot_struct__pb2.DESCRIPTOR,google_dot_rpc_dot_status__pb2.DESCRIPTOR,gogoproto_dot_gogo__pb2.DESCRIPTOR,mcp_dot_v1alpha1_dot_envelope__pb2.DESCRIPTOR,]) + dependencies=[google_dot_rpc_dot_status__pb2.DESCRIPTOR,gogoproto_dot_gogo__pb2.DESCRIPTOR,mcp_dot_v1alpha1_dot_resource__pb2.DESCRIPTOR,]) -_CLIENT = _descriptor.Descriptor( - name='Client', - full_name='istio.mcp.v1alpha1.Client', +_SINKNODE_ANNOTATIONSENTRY = _descriptor.Descriptor( + name='AnnotationsEntry', + full_name='istio.mcp.v1alpha1.SinkNode.AnnotationsEntry', filename=None, file=DESCRIPTOR, containing_type=None, fields=[ _descriptor.FieldDescriptor( - name='id', full_name='istio.mcp.v1alpha1.Client.id', index=0, + name='key', full_name='istio.mcp.v1alpha1.SinkNode.AnnotationsEntry.key', index=0, number=1, type=9, cpp_type=9, label=1, has_default_value=False, default_value=_b("").decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, options=None, file=DESCRIPTOR), _descriptor.FieldDescriptor( - name='metadata', full_name='istio.mcp.v1alpha1.Client.metadata', index=1, - number=2, type=11, cpp_type=10, label=1, - has_default_value=False, 
default_value=None, + name='value', full_name='istio.mcp.v1alpha1.SinkNode.AnnotationsEntry.value', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, options=None, file=DESCRIPTOR), @@ -57,14 +56,51 @@ nested_types=[], enum_types=[ ], + options=_descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')), + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=215, + serialized_end=265, +) + +_SINKNODE = _descriptor.Descriptor( + name='SinkNode', + full_name='istio.mcp.v1alpha1.SinkNode', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='id', full_name='istio.mcp.v1alpha1.SinkNode.id', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='annotations', full_name='istio.mcp.v1alpha1.SinkNode.annotations', index=1, + number=2, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[_SINKNODE_ANNOTATIONSENTRY, ], + enum_types=[ + ], options=None, is_extendable=False, syntax='proto3', extension_ranges=[], oneofs=[ ], - serialized_start=152, - serialized_end=215, + serialized_start=123, + serialized_end=265, ) @@ -83,7 +119,7 @@ is_extension=False, extension_scope=None, options=None, file=DESCRIPTOR), _descriptor.FieldDescriptor( - name='client', full_name='istio.mcp.v1alpha1.MeshConfigRequest.client', index=1, + name='sink_node', 
full_name='istio.mcp.v1alpha1.MeshConfigRequest.sink_node', index=1, number=2, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, @@ -122,8 +158,8 @@ extension_ranges=[], oneofs=[ ], - serialized_start=218, - serialized_end=387, + serialized_start=268, + serialized_end=442, ) @@ -142,7 +178,7 @@ is_extension=False, extension_scope=None, options=None, file=DESCRIPTOR), _descriptor.FieldDescriptor( - name='envelopes', full_name='istio.mcp.v1alpha1.MeshConfigResponse.envelopes', index=1, + name='resources', full_name='istio.mcp.v1alpha1.MeshConfigResponse.resources', index=1, number=2, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, @@ -174,8 +210,8 @@ extension_ranges=[], oneofs=[ ], - serialized_start=390, - serialized_end=520, + serialized_start=445, + serialized_end=575, ) @@ -212,8 +248,8 @@ extension_ranges=[], oneofs=[ ], - serialized_start=797, - serialized_end=859, + serialized_start=857, + serialized_end=919, ) _INCREMENTALMESHCONFIGREQUEST = _descriptor.Descriptor( @@ -224,7 +260,7 @@ containing_type=None, fields=[ _descriptor.FieldDescriptor( - name='client', full_name='istio.mcp.v1alpha1.IncrementalMeshConfigRequest.client', index=0, + name='sink_node', full_name='istio.mcp.v1alpha1.IncrementalMeshConfigRequest.sink_node', index=0, number=1, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, @@ -270,8 +306,8 @@ extension_ranges=[], oneofs=[ ], - serialized_start=523, - serialized_end=859, + serialized_start=578, + serialized_end=919, ) @@ -290,7 +326,7 @@ is_extension=False, extension_scope=None, options=None, file=DESCRIPTOR), _descriptor.FieldDescriptor( - name='envelopes', full_name='istio.mcp.v1alpha1.IncrementalMeshConfigResponse.envelopes', index=1, + name='resources', 
full_name='istio.mcp.v1alpha1.IncrementalMeshConfigResponse.resources', index=1, number=2, type=11, cpp_type=10, label=3, has_default_value=False, default_value=[], message_type=None, enum_type=None, containing_type=None, @@ -322,32 +358,203 @@ extension_ranges=[], oneofs=[ ], - serialized_start=862, - serialized_end=1019, + serialized_start=922, + serialized_end=1079, +) + + +_REQUESTRESOURCES_INITIALRESOURCEVERSIONSENTRY = _descriptor.Descriptor( + name='InitialResourceVersionsEntry', + full_name='istio.mcp.v1alpha1.RequestResources.InitialResourceVersionsEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='istio.mcp.v1alpha1.RequestResources.InitialResourceVersionsEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='value', full_name='istio.mcp.v1alpha1.RequestResources.InitialResourceVersionsEntry.value', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=_descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')), + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=857, + serialized_end=919, +) + +_REQUESTRESOURCES = _descriptor.Descriptor( + name='RequestResources', + full_name='istio.mcp.v1alpha1.RequestResources', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='sink_node', full_name='istio.mcp.v1alpha1.RequestResources.sink_node', index=0, + 
number=1, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='collection', full_name='istio.mcp.v1alpha1.RequestResources.collection', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='initial_resource_versions', full_name='istio.mcp.v1alpha1.RequestResources.initial_resource_versions', index=2, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='response_nonce', full_name='istio.mcp.v1alpha1.RequestResources.response_nonce', index=3, + number=4, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='error_detail', full_name='istio.mcp.v1alpha1.RequestResources.error_detail', index=4, + number=5, type=11, cpp_type=10, label=1, + has_default_value=False, default_value=None, + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[_REQUESTRESOURCES_INITIALRESOURCEVERSIONSENTRY, ], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1082, + serialized_end=1401, +) + + +_RESOURCES = _descriptor.Descriptor( + 
name='Resources', + full_name='istio.mcp.v1alpha1.Resources', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='system_version_info', full_name='istio.mcp.v1alpha1.Resources.system_version_info', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='collection', full_name='istio.mcp.v1alpha1.Resources.collection', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='resources', full_name='istio.mcp.v1alpha1.Resources.resources', index=2, + number=3, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\310\336\037\000')), file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='removed_resources', full_name='istio.mcp.v1alpha1.Resources.removed_resources', index=3, + number=4, type=9, cpp_type=9, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='nonce', full_name='istio.mcp.v1alpha1.Resources.nonce', index=4, + number=5, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + 
], + nested_types=[], + enum_types=[ + ], + options=None, + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=1404, + serialized_end=1561, ) -_CLIENT.fields_by_name['metadata'].message_type = google_dot_protobuf_dot_struct__pb2._STRUCT -_MESHCONFIGREQUEST.fields_by_name['client'].message_type = _CLIENT +_SINKNODE_ANNOTATIONSENTRY.containing_type = _SINKNODE +_SINKNODE.fields_by_name['annotations'].message_type = _SINKNODE_ANNOTATIONSENTRY +_MESHCONFIGREQUEST.fields_by_name['sink_node'].message_type = _SINKNODE _MESHCONFIGREQUEST.fields_by_name['error_detail'].message_type = google_dot_rpc_dot_status__pb2._STATUS -_MESHCONFIGRESPONSE.fields_by_name['envelopes'].message_type = mcp_dot_v1alpha1_dot_envelope__pb2._ENVELOPE +_MESHCONFIGRESPONSE.fields_by_name['resources'].message_type = mcp_dot_v1alpha1_dot_resource__pb2._RESOURCE _INCREMENTALMESHCONFIGREQUEST_INITIALRESOURCEVERSIONSENTRY.containing_type = _INCREMENTALMESHCONFIGREQUEST -_INCREMENTALMESHCONFIGREQUEST.fields_by_name['client'].message_type = _CLIENT +_INCREMENTALMESHCONFIGREQUEST.fields_by_name['sink_node'].message_type = _SINKNODE _INCREMENTALMESHCONFIGREQUEST.fields_by_name['initial_resource_versions'].message_type = _INCREMENTALMESHCONFIGREQUEST_INITIALRESOURCEVERSIONSENTRY _INCREMENTALMESHCONFIGREQUEST.fields_by_name['error_detail'].message_type = google_dot_rpc_dot_status__pb2._STATUS -_INCREMENTALMESHCONFIGRESPONSE.fields_by_name['envelopes'].message_type = mcp_dot_v1alpha1_dot_envelope__pb2._ENVELOPE -DESCRIPTOR.message_types_by_name['Client'] = _CLIENT +_INCREMENTALMESHCONFIGRESPONSE.fields_by_name['resources'].message_type = mcp_dot_v1alpha1_dot_resource__pb2._RESOURCE +_REQUESTRESOURCES_INITIALRESOURCEVERSIONSENTRY.containing_type = _REQUESTRESOURCES +_REQUESTRESOURCES.fields_by_name['sink_node'].message_type = _SINKNODE +_REQUESTRESOURCES.fields_by_name['initial_resource_versions'].message_type = _REQUESTRESOURCES_INITIALRESOURCEVERSIONSENTRY 
+_REQUESTRESOURCES.fields_by_name['error_detail'].message_type = google_dot_rpc_dot_status__pb2._STATUS +_RESOURCES.fields_by_name['resources'].message_type = mcp_dot_v1alpha1_dot_resource__pb2._RESOURCE +DESCRIPTOR.message_types_by_name['SinkNode'] = _SINKNODE DESCRIPTOR.message_types_by_name['MeshConfigRequest'] = _MESHCONFIGREQUEST DESCRIPTOR.message_types_by_name['MeshConfigResponse'] = _MESHCONFIGRESPONSE DESCRIPTOR.message_types_by_name['IncrementalMeshConfigRequest'] = _INCREMENTALMESHCONFIGREQUEST DESCRIPTOR.message_types_by_name['IncrementalMeshConfigResponse'] = _INCREMENTALMESHCONFIGRESPONSE +DESCRIPTOR.message_types_by_name['RequestResources'] = _REQUESTRESOURCES +DESCRIPTOR.message_types_by_name['Resources'] = _RESOURCES _sym_db.RegisterFileDescriptor(DESCRIPTOR) -Client = _reflection.GeneratedProtocolMessageType('Client', (_message.Message,), dict( - DESCRIPTOR = _CLIENT, +SinkNode = _reflection.GeneratedProtocolMessageType('SinkNode', (_message.Message,), dict( + + AnnotationsEntry = _reflection.GeneratedProtocolMessageType('AnnotationsEntry', (_message.Message,), dict( + DESCRIPTOR = _SINKNODE_ANNOTATIONSENTRY, + __module__ = 'mcp.v1alpha1.mcp_pb2' + # @@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.SinkNode.AnnotationsEntry) + )) + , + DESCRIPTOR = _SINKNODE, __module__ = 'mcp.v1alpha1.mcp_pb2' - # @@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.Client) + # @@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.SinkNode) )) -_sym_db.RegisterMessage(Client) +_sym_db.RegisterMessage(SinkNode) +_sym_db.RegisterMessage(SinkNode.AnnotationsEntry) MeshConfigRequest = _reflection.GeneratedProtocolMessageType('MeshConfigRequest', (_message.Message,), dict( DESCRIPTOR = _MESHCONFIGREQUEST, @@ -385,15 +592,43 @@ )) _sym_db.RegisterMessage(IncrementalMeshConfigResponse) +RequestResources = _reflection.GeneratedProtocolMessageType('RequestResources', (_message.Message,), dict( + + InitialResourceVersionsEntry = 
_reflection.GeneratedProtocolMessageType('InitialResourceVersionsEntry', (_message.Message,), dict( + DESCRIPTOR = _REQUESTRESOURCES_INITIALRESOURCEVERSIONSENTRY, + __module__ = 'mcp.v1alpha1.mcp_pb2' + # @@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.RequestResources.InitialResourceVersionsEntry) + )) + , + DESCRIPTOR = _REQUESTRESOURCES, + __module__ = 'mcp.v1alpha1.mcp_pb2' + # @@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.RequestResources) + )) +_sym_db.RegisterMessage(RequestResources) +_sym_db.RegisterMessage(RequestResources.InitialResourceVersionsEntry) + +Resources = _reflection.GeneratedProtocolMessageType('Resources', (_message.Message,), dict( + DESCRIPTOR = _RESOURCES, + __module__ = 'mcp.v1alpha1.mcp_pb2' + # @@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.Resources) + )) +_sym_db.RegisterMessage(Resources) + DESCRIPTOR.has_options = True DESCRIPTOR._options = _descriptor._ParseOptions(descriptor_pb2.FileOptions(), _b('Z\031istio.io/api/mcp/v1alpha1\250\342\036\001')) -_MESHCONFIGRESPONSE.fields_by_name['envelopes'].has_options = True -_MESHCONFIGRESPONSE.fields_by_name['envelopes']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\310\336\037\000')) +_SINKNODE_ANNOTATIONSENTRY.has_options = True +_SINKNODE_ANNOTATIONSENTRY._options = _descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')) +_MESHCONFIGRESPONSE.fields_by_name['resources'].has_options = True +_MESHCONFIGRESPONSE.fields_by_name['resources']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\310\336\037\000')) _INCREMENTALMESHCONFIGREQUEST_INITIALRESOURCEVERSIONSENTRY.has_options = True _INCREMENTALMESHCONFIGREQUEST_INITIALRESOURCEVERSIONSENTRY._options = _descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')) -_INCREMENTALMESHCONFIGRESPONSE.fields_by_name['envelopes'].has_options = True -_INCREMENTALMESHCONFIGRESPONSE.fields_by_name['envelopes']._options = 
_descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\310\336\037\000')) +_INCREMENTALMESHCONFIGRESPONSE.fields_by_name['resources'].has_options = True +_INCREMENTALMESHCONFIGRESPONSE.fields_by_name['resources']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\310\336\037\000')) +_REQUESTRESOURCES_INITIALRESOURCEVERSIONSENTRY.has_options = True +_REQUESTRESOURCES_INITIALRESOURCEVERSIONSENTRY._options = _descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')) +_RESOURCES.fields_by_name['resources'].has_options = True +_RESOURCES.fields_by_name['resources']._options = _descriptor._ParseOptions(descriptor_pb2.FieldOptions(), _b('\310\336\037\000')) _AGGREGATEDMESHCONFIGSERVICE = _descriptor.ServiceDescriptor( name='AggregatedMeshConfigService', @@ -401,8 +636,8 @@ file=DESCRIPTOR, index=0, options=None, - serialized_start=1022, - serialized_end=1307, + serialized_start=1564, + serialized_end=1849, methods=[ _descriptor.MethodDescriptor( name='StreamAggregatedResources', @@ -427,4 +662,52 @@ DESCRIPTOR.services_by_name['AggregatedMeshConfigService'] = _AGGREGATEDMESHCONFIGSERVICE + +_RESOURCESOURCE = _descriptor.ServiceDescriptor( + name='ResourceSource', + full_name='istio.mcp.v1alpha1.ResourceSource', + file=DESCRIPTOR, + index=1, + options=None, + serialized_start=1851, + serialized_end=1969, + methods=[ + _descriptor.MethodDescriptor( + name='EstablishResourceStream', + full_name='istio.mcp.v1alpha1.ResourceSource.EstablishResourceStream', + index=0, + containing_service=None, + input_type=_REQUESTRESOURCES, + output_type=_RESOURCES, + options=None, + ), +]) +_sym_db.RegisterServiceDescriptor(_RESOURCESOURCE) + +DESCRIPTOR.services_by_name['ResourceSource'] = _RESOURCESOURCE + + +_RESOURCESINK = _descriptor.ServiceDescriptor( + name='ResourceSink', + full_name='istio.mcp.v1alpha1.ResourceSink', + file=DESCRIPTOR, + index=2, + options=None, + serialized_start=1971, + serialized_end=2087, + methods=[ + 
_descriptor.MethodDescriptor( + name='EstablishResourceStream', + full_name='istio.mcp.v1alpha1.ResourceSink.EstablishResourceStream', + index=0, + containing_service=None, + input_type=_RESOURCES, + output_type=_REQUESTRESOURCES, + options=None, + ), +]) +_sym_db.RegisterServiceDescriptor(_RESOURCESINK) + +DESCRIPTOR.services_by_name['ResourceSink'] = _RESOURCESINK + # @@protoc_insertion_point(module_scope) diff --git a/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/metadata_pb2.py b/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/metadata_pb2.py index 0ed149b57977..3be0202f3c32 100644 --- a/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/metadata_pb2.py +++ b/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/metadata_pb2.py @@ -15,19 +15,94 @@ from gogoproto import gogo_pb2 as gogoproto_dot_gogo__pb2 from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 +from google.protobuf import struct_pb2 as google_dot_protobuf_dot_struct__pb2 DESCRIPTOR = _descriptor.FileDescriptor( name='mcp/v1alpha1/metadata.proto', package='istio.mcp.v1alpha1', syntax='proto3', - serialized_pb=_b('\n\x1bmcp/v1alpha1/metadata.proto\x12\x12istio.mcp.v1alpha1\x1a\x14gogoproto/gogo.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"Z\n\x08Metadata\x12\x0c\n\x04name\x18\x01 \x01(\t\x12/\n\x0b\x63reate_time\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x0f\n\x07version\x18\x03 \x01(\tB\x1fZ\x19istio.io/api/mcp/v1alpha1\xa8\xe2\x1e\x01\x62\x06proto3') + serialized_pb=_b('\n\x1bmcp/v1alpha1/metadata.proto\x12\x12istio.mcp.v1alpha1\x1a\x14gogoproto/gogo.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x1cgoogle/protobuf/struct.proto\"\xbb\x02\n\x08Metadata\x12\x0c\n\x04name\x18\x01 \x01(\t\x12/\n\x0b\x63reate_time\x18\x02 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x0f\n\x07version\x18\x03 \x01(\t\x12\x38\n\x06labels\x18\x04 \x03(\x0b\x32(.istio.mcp.v1alpha1.Metadata.LabelsEntry\x12\x42\n\x0b\x61nnotations\x18\x05 
\x03(\x0b\x32-.istio.mcp.v1alpha1.Metadata.AnnotationsEntry\x1a-\n\x0bLabelsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\x1a\x32\n\x10\x41nnotationsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\x42\x1fZ\x19istio.io/api/mcp/v1alpha1\xa8\xe2\x1e\x01\x62\x06proto3') , - dependencies=[gogoproto_dot_gogo__pb2.DESCRIPTOR,google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,]) + dependencies=[gogoproto_dot_gogo__pb2.DESCRIPTOR,google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR,google_dot_protobuf_dot_struct__pb2.DESCRIPTOR,]) +_METADATA_LABELSENTRY = _descriptor.Descriptor( + name='LabelsEntry', + full_name='istio.mcp.v1alpha1.Metadata.LabelsEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ + _descriptor.FieldDescriptor( + name='key', full_name='istio.mcp.v1alpha1.Metadata.LabelsEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='value', full_name='istio.mcp.v1alpha1.Metadata.LabelsEntry.value', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=_descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')), + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=355, + serialized_end=400, +) + +_METADATA_ANNOTATIONSENTRY = _descriptor.Descriptor( + name='AnnotationsEntry', + full_name='istio.mcp.v1alpha1.Metadata.AnnotationsEntry', + filename=None, + file=DESCRIPTOR, + containing_type=None, + fields=[ 
+ _descriptor.FieldDescriptor( + name='key', full_name='istio.mcp.v1alpha1.Metadata.AnnotationsEntry.key', index=0, + number=1, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='value', full_name='istio.mcp.v1alpha1.Metadata.AnnotationsEntry.value', index=1, + number=2, type=9, cpp_type=9, label=1, + has_default_value=False, default_value=_b("").decode('utf-8'), + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + ], + extensions=[ + ], + nested_types=[], + enum_types=[ + ], + options=_descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')), + is_extendable=False, + syntax='proto3', + extension_ranges=[], + oneofs=[ + ], + serialized_start=402, + serialized_end=452, +) + _METADATA = _descriptor.Descriptor( name='Metadata', full_name='istio.mcp.v1alpha1.Metadata', @@ -56,10 +131,24 @@ message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='labels', full_name='istio.mcp.v1alpha1.Metadata.labels', index=3, + number=4, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), + _descriptor.FieldDescriptor( + name='annotations', full_name='istio.mcp.v1alpha1.Metadata.annotations', index=4, + number=5, type=11, cpp_type=10, label=3, + has_default_value=False, default_value=[], + message_type=None, enum_type=None, containing_type=None, + is_extension=False, extension_scope=None, + options=None, file=DESCRIPTOR), ], extensions=[ ], - nested_types=[], + nested_types=[_METADATA_LABELSENTRY, 
_METADATA_ANNOTATIONSENTRY, ], enum_types=[ ], options=None, @@ -68,22 +157,46 @@ extension_ranges=[], oneofs=[ ], - serialized_start=106, - serialized_end=196, + serialized_start=137, + serialized_end=452, ) +_METADATA_LABELSENTRY.containing_type = _METADATA +_METADATA_ANNOTATIONSENTRY.containing_type = _METADATA _METADATA.fields_by_name['create_time'].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP +_METADATA.fields_by_name['labels'].message_type = _METADATA_LABELSENTRY +_METADATA.fields_by_name['annotations'].message_type = _METADATA_ANNOTATIONSENTRY DESCRIPTOR.message_types_by_name['Metadata'] = _METADATA _sym_db.RegisterFileDescriptor(DESCRIPTOR) Metadata = _reflection.GeneratedProtocolMessageType('Metadata', (_message.Message,), dict( + + LabelsEntry = _reflection.GeneratedProtocolMessageType('LabelsEntry', (_message.Message,), dict( + DESCRIPTOR = _METADATA_LABELSENTRY, + __module__ = 'mcp.v1alpha1.metadata_pb2' + # @@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.Metadata.LabelsEntry) + )) + , + + AnnotationsEntry = _reflection.GeneratedProtocolMessageType('AnnotationsEntry', (_message.Message,), dict( + DESCRIPTOR = _METADATA_ANNOTATIONSENTRY, + __module__ = 'mcp.v1alpha1.metadata_pb2' + # @@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.Metadata.AnnotationsEntry) + )) + , DESCRIPTOR = _METADATA, __module__ = 'mcp.v1alpha1.metadata_pb2' # @@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.Metadata) )) _sym_db.RegisterMessage(Metadata) +_sym_db.RegisterMessage(Metadata.LabelsEntry) +_sym_db.RegisterMessage(Metadata.AnnotationsEntry) DESCRIPTOR.has_options = True DESCRIPTOR._options = _descriptor._ParseOptions(descriptor_pb2.FileOptions(), _b('Z\031istio.io/api/mcp/v1alpha1\250\342\036\001')) +_METADATA_LABELSENTRY.has_options = True +_METADATA_LABELSENTRY._options = _descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')) +_METADATA_ANNOTATIONSENTRY.has_options = True +_METADATA_ANNOTATIONSENTRY._options = 
_descriptor._ParseOptions(descriptor_pb2.MessageOptions(), _b('8\001')) # @@protoc_insertion_point(module_scope) diff --git a/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/envelope_pb2.py b/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/resource_pb2.py similarity index 65% rename from vendor/istio.io/api/python/istio_api/mcp/v1alpha1/envelope_pb2.py rename to vendor/istio.io/api/python/istio_api/mcp/v1alpha1/resource_pb2.py index 7357c432b9c9..ce5615342dd0 100644 --- a/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/envelope_pb2.py +++ b/vendor/istio.io/api/python/istio_api/mcp/v1alpha1/resource_pb2.py @@ -1,5 +1,5 @@ # Generated by the protocol buffer compiler. DO NOT EDIT! -# source: mcp/v1alpha1/envelope.proto +# source: mcp/v1alpha1/resource.proto import sys _b=sys.version_info[0]<3 and (lambda x:x) or (lambda x:x.encode('latin1')) @@ -19,32 +19,32 @@ DESCRIPTOR = _descriptor.FileDescriptor( - name='mcp/v1alpha1/envelope.proto', + name='mcp/v1alpha1/resource.proto', package='istio.mcp.v1alpha1', syntax='proto3', - serialized_pb=_b('\n\x1bmcp/v1alpha1/envelope.proto\x12\x12istio.mcp.v1alpha1\x1a\x19google/protobuf/any.proto\x1a\x14gogoproto/gogo.proto\x1a\x1bmcp/v1alpha1/metadata.proto\"b\n\x08\x45nvelope\x12.\n\x08metadata\x18\x01 \x01(\x0b\x32\x1c.istio.mcp.v1alpha1.Metadata\x12&\n\x08resource\x18\x02 \x01(\x0b\x32\x14.google.protobuf.AnyB\x1fZ\x19istio.io/api/mcp/v1alpha1\xa8\xe2\x1e\x01\x62\x06proto3') + serialized_pb=_b('\n\x1bmcp/v1alpha1/resource.proto\x12\x12istio.mcp.v1alpha1\x1a\x19google/protobuf/any.proto\x1a\x14gogoproto/gogo.proto\x1a\x1bmcp/v1alpha1/metadata.proto\"^\n\x08Resource\x12.\n\x08metadata\x18\x01 \x01(\x0b\x32\x1c.istio.mcp.v1alpha1.Metadata\x12\"\n\x04\x62ody\x18\x02 \x01(\x0b\x32\x14.google.protobuf.AnyB\x1fZ\x19istio.io/api/mcp/v1alpha1\xa8\xe2\x1e\x01\x62\x06proto3') , dependencies=[google_dot_protobuf_dot_any__pb2.DESCRIPTOR,gogoproto_dot_gogo__pb2.DESCRIPTOR,mcp_dot_v1alpha1_dot_metadata__pb2.DESCRIPTOR,]) -_ENVELOPE = 
_descriptor.Descriptor( - name='Envelope', - full_name='istio.mcp.v1alpha1.Envelope', +_RESOURCE = _descriptor.Descriptor( + name='Resource', + full_name='istio.mcp.v1alpha1.Resource', filename=None, file=DESCRIPTOR, containing_type=None, fields=[ _descriptor.FieldDescriptor( - name='metadata', full_name='istio.mcp.v1alpha1.Envelope.metadata', index=0, + name='metadata', full_name='istio.mcp.v1alpha1.Resource.metadata', index=0, number=1, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, is_extension=False, extension_scope=None, options=None, file=DESCRIPTOR), _descriptor.FieldDescriptor( - name='resource', full_name='istio.mcp.v1alpha1.Envelope.resource', index=1, + name='body', full_name='istio.mcp.v1alpha1.Resource.body', index=1, number=2, type=11, cpp_type=10, label=1, has_default_value=False, default_value=None, message_type=None, enum_type=None, containing_type=None, @@ -63,20 +63,20 @@ oneofs=[ ], serialized_start=129, - serialized_end=227, + serialized_end=223, ) -_ENVELOPE.fields_by_name['metadata'].message_type = mcp_dot_v1alpha1_dot_metadata__pb2._METADATA -_ENVELOPE.fields_by_name['resource'].message_type = google_dot_protobuf_dot_any__pb2._ANY -DESCRIPTOR.message_types_by_name['Envelope'] = _ENVELOPE +_RESOURCE.fields_by_name['metadata'].message_type = mcp_dot_v1alpha1_dot_metadata__pb2._METADATA +_RESOURCE.fields_by_name['body'].message_type = google_dot_protobuf_dot_any__pb2._ANY +DESCRIPTOR.message_types_by_name['Resource'] = _RESOURCE _sym_db.RegisterFileDescriptor(DESCRIPTOR) -Envelope = _reflection.GeneratedProtocolMessageType('Envelope', (_message.Message,), dict( - DESCRIPTOR = _ENVELOPE, - __module__ = 'mcp.v1alpha1.envelope_pb2' - # @@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.Envelope) +Resource = _reflection.GeneratedProtocolMessageType('Resource', (_message.Message,), dict( + DESCRIPTOR = _RESOURCE, + __module__ = 'mcp.v1alpha1.resource_pb2' + # 
@@protoc_insertion_point(class_scope:istio.mcp.v1alpha1.Resource) )) -_sym_db.RegisterMessage(Envelope) +_sym_db.RegisterMessage(Resource) DESCRIPTOR.has_options = True diff --git a/vendor/istio.io/api/rbac/v1alpha1/istio.rbac.v1alpha1.pb.html b/vendor/istio.io/api/rbac/v1alpha1/istio.rbac.v1alpha1.pb.html index c37ceef23ef7..edc324f0e837 100644 --- a/vendor/istio.io/api/rbac/v1alpha1/istio.rbac.v1alpha1.pb.html +++ b/vendor/istio.io/api/rbac/v1alpha1/istio.rbac.v1alpha1.pb.html @@ -106,11 +106,10 @@

AccessRule

Optional. A list of HTTP paths or gRPC methods. gRPC methods must be presented as fully-qualified name in the form of “/packageName.serviceName/methodName” and are case sensitive. -Exact match, prefix match, and suffix match are supported for paths. -For example, the path “/books/review” matches -“/books/review” (exact match), or “/books/” (prefix match), -or “/review” (suffix match). -If not specified, it applies to any path.

+Exact match, prefix match, and suffix match are supported. For example, +the path “/books/review” matches “/books/review” (exact match), +or “/books/” (prefix match), or “/review” (suffix match). +If not specified, it matches any path.
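The exact/prefix/suffix semantics described above for `paths` (and reused later for constraint values) can be sketched in a few lines. This is an illustrative helper under assumed semantics, not Istio's actual authorization engine; the name `rule_value_matches` is hypothetical, and treating an unset rule value as matching anything follows the doc text.

```python
def rule_value_matches(rule_value: str, request_value: str) -> bool:
    """Sketch of the exact/prefix/suffix match semantics described in the
    AccessRule docs. Hypothetical helper, not the real Istio implementation."""
    if not rule_value:
        # An unspecified rule value matches any request value.
        return True
    return (request_value == rule_value              # exact match
            or request_value.startswith(rule_value)  # prefix match
            or request_value.endswith(rule_value))   # suffix match
```

Under this sketch, the documented example holds: the request path "/books/review" is matched by the rule values "/books/review" (exact), "/books/" (prefix), and "/review" (suffix).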

@@ -120,7 +119,7 @@

AccessRule

@@ -128,8 +127,7 @@

AccessRule

@@ -162,10 +160,9 @@

AccessRule.Constraint

@@ -335,8 +332,7 @@

RoleRef

@@ -345,9 +341,7 @@

RoleRef

ServiceRole

-

ServiceRole specification contains a list of access rules (permissions). -This represent the “Spec” part of the ServiceRole object. The name and namespace -of the ServiceRole is specified in “metadata” section of the ServiceRole object.

+

ServiceRole specification contains a list of access rules (permissions).

Optional. A list of HTTP methods (e.g., “GET”, “POST”). It is ignored in gRPC case because the value is always “POST”. -If set to [“*”] or not specified, it applies to any method.

+If not specified, it matches any method.

constraints AccessRule.Constraint[] -

Optional. Extra constraints in the ServiceRole specification. -The above ServiceRole example shows an example of constraint “version”.

+

Optional. Extra constraints in the ServiceRole specification.

string[]

 List of valid values for the constraint.
-Exact match, prefix match, and suffix match are supported for constraint values.
-For example, the value “v1alpha2” matches
-“v1alpha2” (exact match), or “v1*” (prefix match),
-or “*alpha2” (suffix match).
+Exact match, prefix match, and suffix match are supported.
+For example, the value “v1alpha2” matches “v1alpha2” (exact match),
+or “v1*” (prefix match), or “*alpha2” (suffix match).

string

 Required. The name of the ServiceRole object being referenced.
-The ServiceRole object must be in the same namespace as the ServiceRoleBinding
-object.
+The ServiceRole object must be in the same namespace as the ServiceRoleBinding object.

@@ -371,10 +365,7 @@

ServiceRole

ServiceRoleBinding

-
-ServiceRoleBinding assigns a ServiceRole to a list of subjects.
-This represents the “Spec” part of the ServiceRoleBinding object. The name and namespace
-of the ServiceRoleBinding is specified in “metadata” section of the ServiceRoleBinding
-object.
+
+ServiceRoleBinding assigns a ServiceRole to a list of subjects.
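As a rough illustration of what "assigns a ServiceRole to a list of subjects" means, a binding can be modeled as granting the referenced role to any request whose subject is listed. The type and field names below mirror the proto, but the evaluator is our own simplification for this sketch (real Istio RBAC also matches subject properties, groups, and so on):

```go
package main

import "fmt"

// Simplified stand-ins for the generated types; only Subject.User is modeled.
type Subject struct{ User string }

type RoleRef struct{ Kind, Name string }

type ServiceRoleBinding struct {
	Subjects []*Subject
	RoleRef  *RoleRef
}

// rolesFor collects the names of ServiceRoles whose bindings list the user
// as a subject. Per the doc comments, "ServiceRole" is currently the only
// supported RoleRef kind.
func rolesFor(user string, bindings []*ServiceRoleBinding) []string {
	var roles []string
	for _, b := range bindings {
		if b.RoleRef == nil || b.RoleRef.Kind != "ServiceRole" {
			continue
		}
		for _, s := range b.Subjects {
			if s.User == user {
				roles = append(roles, b.RoleRef.Name)
				break
			}
		}
	}
	return roles
}

func main() {
	bindings := []*ServiceRoleBinding{{
		Subjects: []*Subject{{User: "alice"}, {User: "bob"}},
		RoleRef:  &RoleRef{Kind: "ServiceRole", Name: "book-reader"},
	}}
	fmt.Println(rolesFor("alice", bindings))
}
```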

@@ -430,8 +421,7 @@

Subject

diff --git a/vendor/istio.io/api/rbac/v1alpha1/rbac.pb.go b/vendor/istio.io/api/rbac/v1alpha1/rbac.pb.go index b4f490610be4..b194986bed8d 100644 --- a/vendor/istio.io/api/rbac/v1alpha1/rbac.pb.go +++ b/vendor/istio.io/api/rbac/v1alpha1/rbac.pb.go @@ -71,6 +71,8 @@ rbac/v1alpha1/rbac.proto It has these top-level messages: + WorkloadSelector + AuthorizationPolicy ServiceRole AccessRule ServiceRoleBinding @@ -159,11 +161,74 @@ var RbacConfig_Mode_value = map[string]int32{ func (x RbacConfig_Mode) String() string { return proto.EnumName(RbacConfig_Mode_name, int32(x)) } -func (RbacConfig_Mode) EnumDescriptor() ([]byte, []int) { return fileDescriptorRbac, []int{5, 0} } +func (RbacConfig_Mode) EnumDescriptor() ([]byte, []int) { return fileDescriptorRbac, []int{7, 0} } + +// $hide_from_docs +// This is forked from the networking/v1alpha3/sidecar.proto to avoid a direct +// dependency from the rbac API on networking API. +// TODO: Move the WorkloadSelector to a common place to be shared by other packages. +// WorkloadSelector specifies the criteria used to determine if the Gateway +// or Sidecar resource can be applied to a proxy. The matching criteria +// includes the metadata associated with a proxy, workload info such as +// labels attached to the pod/VM, or any other info that the proxy provides +// to Istio during the initial handshake. If multiple conditions are +// specified, all conditions need to match in order for the workload to be +// selected. Currently, only label based selection mechanism is supported. +type WorkloadSelector struct { + // One or more labels that indicate a specific set of pods/VMs on which + // this sidecar configuration should be applied. The scope of label + // search is restricted to the configuration namespace in which the the + // resource is present. 
+ Labels map[string]string `protobuf:"bytes,1,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` +} + +func (m *WorkloadSelector) Reset() { *m = WorkloadSelector{} } +func (m *WorkloadSelector) String() string { return proto.CompactTextString(m) } +func (*WorkloadSelector) ProtoMessage() {} +func (*WorkloadSelector) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{0} } + +func (m *WorkloadSelector) GetLabels() map[string]string { + if m != nil { + return m.Labels + } + return nil +} + +// $hide_from_docs +// AuthorizationPolicy to enforce access control on a selected workload. +type AuthorizationPolicy struct { + // $hide_from_docs + // Optional. One or more labels that indicate a specific set of pods/VMs + // on which this authorization policy should be applied. Leave this empty to + // select all pods/VMs. + // The scope of label search is platform dependent. On Kubernetes, for example, + // the scope includes pods running in the same namespace as the authorization policy itself. + WorkloadSelector *WorkloadSelector `protobuf:"bytes,1,opt,name=workload_selector,json=workloadSelector" json:"workload_selector,omitempty"` + // $hide_from_docs + // A list of bindings that specify the subjects and permissions to the selected workload. 
+ Allow []*ServiceRoleBinding `protobuf:"bytes,2,rep,name=allow" json:"allow,omitempty"` +} + +func (m *AuthorizationPolicy) Reset() { *m = AuthorizationPolicy{} } +func (m *AuthorizationPolicy) String() string { return proto.CompactTextString(m) } +func (*AuthorizationPolicy) ProtoMessage() {} +func (*AuthorizationPolicy) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{1} } + +func (m *AuthorizationPolicy) GetWorkloadSelector() *WorkloadSelector { + if m != nil { + return m.WorkloadSelector + } + return nil +} + +func (m *AuthorizationPolicy) GetAllow() []*ServiceRoleBinding { + if m != nil { + return m.Allow + } + return nil +} // ServiceRole specification contains a list of access rules (permissions). -// This represent the "Spec" part of the ServiceRole object. The name and namespace -// of the ServiceRole is specified in "metadata" section of the ServiceRole object. type ServiceRole struct { // Required. The set of access rules (permissions) that the role has. Rules []*AccessRule `protobuf:"bytes,1,rep,name=rules" json:"rules,omitempty"` @@ -172,7 +237,7 @@ type ServiceRole struct { func (m *ServiceRole) Reset() { *m = ServiceRole{} } func (m *ServiceRole) String() string { return proto.CompactTextString(m) } func (*ServiceRole) ProtoMessage() {} -func (*ServiceRole) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{0} } +func (*ServiceRole) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{2} } func (m *ServiceRole) GetRules() []*AccessRule { if m != nil { @@ -190,28 +255,52 @@ type AccessRule struct { // or "*.mtv.cluster.local" (suffix match). // If set to ["*"], it refers to all services in the namespace. Services []string `protobuf:"bytes,1,rep,name=services" json:"services,omitempty"` + // $hide_from_docs + // Optional. A list of HTTP hosts. This is matched against the HOST header in + // a HTTP request. Exact match, prefix match and suffix match are supported. 
+ // For example, the host "test.abc.com" matches "test.abc.com" (exact match), + // or "*.abc.com" (prefix match), or "test.abc.*" (suffix match). + // If not specified, it matches to any host. + Hosts []string `protobuf:"bytes,5,rep,name=hosts" json:"hosts,omitempty"` + // $hide_from_docs + // Optional. A list of HTTP hosts that must not be matched. + NotHosts []string `protobuf:"bytes,6,rep,name=not_hosts,json=notHosts" json:"not_hosts,omitempty"` // Optional. A list of HTTP paths or gRPC methods. // gRPC methods must be presented as fully-qualified name in the form of // "/packageName.serviceName/methodName" and are case sensitive. - // Exact match, prefix match, and suffix match are supported for paths. - // For example, the path "/books/review" matches - // "/books/review" (exact match), or "/books/*" (prefix match), - // or "*/review" (suffix match). - // If not specified, it applies to any path. + // Exact match, prefix match, and suffix match are supported. For example, + // the path "/books/review" matches "/books/review" (exact match), + // or "/books/*" (prefix match), or "*/review" (suffix match). + // If not specified, it matches to any path. Paths []string `protobuf:"bytes,2,rep,name=paths" json:"paths,omitempty"` + // $hide_from_docs + // Optional. A list of HTTP paths or gRPC methods that must not be matched. + NotPaths []string `protobuf:"bytes,7,rep,name=not_paths,json=notPaths" json:"not_paths,omitempty"` // Optional. A list of HTTP methods (e.g., "GET", "POST"). // It is ignored in gRPC case because the value is always "POST". - // If set to ["*"] or not specified, it applies to any method. + // If not specified, it matches to any methods. Methods []string `protobuf:"bytes,3,rep,name=methods" json:"methods,omitempty"` + // $hide_from_docs + // Optional. A list of HTTP methods that must not be matched. + // Note: It's an error to set methods and not_methods at the same time. 
+ NotMethods []string `protobuf:"bytes,8,rep,name=not_methods,json=notMethods" json:"not_methods,omitempty"` + // $hide_from_docs + // Optional. A list of port numbers of the request. If not specified, it matches + // to any port number. + // Note: It's an error to set ports and not_ports at the same time. + Ports []int32 `protobuf:"varint,9,rep,packed,name=ports" json:"ports,omitempty"` + // $hide_from_docs + // Optional. A list of port numbers that must not be matched. + // Note: It's an error to set ports and not_ports at the same time. + NotPorts []int32 `protobuf:"varint,10,rep,packed,name=not_ports,json=notPorts" json:"not_ports,omitempty"` // Optional. Extra constraints in the ServiceRole specification. - // The above ServiceRole example shows an example of constraint "version". Constraints []*AccessRule_Constraint `protobuf:"bytes,4,rep,name=constraints" json:"constraints,omitempty"` } func (m *AccessRule) Reset() { *m = AccessRule{} } func (m *AccessRule) String() string { return proto.CompactTextString(m) } func (*AccessRule) ProtoMessage() {} -func (*AccessRule) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{1} } +func (*AccessRule) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{3} } func (m *AccessRule) GetServices() []string { if m != nil { @@ -220,6 +309,20 @@ func (m *AccessRule) GetServices() []string { return nil } +func (m *AccessRule) GetHosts() []string { + if m != nil { + return m.Hosts + } + return nil +} + +func (m *AccessRule) GetNotHosts() []string { + if m != nil { + return m.NotHosts + } + return nil +} + func (m *AccessRule) GetPaths() []string { if m != nil { return m.Paths @@ -227,6 +330,13 @@ func (m *AccessRule) GetPaths() []string { return nil } +func (m *AccessRule) GetNotPaths() []string { + if m != nil { + return m.NotPaths + } + return nil +} + func (m *AccessRule) GetMethods() []string { if m != nil { return m.Methods @@ -234,6 +344,27 @@ func (m *AccessRule) GetMethods() []string { return nil 
} +func (m *AccessRule) GetNotMethods() []string { + if m != nil { + return m.NotMethods + } + return nil +} + +func (m *AccessRule) GetPorts() []int32 { + if m != nil { + return m.Ports + } + return nil +} + +func (m *AccessRule) GetNotPorts() []int32 { + if m != nil { + return m.NotPorts + } + return nil +} + func (m *AccessRule) GetConstraints() []*AccessRule_Constraint { if m != nil { return m.Constraints @@ -246,17 +377,16 @@ type AccessRule_Constraint struct { // Key of the constraint. Key string `protobuf:"bytes,1,opt,name=key,proto3" json:"key,omitempty"` // List of valid values for the constraint. - // Exact match, prefix match, and suffix match are supported for constraint values. - // For example, the value "v1alpha2" matches - // "v1alpha2" (exact match), or "v1*" (prefix match), - // or "*alpha2" (suffix match). + // Exact match, prefix match, and suffix match are supported. + // For example, the value "v1alpha2" matches "v1alpha2" (exact match), + // or "v1*" (prefix match), or "*alpha2" (suffix match). Values []string `protobuf:"bytes,2,rep,name=values" json:"values,omitempty"` } func (m *AccessRule_Constraint) Reset() { *m = AccessRule_Constraint{} } func (m *AccessRule_Constraint) String() string { return proto.CompactTextString(m) } func (*AccessRule_Constraint) ProtoMessage() {} -func (*AccessRule_Constraint) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{1, 0} } +func (*AccessRule_Constraint) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{3, 0} } func (m *AccessRule_Constraint) GetKey() string { if m != nil { @@ -273,9 +403,6 @@ func (m *AccessRule_Constraint) GetValues() []string { } // ServiceRoleBinding assigns a ServiceRole to a list of subjects. -// This represents the "Spec" part of the ServiceRoleBinding object. The name and namespace -// of the ServiceRoleBinding is specified in "metadata" section of the ServiceRoleBinding -// object. type ServiceRoleBinding struct { // Required. 
List of subjects that are assigned the ServiceRole object. Subjects []*Subject `protobuf:"bytes,1,rep,name=subjects" json:"subjects,omitempty"` @@ -289,7 +416,7 @@ type ServiceRoleBinding struct { func (m *ServiceRoleBinding) Reset() { *m = ServiceRoleBinding{} } func (m *ServiceRoleBinding) String() string { return proto.CompactTextString(m) } func (*ServiceRoleBinding) ProtoMessage() {} -func (*ServiceRoleBinding) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{2} } +func (*ServiceRoleBinding) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{4} } func (m *ServiceRoleBinding) GetSubjects() []*Subject { if m != nil { @@ -318,17 +445,45 @@ type Subject struct { // Optional. The user name/ID that the subject represents. User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"` // $hide_from_docs + // Optional. A list of principals that the subject represents. This is matched to the + // `source.principal` attribute. If not specified, it applies to any principals. + Principals []string `protobuf:"bytes,4,rep,name=principals" json:"principals,omitempty"` + // $hide_from_docs + // Optional. A list of principals that must not be matched. + NotPrincipals []string `protobuf:"bytes,5,rep,name=not_principals,json=notPrincipals" json:"not_principals,omitempty"` + // $hide_from_docs // Optional. The group that the subject belongs to. + // Deprecated. Use groups and not_groups instead. Group string `protobuf:"bytes,2,opt,name=group,proto3" json:"group,omitempty"` + // $hide_from_docs + // Optional. A list of groups that the subject represents. This is matched to the + // `request.auth.claims[groups]` attribute. If not specified, it applies to any groups. + Groups []string `protobuf:"bytes,6,rep,name=groups" json:"groups,omitempty"` + // $hide_from_docs + // Optional. A list of groups that must not be matched. 
+ NotGroups []string `protobuf:"bytes,7,rep,name=not_groups,json=notGroups" json:"not_groups,omitempty"` + // $hide_from_docs + // Optional. A list of namespaces that the subject represents. This is matched to + // the `source.namespace` attribute. If not specified, it applies to any namespaces. + Namespaces []string `protobuf:"bytes,8,rep,name=namespaces" json:"namespaces,omitempty"` + // $hide_from_docs + // Optional. A list of namespaces that must not be matched. + NotNamespaces []string `protobuf:"bytes,9,rep,name=not_namespaces,json=notNamespaces" json:"not_namespaces,omitempty"` + // $hide_from_docs + // Optional. A list of IP address or CIDR ranges that the subject represents. + // E.g. 192.168.100.2 or 10.1.0.0/16. If not specified, it applies to any IP addresses. + Ips []string `protobuf:"bytes,10,rep,name=ips" json:"ips,omitempty"` + // $hide_from_docs + // Optional. A list of IP addresses or CIDR ranges that must not be matched. + NotIps []string `protobuf:"bytes,11,rep,name=not_ips,json=notIps" json:"not_ips,omitempty"` // Optional. The set of properties that identify the subject. - // The above ServiceRoleBinding example shows an example of property "source.namespace". 
Properties map[string]string `protobuf:"bytes,3,rep,name=properties" json:"properties,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"` } func (m *Subject) Reset() { *m = Subject{} } func (m *Subject) String() string { return proto.CompactTextString(m) } func (*Subject) ProtoMessage() {} -func (*Subject) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{3} } +func (*Subject) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{5} } func (m *Subject) GetUser() string { if m != nil { @@ -337,6 +492,20 @@ func (m *Subject) GetUser() string { return "" } +func (m *Subject) GetPrincipals() []string { + if m != nil { + return m.Principals + } + return nil +} + +func (m *Subject) GetNotPrincipals() []string { + if m != nil { + return m.NotPrincipals + } + return nil +} + func (m *Subject) GetGroup() string { if m != nil { return m.Group @@ -344,6 +513,48 @@ func (m *Subject) GetGroup() string { return "" } +func (m *Subject) GetGroups() []string { + if m != nil { + return m.Groups + } + return nil +} + +func (m *Subject) GetNotGroups() []string { + if m != nil { + return m.NotGroups + } + return nil +} + +func (m *Subject) GetNamespaces() []string { + if m != nil { + return m.Namespaces + } + return nil +} + +func (m *Subject) GetNotNamespaces() []string { + if m != nil { + return m.NotNamespaces + } + return nil +} + +func (m *Subject) GetIps() []string { + if m != nil { + return m.Ips + } + return nil +} + +func (m *Subject) GetNotIps() []string { + if m != nil { + return m.NotIps + } + return nil +} + func (m *Subject) GetProperties() map[string]string { if m != nil { return m.Properties @@ -357,15 +568,14 @@ type RoleRef struct { // Currently, "ServiceRole" is the only supported value for "kind". Kind string `protobuf:"bytes,1,opt,name=kind,proto3" json:"kind,omitempty"` // Required. The name of the ServiceRole object being referenced. 
- // The ServiceRole object must be in the same namespace as the ServiceRoleBinding - // object. + // The ServiceRole object must be in the same namespace as the ServiceRoleBinding object. Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"` } func (m *RoleRef) Reset() { *m = RoleRef{} } func (m *RoleRef) String() string { return proto.CompactTextString(m) } func (*RoleRef) ProtoMessage() {} -func (*RoleRef) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{4} } +func (*RoleRef) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{6} } func (m *RoleRef) GetKind() string { if m != nil { @@ -422,7 +632,7 @@ type RbacConfig struct { func (m *RbacConfig) Reset() { *m = RbacConfig{} } func (m *RbacConfig) String() string { return proto.CompactTextString(m) } func (*RbacConfig) ProtoMessage() {} -func (*RbacConfig) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{5} } +func (*RbacConfig) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{7} } func (m *RbacConfig) GetMode() RbacConfig_Mode { if m != nil { @@ -456,6 +666,9 @@ func (m *RbacConfig) GetEnforcementMode() EnforcementMode { type RbacConfig_Target struct { // A list of services. Services []string `protobuf:"bytes,1,rep,name=services" json:"services,omitempty"` + // $hide_from_docs + // A list of workloads. + WorkloadSelectors []*WorkloadSelector `protobuf:"bytes,3,rep,name=workload_selectors,json=workloadSelectors" json:"workload_selectors,omitempty"` // A list of namespaces. 
Namespaces []string `protobuf:"bytes,2,rep,name=namespaces" json:"namespaces,omitempty"` } @@ -463,7 +676,7 @@ type RbacConfig_Target struct { func (m *RbacConfig_Target) Reset() { *m = RbacConfig_Target{} } func (m *RbacConfig_Target) String() string { return proto.CompactTextString(m) } func (*RbacConfig_Target) ProtoMessage() {} -func (*RbacConfig_Target) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{5, 0} } +func (*RbacConfig_Target) Descriptor() ([]byte, []int) { return fileDescriptorRbac, []int{7, 0} } func (m *RbacConfig_Target) GetServices() []string { if m != nil { @@ -472,6 +685,13 @@ func (m *RbacConfig_Target) GetServices() []string { return nil } +func (m *RbacConfig_Target) GetWorkloadSelectors() []*WorkloadSelector { + if m != nil { + return m.WorkloadSelectors + } + return nil +} + func (m *RbacConfig_Target) GetNamespaces() []string { if m != nil { return m.Namespaces @@ -480,6 +700,8 @@ func (m *RbacConfig_Target) GetNamespaces() []string { } func init() { + proto.RegisterType((*WorkloadSelector)(nil), "istio.rbac.v1alpha1.WorkloadSelector") + proto.RegisterType((*AuthorizationPolicy)(nil), "istio.rbac.v1alpha1.AuthorizationPolicy") proto.RegisterType((*ServiceRole)(nil), "istio.rbac.v1alpha1.ServiceRole") proto.RegisterType((*AccessRule)(nil), "istio.rbac.v1alpha1.AccessRule") proto.RegisterType((*AccessRule_Constraint)(nil), "istio.rbac.v1alpha1.AccessRule.Constraint") @@ -491,6 +713,81 @@ func init() { proto.RegisterEnum("istio.rbac.v1alpha1.EnforcementMode", EnforcementMode_name, EnforcementMode_value) proto.RegisterEnum("istio.rbac.v1alpha1.RbacConfig_Mode", RbacConfig_Mode_name, RbacConfig_Mode_value) } +func (m *WorkloadSelector) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalTo(dAtA) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *WorkloadSelector) MarshalTo(dAtA []byte) (int, error) { + var i int + _ = i + var l int + _ = l + if 
len(m.Labels) > 0 { + for k, _ := range m.Labels { + dAtA[i] = 0xa + i++ + v := m.Labels[k] + mapSize := 1 + len(k) + sovRbac(uint64(len(k))) + 1 + len(v) + sovRbac(uint64(len(v))) + i = encodeVarintRbac(dAtA, i, uint64(mapSize)) + dAtA[i] = 0xa + i++ + i = encodeVarintRbac(dAtA, i, uint64(len(k))) + i += copy(dAtA[i:], k) + dAtA[i] = 0x12 + i++ + i = encodeVarintRbac(dAtA, i, uint64(len(v))) + i += copy(dAtA[i:], v) + } + } + return i, nil +} + +func (m *AuthorizationPolicy) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalTo(dAtA) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *AuthorizationPolicy) MarshalTo(dAtA []byte) (int, error) { + var i int + _ = i + var l int + _ = l + if m.WorkloadSelector != nil { + dAtA[i] = 0xa + i++ + i = encodeVarintRbac(dAtA, i, uint64(m.WorkloadSelector.Size())) + n1, err := m.WorkloadSelector.MarshalTo(dAtA[i:]) + if err != nil { + return 0, err + } + i += n1 + } + if len(m.Allow) > 0 { + for _, msg := range m.Allow { + dAtA[i] = 0x12 + i++ + i = encodeVarintRbac(dAtA, i, uint64(msg.Size())) + n, err := msg.MarshalTo(dAtA[i:]) + if err != nil { + return 0, err + } + i += n + } + } + return i, nil +} + func (m *ServiceRole) Marshal() (dAtA []byte, err error) { size := m.Size() dAtA = make([]byte, size) @@ -593,6 +890,102 @@ func (m *AccessRule) MarshalTo(dAtA []byte) (int, error) { i += n } } + if len(m.Hosts) > 0 { + for _, s := range m.Hosts { + dAtA[i] = 0x2a + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.NotHosts) > 0 { + for _, s := range m.NotHosts { + dAtA[i] = 0x32 + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.NotPaths) > 0 { + for _, s := range m.NotPaths { + dAtA[i] = 0x3a + i++ + l = len(s) + 
for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.NotMethods) > 0 { + for _, s := range m.NotMethods { + dAtA[i] = 0x42 + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.Ports) > 0 { + dAtA3 := make([]byte, len(m.Ports)*10) + var j2 int + for _, num1 := range m.Ports { + num := uint64(num1) + for num >= 1<<7 { + dAtA3[j2] = uint8(uint64(num)&0x7f | 0x80) + num >>= 7 + j2++ + } + dAtA3[j2] = uint8(num) + j2++ + } + dAtA[i] = 0x4a + i++ + i = encodeVarintRbac(dAtA, i, uint64(j2)) + i += copy(dAtA[i:], dAtA3[:j2]) + } + if len(m.NotPorts) > 0 { + dAtA5 := make([]byte, len(m.NotPorts)*10) + var j4 int + for _, num1 := range m.NotPorts { + num := uint64(num1) + for num >= 1<<7 { + dAtA5[j4] = uint8(uint64(num)&0x7f | 0x80) + num >>= 7 + j4++ + } + dAtA5[j4] = uint8(num) + j4++ + } + dAtA[i] = 0x52 + i++ + i = encodeVarintRbac(dAtA, i, uint64(j4)) + i += copy(dAtA[i:], dAtA5[:j4]) + } return i, nil } @@ -666,11 +1059,11 @@ func (m *ServiceRoleBinding) MarshalTo(dAtA []byte) (int, error) { dAtA[i] = 0x12 i++ i = encodeVarintRbac(dAtA, i, uint64(m.RoleRef.Size())) - n1, err := m.RoleRef.MarshalTo(dAtA[i:]) + n6, err := m.RoleRef.MarshalTo(dAtA[i:]) if err != nil { return 0, err } - i += n1 + i += n6 } if m.Mode != 0 { dAtA[i] = 0x18 @@ -724,21 +1117,141 @@ func (m *Subject) MarshalTo(dAtA []byte) (int, error) { i += copy(dAtA[i:], v) } } - return i, nil -} - -func (m *RoleRef) Marshal() (dAtA []byte, err error) { - size := m.Size() - dAtA = make([]byte, size) - n, err := m.MarshalTo(dAtA) - if err != nil { - return nil, err + if len(m.Principals) > 0 { + for _, s := range m.Principals { + dAtA[i] = 0x22 + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } } - 
return dAtA[:n], nil -} - -func (m *RoleRef) MarshalTo(dAtA []byte) (int, error) { - var i int + if len(m.NotPrincipals) > 0 { + for _, s := range m.NotPrincipals { + dAtA[i] = 0x2a + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.Groups) > 0 { + for _, s := range m.Groups { + dAtA[i] = 0x32 + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.NotGroups) > 0 { + for _, s := range m.NotGroups { + dAtA[i] = 0x3a + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.Namespaces) > 0 { + for _, s := range m.Namespaces { + dAtA[i] = 0x42 + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.NotNamespaces) > 0 { + for _, s := range m.NotNamespaces { + dAtA[i] = 0x4a + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.Ips) > 0 { + for _, s := range m.Ips { + dAtA[i] = 0x52 + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + if len(m.NotIps) > 0 { + for _, s := range m.NotIps { + dAtA[i] = 0x5a + i++ + l = len(s) + for l >= 1<<7 { + dAtA[i] = uint8(uint64(l)&0x7f | 0x80) + l >>= 7 + i++ + } + dAtA[i] = uint8(l) + i++ + i += copy(dAtA[i:], s) + } + } + return i, nil +} + +func (m *RoleRef) Marshal() (dAtA []byte, err error) { + size := m.Size() + dAtA = make([]byte, size) + n, err := m.MarshalTo(dAtA) + if err != nil { + return nil, err + } + return dAtA[:n], nil +} + +func (m *RoleRef) 
MarshalTo(dAtA []byte) (int, error) { + var i int _ = i var l int _ = l @@ -781,21 +1294,21 @@ func (m *RbacConfig) MarshalTo(dAtA []byte) (int, error) { dAtA[i] = 0x12 i++ i = encodeVarintRbac(dAtA, i, uint64(m.Inclusion.Size())) - n2, err := m.Inclusion.MarshalTo(dAtA[i:]) + n7, err := m.Inclusion.MarshalTo(dAtA[i:]) if err != nil { return 0, err } - i += n2 + i += n7 } if m.Exclusion != nil { dAtA[i] = 0x1a i++ i = encodeVarintRbac(dAtA, i, uint64(m.Exclusion.Size())) - n3, err := m.Exclusion.MarshalTo(dAtA[i:]) + n8, err := m.Exclusion.MarshalTo(dAtA[i:]) if err != nil { return 0, err } - i += n3 + i += n8 } if m.EnforcementMode != 0 { dAtA[i] = 0x20 @@ -850,6 +1363,18 @@ func (m *RbacConfig_Target) MarshalTo(dAtA []byte) (int, error) { i += copy(dAtA[i:], s) } } + if len(m.WorkloadSelectors) > 0 { + for _, msg := range m.WorkloadSelectors { + dAtA[i] = 0x1a + i++ + i = encodeVarintRbac(dAtA, i, uint64(msg.Size())) + n, err := msg.MarshalTo(dAtA[i:]) + if err != nil { + return 0, err + } + i += n + } + } return i, nil } @@ -862,6 +1387,36 @@ func encodeVarintRbac(dAtA []byte, offset int, v uint64) int { dAtA[offset] = uint8(v) return offset + 1 } +func (m *WorkloadSelector) Size() (n int) { + var l int + _ = l + if len(m.Labels) > 0 { + for k, v := range m.Labels { + _ = k + _ = v + mapEntrySize := 1 + len(k) + sovRbac(uint64(len(k))) + 1 + len(v) + sovRbac(uint64(len(v))) + n += mapEntrySize + 1 + sovRbac(uint64(mapEntrySize)) + } + } + return n +} + +func (m *AuthorizationPolicy) Size() (n int) { + var l int + _ = l + if m.WorkloadSelector != nil { + l = m.WorkloadSelector.Size() + n += 1 + l + sovRbac(uint64(l)) + } + if len(m.Allow) > 0 { + for _, e := range m.Allow { + l = e.Size() + n += 1 + l + sovRbac(uint64(l)) + } + } + return n +} + func (m *ServiceRole) Size() (n int) { var l int _ = l @@ -901,6 +1456,44 @@ func (m *AccessRule) Size() (n int) { n += 1 + l + sovRbac(uint64(l)) } } + if len(m.Hosts) > 0 { + for _, s := range m.Hosts { + l = len(s) + n 
+= 1 + l + sovRbac(uint64(l)) + } + } + if len(m.NotHosts) > 0 { + for _, s := range m.NotHosts { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } + if len(m.NotPaths) > 0 { + for _, s := range m.NotPaths { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } + if len(m.NotMethods) > 0 { + for _, s := range m.NotMethods { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } + if len(m.Ports) > 0 { + l = 0 + for _, e := range m.Ports { + l += sovRbac(uint64(e)) + } + n += 1 + sovRbac(uint64(l)) + l + } + if len(m.NotPorts) > 0 { + l = 0 + for _, e := range m.NotPorts { + l += sovRbac(uint64(e)) + } + n += 1 + sovRbac(uint64(l)) + l + } return n } @@ -958,6 +1551,54 @@ func (m *Subject) Size() (n int) { n += mapEntrySize + 1 + sovRbac(uint64(mapEntrySize)) } } + if len(m.Principals) > 0 { + for _, s := range m.Principals { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } + if len(m.NotPrincipals) > 0 { + for _, s := range m.NotPrincipals { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } + if len(m.Groups) > 0 { + for _, s := range m.Groups { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } + if len(m.NotGroups) > 0 { + for _, s := range m.NotGroups { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } + if len(m.Namespaces) > 0 { + for _, s := range m.Namespaces { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } + if len(m.NotNamespaces) > 0 { + for _, s := range m.NotNamespaces { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } + if len(m.Ips) > 0 { + for _, s := range m.Ips { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } + if len(m.NotIps) > 0 { + for _, s := range m.NotIps { + l = len(s) + n += 1 + l + sovRbac(uint64(l)) + } + } return n } @@ -1010,6 +1651,12 @@ func (m *RbacConfig_Target) Size() (n int) { n += 1 + l + sovRbac(uint64(l)) } } + if len(m.WorkloadSelectors) > 0 { + for _, e := range m.WorkloadSelectors { + l = e.Size() + n += 1 + l + sovRbac(uint64(l)) + } + } return n } @@ -1026,7 +1673,7 @@ func 
sovRbac(x uint64) (n int) { func sozRbac(x uint64) (n int) { return sovRbac(uint64((x << 1) ^ uint64((int64(x) >> 63)))) } -func (m *ServiceRole) Unmarshal(dAtA []byte) error { +func (m *WorkloadSelector) Unmarshal(dAtA []byte) error { l := len(dAtA) iNdEx := 0 for iNdEx < l { @@ -1049,15 +1696,15 @@ func (m *ServiceRole) Unmarshal(dAtA []byte) error { fieldNum := int32(wire >> 3) wireType := int(wire & 0x7) if wireType == 4 { - return fmt.Errorf("proto: ServiceRole: wiretype end group for non-group") + return fmt.Errorf("proto: WorkloadSelector: wiretype end group for non-group") } if fieldNum <= 0 { - return fmt.Errorf("proto: ServiceRole: illegal tag %d (wire type %d)", fieldNum, wire) + return fmt.Errorf("proto: WorkloadSelector: illegal tag %d (wire type %d)", fieldNum, wire) } switch fieldNum { case 1: if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Rules", wireType) + return fmt.Errorf("proto: wrong wireType = %d for field Labels", wireType) } var msglen int for shift := uint(0); ; shift += 7 { @@ -1081,68 +1728,350 @@ func (m *ServiceRole) Unmarshal(dAtA []byte) error { if postIndex > l { return io.ErrUnexpectedEOF } - m.Rules = append(m.Rules, &AccessRule{}) - if err := m.Rules[len(m.Rules)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex - default: - iNdEx = preIndex - skippy, err := skipRbac(dAtA[iNdEx:]) - if err != nil { - return err - } - if skippy < 0 { - return ErrInvalidLengthRbac - } - if (iNdEx + skippy) > l { - return io.ErrUnexpectedEOF - } - iNdEx += skippy - } - } - - if iNdEx > l { - return io.ErrUnexpectedEOF - } - return nil -} -func (m *AccessRule) Unmarshal(dAtA []byte) error { - l := len(dAtA) - iNdEx := 0 - for iNdEx < l { - preIndex := iNdEx - var wire uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { - return ErrIntOverflowRbac - } - if iNdEx >= l { - return io.ErrUnexpectedEOF - } - b := dAtA[iNdEx] - iNdEx++ - wire |= (uint64(b) & 0x7F) << shift - if 
b < 0x80 { - break - } - } - fieldNum := int32(wire >> 3) - wireType := int(wire & 0x7) - if wireType == 4 { - return fmt.Errorf("proto: AccessRule: wiretype end group for non-group") - } - if fieldNum <= 0 { - return fmt.Errorf("proto: AccessRule: illegal tag %d (wire type %d)", fieldNum, wire) - } - switch fieldNum { - case 1: - if wireType != 2 { - return fmt.Errorf("proto: wrong wireType = %d for field Services", wireType) + if m.Labels == nil { + m.Labels = make(map[string]string) } - var stringLen uint64 - for shift := uint(0); ; shift += 7 { - if shift >= 64 { + var mapkey string + var mapvalue string + for iNdEx < postIndex { + entryPreIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + if fieldNum == 1 { + var stringLenmapkey uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapkey |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapkey := int(stringLenmapkey) + if intStringLenmapkey < 0 { + return ErrInvalidLengthRbac + } + postStringIndexmapkey := iNdEx + intStringLenmapkey + if postStringIndexmapkey > l { + return io.ErrUnexpectedEOF + } + mapkey = string(dAtA[iNdEx:postStringIndexmapkey]) + iNdEx = postStringIndexmapkey + } else if fieldNum == 2 { + var stringLenmapvalue uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLenmapvalue |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLenmapvalue := int(stringLenmapvalue) + if intStringLenmapvalue < 0 { + return ErrInvalidLengthRbac + } + 
postStringIndexmapvalue := iNdEx + intStringLenmapvalue + if postStringIndexmapvalue > l { + return io.ErrUnexpectedEOF + } + mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue]) + iNdEx = postStringIndexmapvalue + } else { + iNdEx = entryPreIndex + skippy, err := skipRbac(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthRbac + } + if (iNdEx + skippy) > postIndex { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + m.Labels[mapkey] = mapvalue + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipRbac(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthRbac + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AuthorizationPolicy) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AuthorizationPolicy: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AuthorizationPolicy: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadSelector", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + msglen + if 
postIndex > l { + return io.ErrUnexpectedEOF + } + if m.WorkloadSelector == nil { + m.WorkloadSelector = &WorkloadSelector{} + } + if err := m.WorkloadSelector.Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 2: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Allow", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + msglen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Allow = append(m.Allow, &ServiceRoleBinding{}) + if err := m.Allow[len(m.Allow)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipRbac(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthRbac + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *ServiceRole) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: ServiceRole: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: ServiceRole: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field 
Rules", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + msglen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Rules = append(m.Rules, &AccessRule{}) + if err := m.Rules[len(m.Rules)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + default: + iNdEx = preIndex + skippy, err := skipRbac(dAtA[iNdEx:]) + if err != nil { + return err + } + if skippy < 0 { + return ErrInvalidLengthRbac + } + if (iNdEx + skippy) > l { + return io.ErrUnexpectedEOF + } + iNdEx += skippy + } + } + + if iNdEx > l { + return io.ErrUnexpectedEOF + } + return nil +} +func (m *AccessRule) Unmarshal(dAtA []byte) error { + l := len(dAtA) + iNdEx := 0 + for iNdEx < l { + preIndex := iNdEx + var wire uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + wire |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + fieldNum := int32(wire >> 3) + wireType := int(wire & 0x7) + if wireType == 4 { + return fmt.Errorf("proto: AccessRule: wiretype end group for non-group") + } + if fieldNum <= 0 { + return fmt.Errorf("proto: AccessRule: illegal tag %d (wire type %d)", fieldNum, wire) + } + switch fieldNum { + case 1: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Services", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { return ErrIntOverflowRbac } if iNdEx >= l { @@ -1242,18 +2171,258 @@ func (m *AccessRule) Unmarshal(dAtA []byte) error { break } } - if msglen < 0 { - return ErrInvalidLengthRbac - } - postIndex := iNdEx + msglen - if postIndex > l { - return 
io.ErrUnexpectedEOF - } - m.Constraints = append(m.Constraints, &AccessRule_Constraint{}) - if err := m.Constraints[len(m.Constraints)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { - return err - } - iNdEx = postIndex + if msglen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + msglen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Constraints = append(m.Constraints, &AccessRule_Constraint{}) + if err := m.Constraints[len(m.Constraints)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Hosts", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Hosts = append(m.Hosts, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NotHosts", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NotHosts = append(m.NotHosts, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NotPaths", wireType) + } + var stringLen uint64 + for shift := 
uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NotPaths = append(m.NotPaths, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NotMethods", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NotMethods = append(m.NotMethods, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 9: + if wireType == 0 { + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= (int32(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + m.Ports = append(m.Ports, v) + } else if wireType == 2 { + var packedLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + packedLen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if packedLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + packedLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + for iNdEx < postIndex { + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + 
return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= (int32(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + m.Ports = append(m.Ports, v) + } + } else { + return fmt.Errorf("proto: wrong wireType = %d for field Ports", wireType) + } + case 10: + if wireType == 0 { + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= (int32(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + m.NotPorts = append(m.NotPorts, v) + } else if wireType == 2 { + var packedLen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + packedLen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if packedLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + packedLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + for iNdEx < postIndex { + var v int32 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + v |= (int32(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + m.NotPorts = append(m.NotPorts, v) + } + } else { + return fmt.Errorf("proto: wrong wireType = %d for field NotPorts", wireType) + } default: iNdEx = preIndex skippy, err := skipRbac(dAtA[iNdEx:]) @@ -1721,6 +2890,238 @@ func (m *Subject) Unmarshal(dAtA []byte) error { } m.Properties[mapkey] = mapvalue iNdEx = postIndex + case 4: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Principals", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << 
shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Principals = append(m.Principals, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 5: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NotPrincipals", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NotPrincipals = append(m.NotPrincipals, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 6: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Groups", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Groups = append(m.Groups, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 7: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NotGroups", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + 
intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NotGroups = append(m.NotGroups, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 8: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Namespaces", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Namespaces = append(m.Namespaces, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 9: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NotNamespaces", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NotNamespaces = append(m.NotNamespaces, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 10: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field Ips", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if 
intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.Ips = append(m.Ips, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex + case 11: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field NotIps", wireType) + } + var stringLen uint64 + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + stringLen |= (uint64(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + intStringLen := int(stringLen) + if intStringLen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + intStringLen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.NotIps = append(m.NotIps, string(dAtA[iNdEx:postIndex])) + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipRbac(dAtA[iNdEx:]) @@ -2091,6 +3492,37 @@ func (m *RbacConfig_Target) Unmarshal(dAtA []byte) error { } m.Namespaces = append(m.Namespaces, string(dAtA[iNdEx:postIndex])) iNdEx = postIndex + case 3: + if wireType != 2 { + return fmt.Errorf("proto: wrong wireType = %d for field WorkloadSelectors", wireType) + } + var msglen int + for shift := uint(0); ; shift += 7 { + if shift >= 64 { + return ErrIntOverflowRbac + } + if iNdEx >= l { + return io.ErrUnexpectedEOF + } + b := dAtA[iNdEx] + iNdEx++ + msglen |= (int(b) & 0x7F) << shift + if b < 0x80 { + break + } + } + if msglen < 0 { + return ErrInvalidLengthRbac + } + postIndex := iNdEx + msglen + if postIndex > l { + return io.ErrUnexpectedEOF + } + m.WorkloadSelectors = append(m.WorkloadSelectors, &WorkloadSelector{}) + if err := m.WorkloadSelectors[len(m.WorkloadSelectors)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil { + return err + } + iNdEx = postIndex default: iNdEx = preIndex skippy, err := skipRbac(dAtA[iNdEx:]) @@ -2220,44 +3652,62 @@ var ( func init() { proto.RegisterFile("rbac/v1alpha1/rbac.proto", 
fileDescriptorRbac) } var fileDescriptorRbac = []byte{ - // 615 bytes of a gzipped FileDescriptorProto - 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x54, 0xdd, 0x6a, 0xdb, 0x4c, - 0x10, 0xcd, 0x5a, 0x8e, 0x1d, 0x8f, 0x3f, 0x12, 0x7d, 0xdb, 0xb4, 0x08, 0x53, 0x5c, 0x63, 0x4a, - 0x31, 0xa1, 0xc8, 0x24, 0xa5, 0x21, 0x14, 0x7a, 0xd1, 0xd8, 0x0a, 0x35, 0x24, 0x56, 0x58, 0xa7, - 0x3f, 0xf4, 0x26, 0xc8, 0xf2, 0xc6, 0xd9, 0x46, 0xde, 0x15, 0xbb, 0x52, 0x68, 0xde, 0xaa, 0x8f, - 0xd0, 0xcb, 0x5e, 0xf6, 0x09, 0x4a, 0xc9, 0x93, 0x14, 0xad, 0x2c, 0xc9, 0x09, 0x6e, 0x42, 0xee, - 0x66, 0xce, 0xcc, 0x19, 0x9f, 0x39, 0x63, 0x2d, 0x58, 0x72, 0xec, 0xf9, 0xdd, 0xcb, 0x6d, 0x2f, - 0x08, 0xcf, 0xbd, 0xed, 0x6e, 0x92, 0xd9, 0xa1, 0x14, 0x91, 0xc0, 0x8f, 0x98, 0x8a, 0x98, 0xb0, - 0x35, 0x92, 0xd5, 0xdb, 0x7d, 0xa8, 0x8f, 0xa8, 0xbc, 0x64, 0x3e, 0x25, 0x22, 0xa0, 0xf8, 0x35, - 0xac, 0xca, 0x38, 0xa0, 0xca, 0x42, 0x2d, 0xa3, 0x53, 0xdf, 0x79, 0x66, 0x2f, 0xe1, 0xd8, 0xef, - 0x7c, 0x9f, 0x2a, 0x45, 0xe2, 0x80, 0x92, 0xb4, 0xbb, 0xfd, 0x1b, 0x01, 0x14, 0x28, 0x6e, 0xc0, - 0x9a, 0x4a, 0x87, 0xa6, 0x83, 0x6a, 0x24, 0xcf, 0xf1, 0x26, 0xac, 0x86, 0x5e, 0x74, 0xae, 0xac, - 0x92, 0x2e, 0xa4, 0x09, 0xb6, 0xa0, 0x3a, 0xa3, 0xd1, 0xb9, 0x98, 0x28, 0xcb, 0xd0, 0x78, 0x96, - 0xe2, 0x43, 0xa8, 0xfb, 0x82, 0xab, 0x48, 0x7a, 0x8c, 0x47, 0xca, 0x2a, 0x6b, 0x5d, 0x5b, 0xf7, - 0xe8, 0xb2, 0x7b, 0x39, 0x85, 0x2c, 0xd2, 0x1b, 0xbb, 0x00, 0x45, 0x09, 0x9b, 0x60, 0x5c, 0xd0, - 0x2b, 0x0b, 0xb5, 0x50, 0xa7, 0x46, 0x92, 0x10, 0x3f, 0x81, 0xca, 0xa5, 0x17, 0xc4, 0x34, 0x93, - 0x37, 0xcf, 0xda, 0x3f, 0x10, 0xe0, 0x05, 0x9f, 0xf6, 0x19, 0x9f, 0x30, 0x3e, 0xc5, 0x7b, 0xb0, - 0xa6, 0xe2, 0xf1, 0x57, 0xea, 0x47, 0x99, 0x63, 0x4f, 0x97, 0x2a, 0x1b, 0xa5, 0x4d, 0x24, 0xef, - 0xc6, 0xbb, 0x50, 0x95, 0x22, 0xa0, 0x84, 0x9e, 0x59, 0xa5, 0x16, 0xfa, 0x27, 0x91, 0xa4, 0x3d, - 0x24, 0x6b, 0xc6, 0x7b, 0x50, 0x9e, 0x89, 0x09, 0xb5, 0x8c, 0x16, 0xea, 0xac, 0xef, 0x3c, 0x5f, - 0x4a, 0x72, 0xf8, 0x99, 0x90, 
0x3e, 0x9d, 0x51, 0x1e, 0x1d, 0x89, 0x09, 0x25, 0x9a, 0x91, 0xac, - 0x50, 0x9d, 0xeb, 0xc0, 0x18, 0xca, 0xb1, 0xa2, 0x72, 0xbe, 0xb9, 0x8e, 0x93, 0xc3, 0x4c, 0xa5, - 0x88, 0x43, 0xad, 0xa7, 0x46, 0xd2, 0x04, 0x1f, 0x02, 0x84, 0x52, 0x84, 0x54, 0x46, 0x8c, 0xa6, - 0xb7, 0xa9, 0xef, 0xbc, 0xbc, 0x6b, 0x47, 0xfb, 0x38, 0x6f, 0x77, 0x78, 0x24, 0xaf, 0xc8, 0x02, - 0xbf, 0xf1, 0x16, 0x36, 0x6e, 0x95, 0x97, 0xdc, 0x60, 0x13, 0x56, 0xb5, 0xeb, 0x99, 0x10, 0x9d, - 0xbc, 0x29, 0xed, 0xa1, 0xf6, 0x36, 0x54, 0xe7, 0x86, 0x24, 0x1b, 0x5c, 0x30, 0x3e, 0xc9, 0x36, - 0x48, 0xe2, 0x04, 0xe3, 0xde, 0x2c, 0xe3, 0xe9, 0xb8, 0xfd, 0xdd, 0x00, 0x20, 0x63, 0xcf, 0xef, - 0x09, 0x7e, 0xc6, 0xa6, 0xb9, 0x7d, 0xe8, 0x0e, 0xfb, 0x8a, 0x76, 0xbb, 0xb0, 0x0f, 0xf7, 0xa1, - 0xc6, 0xb8, 0x1f, 0xc4, 0x8a, 0x09, 0x3e, 0x3f, 0xd9, 0x8b, 0xfb, 0xe8, 0x27, 0x9e, 0x9c, 0xd2, - 0x88, 0x14, 0xc4, 0x64, 0x0a, 0xfd, 0x96, 0x4d, 0x31, 0x1e, 0x36, 0x25, 0x27, 0x62, 0x17, 0x4c, - 0x5a, 0xdc, 0xf8, 0x54, 0x6f, 0x54, 0x7e, 0xc0, 0x1f, 0x62, 0x83, 0xde, 0x04, 0x1a, 0x7d, 0xa8, - 0xa4, 0xbf, 0x72, 0xe7, 0xa7, 0xdb, 0x04, 0x48, 0x3c, 0x55, 0xa1, 0xe7, 0xe7, 0x1f, 0xc8, 0x02, - 0xd2, 0x76, 0xa0, 0x9c, 0x4c, 0xc3, 0x55, 0x30, 0xdc, 0x83, 0x03, 0x73, 0x05, 0x57, 0xa0, 0xe4, - 0x0e, 0x4d, 0x84, 0x1f, 0xc3, 0xff, 0xee, 0xf0, 0xf4, 0xd3, 0xe0, 0xe4, 0xfd, 0xe9, 0x60, 0xd8, - 0x3b, 0xfc, 0x30, 0x1a, 0xb8, 0x43, 0xb3, 0xb4, 0x08, 0x3b, 0x9f, 0x33, 0xd8, 0xd8, 0xea, 0xc2, - 0xc6, 0x2d, 0xc1, 0xf8, 0x3f, 0x58, 0x73, 0x86, 0x07, 0x2e, 0xe9, 0x39, 0x7d, 0x73, 0x05, 0xaf, - 0x03, 0x1c, 0x3b, 0xe4, 0x68, 0x30, 0x1a, 0x0d, 0x3e, 0x3a, 0x26, 0xda, 0xef, 0xfc, 0xbc, 0x6e, - 0xa2, 0x5f, 0xd7, 0x4d, 0xf4, 0xe7, 0xba, 0x89, 0xbe, 0x34, 0x52, 0x07, 0x98, 0xe8, 0x7a, 0x21, - 0xeb, 0xde, 0x78, 0x0d, 0xc7, 0x15, 0xfd, 0x12, 0xbe, 0xfa, 0x1b, 0x00, 0x00, 0xff, 0xff, 0x00, - 0xdb, 0x22, 0xbc, 0x25, 0x05, 0x00, 0x00, + // 909 bytes of a gzipped FileDescriptorProto + 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x56, 
0xdd, 0x6e, 0x23, 0x35, + 0x14, 0xde, 0xc9, 0xe4, 0xa7, 0x39, 0x81, 0x76, 0xea, 0x85, 0x65, 0x14, 0x20, 0x5b, 0x45, 0x2c, + 0x44, 0x2b, 0x94, 0xa8, 0x45, 0xac, 0x0a, 0xd2, 0x5e, 0x6c, 0xdb, 0x94, 0x8d, 0xd4, 0x26, 0x95, + 0x53, 0x58, 0xc4, 0x4d, 0x34, 0x99, 0xb8, 0x8d, 0xe9, 0xd4, 0x1e, 0xd9, 0x4e, 0x4b, 0xb9, 0xe4, + 0x86, 0x57, 0x40, 0x3c, 0x01, 0x0f, 0xc0, 0x03, 0x70, 0xc9, 0x25, 0x8f, 0x80, 0xfa, 0x24, 0xc8, + 0xf6, 0xfc, 0xa4, 0x69, 0xe8, 0xb6, 0x77, 0x3e, 0xdf, 0x39, 0xdf, 0x39, 0x9f, 0xed, 0xcf, 0x99, + 0x80, 0x2f, 0xc6, 0x41, 0xd8, 0xb9, 0xd8, 0x0c, 0xa2, 0x78, 0x1a, 0x6c, 0x76, 0x74, 0xd4, 0x8e, + 0x05, 0x57, 0x1c, 0x3d, 0xa6, 0x52, 0x51, 0xde, 0x36, 0x48, 0x9a, 0x6f, 0xfe, 0xe6, 0x80, 0xf7, + 0x86, 0x8b, 0xb3, 0x88, 0x07, 0x93, 0x21, 0x89, 0x48, 0xa8, 0xb8, 0x40, 0x3d, 0x28, 0x47, 0xc1, + 0x98, 0x44, 0xd2, 0x77, 0x36, 0xdc, 0x56, 0x6d, 0x6b, 0xb3, 0xbd, 0x84, 0xda, 0x5e, 0xa4, 0xb5, + 0x0f, 0x0c, 0xa7, 0xcb, 0x94, 0xb8, 0xc2, 0x49, 0x83, 0xfa, 0x57, 0x50, 0x9b, 0x83, 0x91, 0x07, + 0xee, 0x19, 0xb9, 0xf2, 0x9d, 0x0d, 0xa7, 0x55, 0xc5, 0x7a, 0x89, 0xde, 0x83, 0xd2, 0x45, 0x10, + 0xcd, 0x88, 0x5f, 0x30, 0x98, 0x0d, 0xbe, 0x2e, 0x6c, 0x3b, 0xcd, 0x3f, 0x1c, 0x78, 0xfc, 0x6a, + 0xa6, 0xa6, 0x5c, 0xd0, 0x9f, 0x03, 0x45, 0x39, 0x3b, 0xe2, 0x11, 0x0d, 0xaf, 0x10, 0x86, 0xf5, + 0xcb, 0x64, 0xf4, 0x48, 0x26, 0xb3, 0x4d, 0xc7, 0xda, 0xd6, 0xb3, 0x7b, 0x09, 0xc5, 0xde, 0xe5, + 0xe2, 0x8e, 0x5f, 0x42, 0x29, 0x88, 0x22, 0x7e, 0xe9, 0x17, 0xcc, 0x86, 0x3f, 0x5b, 0xda, 0x67, + 0x48, 0xc4, 0x05, 0x0d, 0x09, 0xe6, 0x11, 0xd9, 0xa1, 0x6c, 0x42, 0xd9, 0x29, 0xb6, 0xac, 0xe6, + 0x1e, 0xd4, 0xe6, 0x92, 0xe8, 0x4b, 0x28, 0x89, 0x59, 0x44, 0xd2, 0xe3, 0x7b, 0xba, 0xb4, 0xdb, + 0xab, 0x30, 0x24, 0x52, 0xe2, 0x59, 0x44, 0xb0, 0xad, 0x6e, 0xfe, 0xe2, 0x02, 0xe4, 0x28, 0xaa, + 0xc3, 0x8a, 0xb4, 0x4d, 0x6d, 0xa3, 0x2a, 0xce, 0x62, 0x7d, 0x6a, 0x71, 0xa0, 0xa6, 0xd2, 0xe8, + 0xad, 0x62, 0x1b, 0x20, 0x1f, 0x2a, 0xe7, 0x44, 0x4d, 0xf9, 0x44, 0xfa, 0xae, 0xc1, 0xd3, 0x10, + 0x1d, 0x40, 
0x2d, 0xe4, 0x4c, 0x2a, 0x11, 0x50, 0xa6, 0xa4, 0x5f, 0x34, 0xba, 0x9e, 0xbf, 0x45, + 0x57, 0x7b, 0x37, 0xa3, 0xe0, 0x79, 0xba, 0x9e, 0x3e, 0xe5, 0x52, 0x49, 0xbf, 0x64, 0xa7, 0x9b, + 0x00, 0x7d, 0x08, 0x55, 0xc6, 0xd5, 0xc8, 0x66, 0xca, 0x56, 0x30, 0xe3, 0xea, 0xf5, 0x7c, 0xd2, + 0x8a, 0xae, 0x64, 0xc9, 0x23, 0xa3, 0xfb, 0x29, 0xd4, 0x74, 0x32, 0xd5, 0xbe, 0x62, 0xd2, 0xc0, + 0xb8, 0x3a, 0x4c, 0xe4, 0xeb, 0xed, 0x72, 0xa1, 0xa4, 0x5f, 0xdd, 0x70, 0x5b, 0x25, 0x6c, 0x83, + 0xac, 0xa7, 0xc9, 0x80, 0xc9, 0x98, 0x9e, 0x3a, 0xae, 0xbf, 0x00, 0xc8, 0xe5, 0x2f, 0xf1, 0xdd, + 0x13, 0x28, 0x1b, 0xab, 0xa5, 0x47, 0x98, 0x44, 0xcd, 0xbf, 0x1c, 0x40, 0xb7, 0x2f, 0x1a, 0x6d, + 0xc3, 0x8a, 0x9c, 0x8d, 0x7f, 0x24, 0xa1, 0x4a, 0x6f, 0xf5, 0xa3, 0xe5, 0x1e, 0xb1, 0x45, 0x38, + 0xab, 0x46, 0x2f, 0xa0, 0x22, 0x78, 0x44, 0x30, 0x39, 0x31, 0x16, 0xff, 0x3f, 0x22, 0xb6, 0x35, + 0x38, 0x2d, 0x46, 0xdb, 0x50, 0x3c, 0xe7, 0x13, 0xe2, 0xbb, 0x1b, 0x4e, 0x6b, 0x75, 0xeb, 0x93, + 0xa5, 0xa4, 0x2e, 0x3b, 0xe1, 0x22, 0x24, 0xe7, 0x84, 0xa9, 0x43, 0x3e, 0x21, 0xd8, 0x30, 0x9a, + 0x7f, 0xba, 0x50, 0x49, 0x74, 0x20, 0x04, 0xc5, 0x99, 0x24, 0x22, 0xd9, 0xb9, 0x59, 0x23, 0x1f, + 0x4a, 0xa7, 0x82, 0xcf, 0x62, 0xfb, 0xe4, 0x76, 0x0a, 0xbe, 0x83, 0x2d, 0x80, 0x0e, 0x00, 0x62, + 0xc1, 0x63, 0x22, 0x14, 0x25, 0xd6, 0x43, 0xb5, 0xad, 0xcf, 0xef, 0xda, 0x67, 0xfb, 0x28, 0x2b, + 0xb7, 0xef, 0x7e, 0x8e, 0x8f, 0x1a, 0xba, 0x1b, 0x65, 0x21, 0x8d, 0x83, 0xc8, 0x7a, 0xae, 0x8a, + 0xe7, 0x10, 0xf4, 0x0c, 0x56, 0xcd, 0xfd, 0xe5, 0x35, 0xd6, 0x4f, 0xef, 0xea, 0x4b, 0xcc, 0xcb, + 0x9e, 0x40, 0xd9, 0xa8, 0x4b, 0x4d, 0x95, 0x44, 0xe8, 0x63, 0xd0, 0x16, 0x19, 0x25, 0x39, 0xeb, + 0x29, 0x6d, 0x88, 0x6f, 0x6c, 0xba, 0x01, 0xc0, 0x82, 0x73, 0x22, 0xe3, 0x40, 0x3f, 0xa0, 0xd4, + 0x53, 0x19, 0x92, 0x4e, 0x9f, 0xab, 0xa9, 0x66, 0xd3, 0xfb, 0x79, 0x99, 0x07, 0x2e, 0x8d, 0xad, + 0xbd, 0xaa, 0x58, 0x2f, 0xd1, 0x07, 0x50, 0xd1, 0x44, 0x8d, 0xd6, 0xac, 0x20, 0xc6, 0x55, 0x2f, + 0x96, 0xf5, 0x97, 0xb0, 0xb6, 0x70, 0x1c, 0x0f, 
0xfa, 0xbd, 0xdb, 0x84, 0x4a, 0x62, 0x02, 0x7d, + 0x6b, 0x67, 0x94, 0x4d, 0xd2, 0x5b, 0xd3, 0x6b, 0x8d, 0x69, 0xad, 0x09, 0xcf, 0xac, 0x9b, 0xbf, + 0x16, 0x01, 0xf0, 0x38, 0x08, 0x77, 0x39, 0x3b, 0xa1, 0xa7, 0x99, 0x65, 0x9c, 0x3b, 0x2c, 0x93, + 0x97, 0xb7, 0x73, 0xcb, 0xa0, 0x3d, 0xa8, 0x52, 0x16, 0x46, 0x33, 0x49, 0x39, 0x4b, 0x6c, 0xfa, + 0xe9, 0xdb, 0xe8, 0xc7, 0x81, 0x38, 0x25, 0x0a, 0xe7, 0x44, 0xdd, 0x85, 0xfc, 0x94, 0x76, 0x71, + 0x1f, 0xd6, 0x25, 0x23, 0xa2, 0x01, 0x78, 0x24, 0xf7, 0xf5, 0xc8, 0xec, 0xa8, 0xf8, 0x80, 0x47, + 0xb0, 0x46, 0x6e, 0x02, 0xf5, 0xdf, 0x1d, 0x28, 0xdb, 0x31, 0x77, 0xfe, 0xa6, 0xde, 0x34, 0x4c, + 0xe1, 0x96, 0x61, 0x8e, 0x01, 0xdd, 0xfa, 0xee, 0xa4, 0x8f, 0xe4, 0x9e, 0x1f, 0x9e, 0xf5, 0xc5, + 0x0f, 0x8f, 0x6c, 0x76, 0xa1, 0xa8, 0x45, 0xa2, 0x0a, 0xb8, 0x83, 0xfd, 0x7d, 0xef, 0x11, 0x2a, + 0x43, 0x61, 0xd0, 0xf7, 0x1c, 0xf4, 0x3e, 0xac, 0x0f, 0xfa, 0xa3, 0x37, 0xbd, 0xe3, 0xd7, 0xa3, + 0x5e, 0x7f, 0xf7, 0xe0, 0xdb, 0x61, 0x6f, 0xd0, 0xf7, 0x0a, 0xf3, 0x70, 0xf7, 0xfb, 0x14, 0x76, + 0x9f, 0x77, 0x60, 0x6d, 0xe1, 0x1c, 0xd0, 0x3b, 0xb0, 0xd2, 0xed, 0xef, 0x0f, 0xf0, 0x6e, 0x77, + 0xcf, 0x7b, 0x84, 0x56, 0x01, 0x8e, 0xba, 0xf8, 0xb0, 0x37, 0x1c, 0xf6, 0xbe, 0xeb, 0x7a, 0xce, + 0x4e, 0xeb, 0xef, 0xeb, 0x86, 0xf3, 0xcf, 0x75, 0xc3, 0xf9, 0xf7, 0xba, 0xe1, 0xfc, 0x50, 0xb7, + 0xf2, 0x29, 0xef, 0x04, 0x31, 0xed, 0xdc, 0xf8, 0x0b, 0x31, 0x2e, 0x9b, 0xbf, 0x0f, 0x5f, 0xfc, + 0x17, 0x00, 0x00, 0xff, 0xff, 0x6e, 0x5d, 0x8c, 0xf1, 0x5a, 0x08, 0x00, 0x00, } diff --git a/vendor/istio.io/api/rbac/v1alpha1/rbac.proto b/vendor/istio.io/api/rbac/v1alpha1/rbac.proto index 1b138bda481f..69c080cd5df6 100644 --- a/vendor/istio.io/api/rbac/v1alpha1/rbac.proto +++ b/vendor/istio.io/api/rbac/v1alpha1/rbac.proto @@ -1,4 +1,4 @@ -// Copyright 2018 Istio Authors +// Copyright 2019 Istio Authors // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
@@ -84,9 +84,42 @@ package istio.rbac.v1alpha1; option go_package="istio.io/api/rbac/v1alpha1"; +// $hide_from_docs +// This is forked from the networking/v1alpha3/sidecar.proto to avoid a direct +// dependency from the rbac API on the networking API. +// TODO: Move the WorkloadSelector to a common place to be shared by other packages. +// WorkloadSelector specifies the criteria used to determine if the Gateway +// or Sidecar resource can be applied to a proxy. The matching criteria +// includes the metadata associated with a proxy, workload info such as +// labels attached to the pod/VM, or any other info that the proxy provides +// to Istio during the initial handshake. If multiple conditions are +// specified, all conditions need to match in order for the workload to be +// selected. Currently, only a label-based selection mechanism is supported. +message WorkloadSelector { + // One or more labels that indicate a specific set of pods/VMs on which + // this sidecar configuration should be applied. The scope of the label + // search is restricted to the configuration namespace in which the + // resource is present. + map<string, string> labels = 1; +} + +// $hide_from_docs +// AuthorizationPolicy to enforce access control on a selected workload. +message AuthorizationPolicy { + // $hide_from_docs + // Optional. One or more labels that indicate a specific set of pods/VMs + // on which this authorization policy should be applied. Leave this empty to + // select all pods/VMs. + // The scope of the label search is platform dependent. On Kubernetes, for example, + // the scope includes pods running in the same namespace as the authorization policy itself. + WorkloadSelector workload_selector = 1; + + // $hide_from_docs + // A list of bindings that specify the subjects and permissions for the selected workload. + repeated ServiceRoleBinding allow = 2; +} + // ServiceRole specification contains a list of access rules (permissions). -// This represent the "Spec" part of the ServiceRole object.
The name and namespace -of the ServiceRole is specified in "metadata" section of the ServiceRole object. message ServiceRole { // Required. The set of access rules (permissions) that the role has. repeated AccessRule rules = 1; @@ -102,37 +135,69 @@ message AccessRule { // If set to ["*"], it refers to all services in the namespace. repeated string services = 1; + // $hide_from_docs + // Optional. A list of HTTP hosts. This is matched against the HOST header in + // an HTTP request. Exact match, prefix match, and suffix match are supported. + // For example, the host "test.abc.com" matches "test.abc.com" (exact match), + // or "*.abc.com" (suffix match), or "test.abc.*" (prefix match). + // If not specified, it matches any host. + repeated string hosts = 5; + + // $hide_from_docs + // Optional. A list of HTTP hosts that must not be matched. + repeated string not_hosts = 6; + // Optional. A list of HTTP paths or gRPC methods. // gRPC methods must be presented as fully-qualified name in the form of // "/packageName.serviceName/methodName" and are case sensitive. - // Exact match, prefix match, and suffix match are supported for paths. - // For example, the path "/books/review" matches - // "/books/review" (exact match), or "/books/*" (prefix match), - // or "*/review" (suffix match). - // If not specified, it applies to any path. + // Exact match, prefix match, and suffix match are supported. For example, + // the path "/books/review" matches "/books/review" (exact match), + // or "/books/*" (prefix match), or "*/review" (suffix match). + // If not specified, it matches any path. repeated string paths = 2; + // $hide_from_docs + // Optional. A list of HTTP paths or gRPC methods that must not be matched. + repeated string not_paths = 7; + // Optional. A list of HTTP methods (e.g., "GET", "POST"). // It is ignored in gRPC case because the value is always "POST". - // If set to ["*"] or not specified, it applies to any method.
+ // If not specified, it matches to any methods. repeated string methods = 3; + // $hide_from_docs + // Optional. A list of HTTP methods that must not be matched. + // Note: It's an error to set methods and not_methods at the same time. + repeated string not_methods = 8; + + // $hide_from_docs + // Optional. A list of port numbers of the request. If not specified, it matches + // to any port number. + // Note: It's an error to set ports and not_ports at the same time. + repeated int32 ports = 9; + + // $hide_from_docs + // Optional. A list of port numbers that must not be matched. + // Note: It's an error to set ports and not_ports at the same time. + repeated int32 not_ports = 10; + // Definition of a custom constraint. The supported keys are listed in the "constraint and properties" page. message Constraint { // Key of the constraint. string key = 1; // List of valid values for the constraint. - // Exact match, prefix match, and suffix match are supported for constraint values. - // For example, the value "v1alpha2" matches - // "v1alpha2" (exact match), or "v1*" (prefix match), - // or "*alpha2" (suffix match). + // Exact match, prefix match, and suffix match are supported. + // For example, the value "v1alpha2" matches "v1alpha2" (exact match), + // or "v1*" (prefix match), or "*alpha2" (suffix match). repeated string values = 2; } // Optional. Extra constraints in the ServiceRole specification. - // The above ServiceRole example shows an example of constraint "version". repeated Constraint constraints = 4; + + // $hide_from_docs + // Next available field number: 11 } // $hide_from_docs @@ -148,12 +213,9 @@ enum EnforcementMode { // Policy in PERMISSIVE mode isn't enforced and has no impact on users. // RBAC engine run policies in PERMISSIVE mode and logs stats. PERMISSIVE = 1; - } +} // ServiceRoleBinding assigns a ServiceRole to a list of subjects. -// This represents the "Spec" part of the ServiceRoleBinding object. 
The name and namespace -// of the ServiceRoleBinding is specified in "metadata" section of the ServiceRoleBinding -// object. message ServiceRoleBinding { // Required. List of subjects that are assigned the ServiceRole object. repeated Subject subjects = 1; @@ -172,13 +234,52 @@ message Subject { // Optional. The user name/ID that the subject represents. string user = 1; + // $hide_from_docs + // Optional. A list of principals that the subject represents. This is matched to the + // `source.principal` attribute. If not specified, it applies to any principals. + repeated string principals = 4; + + // $hide_from_docs + // Optional. A list of principals that must not be matched. + repeated string not_principals = 5; + // $hide_from_docs // Optional. The group that the subject belongs to. - string group = 2; + // Deprecated. Use groups and not_groups instead. + string group = 2 [deprecated = true]; + + // $hide_from_docs + // Optional. A list of groups that the subject represents. This is matched to the + // `request.auth.claims[groups]` attribute. If not specified, it applies to any groups. + repeated string groups = 6; + + // $hide_from_docs + // Optional. A list of groups that must not be matched. + repeated string not_groups = 7; + + // $hide_from_docs + // Optional. A list of namespaces that the subject represents. This is matched to + // the `source.namespace` attribute. If not specified, it applies to any namespaces. + repeated string namespaces = 8; + + // $hide_from_docs + // Optional. A list of namespaces that must not be matched. + repeated string not_namespaces = 9; + + // $hide_from_docs + // Optional. A list of IP address or CIDR ranges that the subject represents. + // E.g. 192.168.100.2 or 10.1.0.0/16. If not specified, it applies to any IP addresses. + repeated string ips = 10; + + // $hide_from_docs + // Optional. A list of IP addresses or CIDR ranges that must not be matched. + repeated string not_ips = 11; // Optional. 
The set of properties that identify the subject. - // The above ServiceRoleBinding example shows an example of property "source.namespace". map properties = 3; + + // $hide_from_docs + // Next available field number: 12 } // RoleRef refers to a role object. @@ -188,8 +289,7 @@ message RoleRef { string kind = 1; // Required. The name of the ServiceRole object being referenced. - // The ServiceRole object must be in the same namespace as the ServiceRoleBinding - // object. + // The ServiceRole object must be in the same namespace as the ServiceRoleBinding object. string name = 2; } @@ -236,6 +336,10 @@ message RbacConfig { // A list of services. repeated string services = 1; + // $hide_from_docs + // A list of workloads. + repeated WorkloadSelector workload_selectors = 3; + // A list of namespaces. repeated string namespaces = 2; }
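A ServiceRole built from the AccessRule fields in this patch might look like the sketch below. This is an illustrative example, not part of the patch: the role name, namespace, service, and constraint key are made up, and the `hosts`/`not_paths` fields shown here are still `$hide_from_docs` in this revision.

```yaml
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: products-viewer        # hypothetical role name
  namespace: default
spec:
  rules:
  - services: ["products.default.svc.cluster.local"]
    methods: ["GET", "HEAD"]
    paths: ["/reviews/*"]      # prefix match, per the AccessRule comments
    constraints:
    - key: "destination.labels[version]"
      values: ["v1*"]          # prefix match on constraint values
```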
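To show how `Subject.properties` and `RoleRef` from the diff fit together, here is a hedged ServiceRoleBinding sketch; the binding name, namespace, and service account are invented for illustration and assume a role named `products-viewer` exists in the same namespace.

```yaml
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bind-products-viewer   # hypothetical binding name
  namespace: default
spec:
  subjects:
  - user: "cluster.local/ns/default/sa/bookinfo-productpage"
    properties:
      source.namespace: "default"   # Subject.properties key/value
  roleRef:
    kind: ServiceRole              # RoleRef.kind
    name: products-viewer          # must be in the same namespace as the binding
```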