
Fix cluster IP allocator metrics #110027

Merged
merged 1 commit into kubernetes:master from fix-ipallocator-metrics on May 25, 2022

Conversation

@tksm (Contributor) commented May 13, 2022

What type of PR is this?

/kind bug

What this PR does / why we need it:

Fix a bug where the metrics for the cluster IP allocator are incorrectly reported.

Which issue(s) this PR fixes:

Fixes #109994

Special notes for your reviewer:

It seems the repair loop unintentionally overwrote the metrics. I disabled metrics recording by default and enabled it only in LegacyRESTStorage.
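Conceptually, the change looks like the following sketch (simplified, not the exact code: the recorder method name setAllocated is made up for illustration, while metricsRecorderInterface, emptyMetricsRecorder, metricsRecorder, and EnableMetrics are the names that appear in the diffs discussed below).

// Minimal sketch of the approach; simplified and partly illustrative, see the
// note above. Not the exact kube-apiserver code.
package ipallocator

// metricsRecorderInterface lets an allocator record metrics without knowing
// whether recording is actually enabled.
type metricsRecorderInterface interface {
	setAllocated(cidr string, allocated int)
}

// emptyMetricsRecorder is a no-op; allocators use it by default, so the extra
// allocators created by the repair loop never touch the exported metrics.
type emptyMetricsRecorder struct{}

func (emptyMetricsRecorder) setAllocated(string, int) {}

// metricsRecorder updates the real (registered) collectors.
type metricsRecorder struct{}

func (metricsRecorder) setAllocated(cidr string, allocated int) {
	// e.g. update the allocated_ips gauge for this cidr
}

// Range is sketched here only to show where the recorder hangs off the
// allocator; the real struct has many more fields.
type Range struct {
	metrics metricsRecorderInterface // &emptyMetricsRecorder{} by default
}

// EnableMetrics switches to the real recorder; only the allocator built by
// LegacyRESTStorage calls this, so only that instance reports metrics.
func (r *Range) EnableMetrics() {
	r.metrics = &metricsRecorder{}
}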

I verified on kind that the metrics no longer change after the repair interval (3m).

I verified with the following script after creating two dual-stack services.
echo "# $(date)"
kubectl get --raw /metrics | egrep "^kube_apiserver_clusterip_allocator" | tee prev.txt
for i in $(seq 1 5); do
  sleep 60
  echo "# $(date)"
  kubectl get --raw /metrics | egrep "^kube_apiserver_clusterip_allocator" > current.txt
  diff prev.txt current.txt
  cp current.txt prev.txt
done

After fix

After this fix, the metrics no longer changed within 5 minutes.

# Fri May 13 09:17:21 JST 2022
kube_apiserver_clusterip_allocator_allocated_ips{cidr="10.96.0.0/20"} 4
kube_apiserver_clusterip_allocator_allocated_ips{cidr="fd03::/112"} 2
kube_apiserver_clusterip_allocator_allocation_total{cidr="10.96.0.0/20",scope="dynamic"} 2
kube_apiserver_clusterip_allocator_allocation_total{cidr="10.96.0.0/20",scope="static"} 2
kube_apiserver_clusterip_allocator_allocation_total{cidr="fd03::/112",scope="dynamic"} 2
kube_apiserver_clusterip_allocator_available_ips{cidr="10.96.0.0/20"} 4090
kube_apiserver_clusterip_allocator_available_ips{cidr="fd03::/112"} 65533
# Fri May 13 09:18:22 JST 2022
# Fri May 13 09:19:22 JST 2022
# Fri May 13 09:20:22 JST 2022
# Fri May 13 09:21:22 JST 2022
# Fri May 13 09:22:22 JST 2022

Before fix

Before this fix, the metrics were incorrectly changed after a while (0 to 3 minutes).

# Fri May 13 08:55:40 JST 2022
kube_apiserver_clusterip_allocator_allocated_ips{cidr="10.96.0.0/20"} 4
kube_apiserver_clusterip_allocator_allocated_ips{cidr="fd03::/112"} 2
kube_apiserver_clusterip_allocator_allocation_total{cidr="10.96.0.0/20",scope="dynamic"} 2
kube_apiserver_clusterip_allocator_allocation_total{cidr="10.96.0.0/20",scope="static"} 2
kube_apiserver_clusterip_allocator_allocation_total{cidr="fd03::/112",scope="dynamic"} 2
kube_apiserver_clusterip_allocator_available_ips{cidr="10.96.0.0/20"} 4090
kube_apiserver_clusterip_allocator_available_ips{cidr="fd03::/112"} 65533
# Fri May 13 08:56:40 JST 2022
1,2c1,2
< kube_apiserver_clusterip_allocator_allocated_ips{cidr="10.96.0.0/20"} 4
< kube_apiserver_clusterip_allocator_allocated_ips{cidr="fd03::/112"} 2
---
> kube_apiserver_clusterip_allocator_allocated_ips{cidr="10.96.0.0/20"} 0
> kube_apiserver_clusterip_allocator_allocated_ips{cidr="fd03::/112"} 0
4c4
< kube_apiserver_clusterip_allocator_allocation_total{cidr="10.96.0.0/20",scope="static"} 2
---
> kube_apiserver_clusterip_allocator_allocation_total{cidr="10.96.0.0/20",scope="static"} 6
6,7c6,8
< kube_apiserver_clusterip_allocator_available_ips{cidr="10.96.0.0/20"} 4090
< kube_apiserver_clusterip_allocator_available_ips{cidr="fd03::/112"} 65533
---
> kube_apiserver_clusterip_allocator_allocation_total{cidr="fd03::/112",scope="static"} 2
> kube_apiserver_clusterip_allocator_available_ips{cidr="10.96.0.0/20"} 4094
> kube_apiserver_clusterip_allocator_available_ips{cidr="fd03::/112"} 65535
# Fri May 13 08:57:40 JST 2022
# Fri May 13 08:58:40 JST 2022
# Fri May 13 08:59:40 JST 2022
4c4
< kube_apiserver_clusterip_allocator_allocation_total{cidr="10.96.0.0/20",scope="static"} 6
---
> kube_apiserver_clusterip_allocator_allocation_total{cidr="10.96.0.0/20",scope="static"} 10
6c6
< kube_apiserver_clusterip_allocator_allocation_total{cidr="fd03::/112",scope="static"} 2
---
> kube_apiserver_clusterip_allocator_allocation_total{cidr="fd03::/112",scope="static"} 4
# Fri May 13 09:00:40 JST 2022

Does this PR introduce a user-facing change?

Fix a bug where the metrics for the cluster IP allocator are incorrectly reported.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels May 13, 2022
@k8s-ci-robot (Contributor)

Hi @tksm. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-priority Indicates a PR lacks a `priority/foo` label and requires one. label May 13, 2022
@k8s-ci-robot k8s-ci-robot added sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels May 13, 2022
@tksm (Contributor, Author) commented May 13, 2022

/assign @aojea
as the author of the original code

@@ -213,6 +213,7 @@ func (c LegacyRESTStorageProvider) NewLegacyRESTStorage(apiResourceConfigSource
	if err != nil {
		return LegacyRESTStorage{}, genericapiserver.APIGroupInfo{}, fmt.Errorf("cannot create cluster IP allocator: %v", err)
	}
	serviceClusterIPAllocator.EnableMetrics()
	restStorage.ServiceClusterIPAllocator = serviceClusterIPRegistry
Member:

I have to remember how this ends up in the repair loops:

repairClusterIPs := servicecontroller.NewRepair(c.ServiceClusterIPInterval, c.ServiceClient, c.EventClient, &c.ServiceClusterIPRange, c.ServiceClusterIPRegistry, &c.SecondaryServiceClusterIPRange, c.SecondaryServiceClusterIPRegistry)

Member:

Tracing the call path:

func (m *Instance) InstallLegacyAPI(c *completedConfig, restOptionsGetter generic.RESTOptionsGetter) error {
	legacyRESTStorageProvider := corerest.LegacyRESTStorageProvider{
		StorageFactory:              c.ExtraConfig.StorageFactory,
		ProxyTransport:              c.ExtraConfig.ProxyTransport,
		KubeletClientConfig:         c.ExtraConfig.KubeletClientConfig,
		EventTTL:                    c.ExtraConfig.EventTTL,
		ServiceIPRange:              c.ExtraConfig.ServiceIPRange,
		SecondaryServiceIPRange:     c.ExtraConfig.SecondaryServiceIPRange,
		ServiceNodePortRange:        c.ExtraConfig.ServiceNodePortRange,
		LoopbackClientConfig:        c.GenericConfig.LoopbackClientConfig,
		ServiceAccountIssuer:        c.ExtraConfig.ServiceAccountIssuer,
		ExtendExpiration:            c.ExtraConfig.ExtendExpiration,
		ServiceAccountMaxExpiration: c.ExtraConfig.ServiceAccountMaxExpiration,
		APIAudiences:                c.GenericConfig.Authentication.APIAudiences,
	}
	legacyRESTStorage, apiGroupInfo, err := legacyRESTStorageProvider.NewLegacyRESTStorage(c.ExtraConfig.APIResourceConfigSource, restOptionsGetter)
	if err != nil {
		return fmt.Errorf("error building core storage: %v", err)
	}
	if len(apiGroupInfo.VersionedResourcesStorageMap) == 0 { // if all core storage is disabled, return.
		return nil
	}
	controllerName := "bootstrap-controller"
	coreClient := corev1client.NewForConfigOrDie(c.GenericConfig.LoopbackClientConfig)
	eventsClient := eventsv1client.NewForConfigOrDie(c.GenericConfig.LoopbackClientConfig)
	bootstrapController, err := c.NewBootstrapController(legacyRESTStorage, coreClient, coreClient, eventsClient, coreClient.RESTClient())

@aojea (Member) commented May 13, 2022

I'm still not clear on what the root cause is. It seems to be an interaction with the repair loop (confirmed by your changes), but the repair loops receive the registry allocator, no?

$ kubectl get --raw /metrics | egrep "^kube_apiserver_clusterip_allocator"
kube_apiserver_clusterip_allocator_allocated_ips{cidr="10.96.0.0/12"} 2
kube_apiserver_clusterip_allocator_allocation_total{cidr="10.96.0.0/12",scope="static"} 2
kube_apiserver_clusterip_allocator_available_ips{cidr="10.96.0.0/12"} 1.048572e+06

But after a while, with nothing being done, it incorrectly reports allocated_ips as 0 and allocation_total as 4:

$ kubectl get --raw /metrics | egrep "^kube_apiserver_clusterip_allocator"
kube_apiserver_clusterip_allocator_allocated_ips{cidr="10.96.0.0/12"} 0
kube_apiserver_clusterip_allocator_allocation_total{cidr="10.96.0.0/12",scope="static"} 4

@tksm (Contributor, Author) commented May 13, 2022

@aojea Repair's runOnce() creates new IP allocators (#L171, #L181). I think these allocators overwrite the metrics at #L231, #L236, and #L307.
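For illustration only, here is a self-contained toy of that failure mode, not the real allocator code: the allocator metrics are package-level Prometheus collectors, so every allocator instance that records writes to the same series, and a freshly built repair-loop allocator clobbers the values written by the serving allocator. The metric name matches the output above; everything else is a stand-in.

// Toy reproduction of the failure mode; not the real ipallocator code.
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
)

// allocatedIPs is package-level (global), like the real allocator gauges; the
// name matches the metric in the output above.
var allocatedIPs = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{Name: "kube_apiserver_clusterip_allocator_allocated_ips"},
	[]string{"cidr"},
)

// toyAllocator stands in for an allocator instance: every instance reports
// into the same global gauge.
type toyAllocator struct {
	cidr      string
	allocated int
}

func (a *toyAllocator) report() {
	allocatedIPs.WithLabelValues(a.cidr).Set(float64(a.allocated))
}

// value reads the current gauge value back for a CIDR.
func value(cidr string) float64 {
	m := &dto.Metric{}
	_ = allocatedIPs.WithLabelValues(cidr).Write(m)
	return m.GetGauge().GetValue()
}

func main() {
	serving := &toyAllocator{cidr: "10.96.0.0/20", allocated: 4}
	serving.report()
	fmt.Println(value("10.96.0.0/20")) // 4

	// A repair-loop-style fresh allocator for the same CIDR overwrites the
	// value written by the serving allocator when it reports.
	rebuilt := &toyAllocator{cidr: "10.96.0.0/20", allocated: 0}
	rebuilt.report()
	fmt.Println(value("10.96.0.0/20")) // 0, although 4 IPs are still allocated
}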

@wojtek-t (Member)

xref #109975, which might help with clarifying the semantics

@aojea (Member) commented May 13, 2022

> @aojea Repair's runOnce() creates new IP allocators (#L171, #L181). I think these allocators overwrite the metrics at #L231, #L236, and #L307.

I see, so I think this is the one that should not be able to update metrics:

rebuilt, err := ipallocator.NewInMemory(network)

	base:    base,
	max:     maximum(0, int(max)),
	family:  family,
	metrics: &emptyMetricsRecorder{}, // disabled by default
Member:

Then, should we call registerMetrics() in L96, or only when we enable the metrics?


Comment on lines 446 to 448
	// create metrics disabled IPv4 allocator
	// this metrics should be ignored
	c, err := NewInMemory(clusterCIDRv4)
Member:

Can we move this to an individual test instead of overloading the current test? Something like (see the sketch below):
create a with metrics
create b without metrics
check that b doesn't update a's metrics
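A rough, self-contained sketch of that test shape (toy stand-ins, not the real ipallocator package or the PR's actual test):

// Toy sketch of the suggested test: allocator "a" has metrics enabled,
// allocator "b" does not, and b's activity must not disturb a's values.
package toy

import "testing"

// allocatedIPs stands in for the shared, package-level allocated_ips gauge.
var allocatedIPs = map[string]int{}

type recorder interface {
	setAllocated(cidr string, n int)
}

type realRecorder struct{}

func (realRecorder) setAllocated(cidr string, n int) { allocatedIPs[cidr] = n }

type noopRecorder struct{} // what a metrics-disabled allocator would hold

func (noopRecorder) setAllocated(string, int) {}

type toyAllocator struct {
	cidr      string
	allocated int
	metrics   recorder
}

func (a *toyAllocator) allocateNext() {
	a.allocated++
	a.metrics.setAllocated(a.cidr, a.allocated)
}

func TestDisabledAllocatorDoesNotUpdateMetrics(t *testing.T) {
	allocatedIPs = map[string]int{} // reset the shared gauge stand-in

	a := &toyAllocator{cidr: "10.96.0.0/20", metrics: realRecorder{}} // metrics enabled
	b := &toyAllocator{cidr: "10.96.0.0/20", metrics: noopRecorder{}} // metrics disabled

	a.allocateNext()
	a.allocateNext()
	b.allocateNext() // must not overwrite the value recorded by a

	if got := allocatedIPs["10.96.0.0/20"]; got != 2 {
		t.Fatalf("allocated_ips = %d, want 2 (allocator b overwrote a's metrics)", got)
	}
}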


@@ -86,6 +87,8 @@ type Range struct {
	family api.IPFamily

	alloc allocator.Interface
	// metrics is a metrics recorder that can be disabled
	metrics metricsRecorderInterface
Member:

I like this solution and it seems aligned with the rest of the code base.
@thockin, what do you think?

Member:

@dgrisonnet (sig-instrumentation) I'd like your opinion too; the fact that metrics are global is 😬. What do you think about this solution?

Member:

Yeah, it is not ideal to have metrics declared globally, but at the same time most of the codebase/ecosystem is doing that, and I don't think we have had any issues in the past because of it. The reason is that to "enable" a metric you have to register it into a registry, so you always need an extra step before your metric starts being exposed. So I don't really think having them global is a problem as such.

That said, the approach you've taken is one of the potential ways to make metrics initialization cleaner, but I personally prefer the one taken here: https://github.com/prometheus-operator/prometheus-operator/blob/main/pkg/operator/operator.go#L180-L272, which creates structures that group similar metrics together. You are then able to initialize and register metrics via a simple call to "New". We might have some precedent for that or something similar in the codebase, but off the top of my head I can't think of any that I have seen.
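For comparison, here is a rough sketch of the shape described above, using the plain Prometheus client rather than the kube-apiserver metrics wrappers; all Go names here are illustrative.

// Illustrative sketch of the "metrics struct + constructor" alternative; not
// code from prometheus-operator or kube-apiserver.
package main

import "github.com/prometheus/client_golang/prometheus"

// clusterIPMetrics groups the allocator's collectors so nothing is a
// package-level global.
type clusterIPMetrics struct {
	allocatedIPs *prometheus.GaugeVec
	availableIPs *prometheus.GaugeVec
}

// newClusterIPMetrics creates and registers the collectors in one step; only
// callers that hold the returned struct can record values.
func newClusterIPMetrics(reg prometheus.Registerer) *clusterIPMetrics {
	m := &clusterIPMetrics{
		allocatedIPs: prometheus.NewGaugeVec(prometheus.GaugeOpts{
			Name: "kube_apiserver_clusterip_allocator_allocated_ips",
			Help: "Allocated Service cluster IPs per CIDR.",
		}, []string{"cidr"}),
		availableIPs: prometheus.NewGaugeVec(prometheus.GaugeOpts{
			Name: "kube_apiserver_clusterip_allocator_available_ips",
			Help: "Available Service cluster IPs per CIDR.",
		}, []string{"cidr"}),
	}
	reg.MustRegister(m.allocatedIPs, m.availableIPs)
	return m
}

func main() {
	reg := prometheus.NewRegistry()
	m := newClusterIPMetrics(reg)
	m.allocatedIPs.WithLabelValues("10.96.0.0/20").Set(4)
	// An allocator constructed without a clusterIPMetrics (e.g. by the repair
	// loop) simply has nowhere to record, so it cannot overwrite these values.
}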

Member:

I think the current proposal is good enough; maybe it's a TODO for sig-instrumentation to work on standardizing this ;)

Member:

Yeah, it is good enough. We haven't standardized anything for now, so the implementation is up to the component owners.

That's a good point, I'll bring it to the group.

@@ -364,6 +368,11 @@ func (r *Range) Destroy() {
	r.alloc.Destroy()
}

// EnableMetrics enables metrics recording.
func (r *Range) EnableMetrics() {
	r.metrics = &metricsRecorder{}
Member:

Should the metrics registration (registerMetrics()) happen here now?
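One way that could look, sketched with the component-base metrics helpers; the collector variable name clusterIPAllocated and the exact wiring are assumptions for illustration (the author's reply is not captured above), but a sync.Once guard keeps allocators holding the no-op recorder from registering anything.

// Sketch only: lazy, once-guarded registration tied to EnableMetrics.
// The gauge name matches the metric in the PR description; the Go variable
// names are illustrative.
package ipallocator

import (
	"sync"

	"k8s.io/component-base/metrics"
	"k8s.io/component-base/metrics/legacyregistry"
)

var (
	registerMetricsOnce sync.Once

	clusterIPAllocated = metrics.NewGaugeVec(&metrics.GaugeOpts{
		Name: "kube_apiserver_clusterip_allocator_allocated_ips",
		Help: "Allocated Service cluster IPs per CIDR.",
	}, []string{"cidr"})
)

// registerMetrics registers the collectors at most once, on first use.
func registerMetrics() {
	registerMetricsOnce.Do(func() {
		legacyregistry.MustRegister(clusterIPAllocated)
	})
}

// EnableMetrics (per the diff above) would then call registerMetrics() before
// swapping in the real recorder, e.g.:
//
//	func (r *Range) EnableMetrics() {
//		registerMetrics()
//		r.metrics = &metricsRecorder{}
//	}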


@aojea (Member) commented May 13, 2022

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels May 13, 2022
@aojea (Member) commented May 13, 2022

/assign @aojea @thockin

@tksm tksm force-pushed the fix-ipallocator-metrics branch from f2f925a to 187af77 on May 13, 2022 13:33
@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels May 13, 2022
@tksm (Contributor, Author) commented May 13, 2022

@aojea Thank you for your comment. Please take another look.

@logicalhan (Member)

/triage accepted
/assign @dashpole

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels May 19, 2022
@aojea (Member) commented May 23, 2022

I'm fine with this
/approve

Defer lgtm to sig-instrumentation
and final approval to @thockin

@thockin (Member) commented May 25, 2022

@dashpole do you accept LGTM responsibility?

/approve

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: aojea, thockin, tksm

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 25, 2022
@dashpole (Contributor)

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 25, 2022
@k8s-triage-robot

The Kubernetes project has merge-blocking tests that are currently too flaky to consistently pass.

This bot retests PRs for certain kubernetes repos according to the following rules:

  • The PR does not have any do-not-merge/* labels
  • The PR does not have the needs-ok-to-test label
  • The PR is mergeable (does not have a needs-rebase label)
  • The PR is approved (has cncf-cla: yes, lgtm, approved labels)
  • The PR is failing tests required for merge

You can:

/retest

@k8s-ci-robot k8s-ci-robot merged commit 68fc207 into kubernetes:master May 25, 2022
@k8s-ci-robot k8s-ci-robot added this to the v1.25 milestone May 25, 2022
@tksm tksm deleted the fix-ipallocator-metrics branch May 25, 2022 23:56
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/bug Categorizes issue or PR as related to a bug. lgtm "Looks good to me", indicates that a PR is ready to be merged. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. release-note Denotes a PR that will be considered when it comes time to generate release notes. sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Successfully merging this pull request may close these issues.

The metrics for the cluster IP allocator are incorrectly reported
9 participants