Commit 747bec0 (1 parent: 30519f2)

docs: add infra relationship example for collector

File tree: 5 files changed, +330 -0 lines changed
Lines changed: 144 additions & 0 deletions
# Monitoring Hosts with OpenTelemetry Collector

This example demonstrates configuring the [internal telemetry](https://opentelemetry.io/docs/collector/internal-telemetry/) of an OTel Collector in a way that also provides infrastructure relationships between the collector and its host and container.

## Architecture
```mermaid
flowchart LR
    subgraph Collector
        direction LR
        internaltelemetry["Internal Telemetry"]
        subgraph Pipeline
            direction TB
            otlpreceiver["otlpreceiver"]
            resourcedetectionprocessor["resourcedetectionprocessor <br> (adds host.id)"]
            otlphttpexporter["otlphttpexporter"]
        end
    end

    internaltelemetry -- "OTLP/HTTP" --> otlpreceiver
    otlpreceiver --> resourcedetectionprocessor
    resourcedetectionprocessor --> otlphttpexporter
    otlphttpexporter --> newrelic["New Relic OTLP Endpoint"]

    k8sdownwardapi["k8s Downward API"]
    k8sdownwardapi -- "namespace, pod" --> collectorconfig

    collectorconfig["Collector Config <br> (adds <br> k8s.cluster.name, <br> k8s.namespace.name, <br> k8s.pod.name, <br> k8s.container.name)"]
    collectorconfig --> internaltelemetry
```
## Requirements

- Your infrastructure must be instrumented, meaning container and/or host entities show up in NR. We recommend using the [nr-k8s-otel-collector](https://github.com/newrelic/helm-charts/tree/master/charts/nr-k8s-otel-collector) helm chart.
- You need a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with it. This example was tested on [AWS EKS](https://aws.amazon.com/eks/) with Amazon Linux nodes. The steps for achieving a container relationship should be universal across k8s clusters; they also work on local clusters like `kind` or `minikube`.
- The host relationship is synthesized by matching the `host.id` attribute on the host and collector telemetry. How this attribute is determined depends heavily on your environment and is driven by the `resourcedetectionprocessor`, which does not support local clusters out of the box. You might be able to make it work by tweaking the processor configuration, but we won't cover that here as there are too many variables involved.
- [A New Relic account](https://one.newrelic.com/)
- [A New Relic license key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#license-key)
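That said, the `resourcedetection` configuration used in this example includes the `env` detector, which reads `OTEL_RESOURCE_ATTRIBUTES`, so for manual testing or debugging you could supply `host.id` yourself. A hypothetical DaemonSet env fragment (the `K8S_NODE_NAME` name is our own; whether the node name is a meaningful `host.id` depends entirely on your setup):

```yaml
env:
  # Expose the node name via the Downward API...
  - name: K8S_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
  # ...and hand it to the env resource detector as host.id.
  # $(VAR) is Kubernetes dependent-env-var expansion.
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "host.id=$(K8S_NODE_NAME)"
```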
### Collector

We'll use [otelcol-contrib](https://github.com/open-telemetry/opentelemetry-collector-releases/tree/main/distributions/otelcol-contrib) for this example, but if you are bringing your own collector, here is what each component is for:

- [otlpreceiver](https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md) provides a hook for funnelling the internal telemetry into a pipeline defined in the collector itself.
- [resourcedetectionprocessor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor) adds `host.id` to the internal telemetry.
- [otlphttpexporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter) sends the telemetry to New Relic.
- (optional) [memorylimiterprocessor](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/memorylimiterprocessor) and [batchprocessor](https://github.com/open-telemetry/opentelemetry-collector/tree/v0.102.0/processor/batchprocessor) for best practices.
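Wired into a pipeline, these components look like the following trimmed sketch (adapted from the full config in this example's k8s manifests, which also contains the `memory_limiter` and `batch` settings and the matching logs/traces pipelines):

```yaml
receivers:
  otlp/internal:
    protocols:
      http:  # internal telemetry is sent here via OTLP/HTTP

processors:
  resourcedetection/internal:
    detectors: [env, eks, ec2, aks, azure, gcp]  # adds host.id

exporters:
  otlphttp/internal:
    endpoint: "${env:NEW_RELIC_OTLP_ENDPOINT}"
    headers:
      api-key: "${env:NEW_RELIC_LICENSE_KEY}"

service:
  pipelines:
    metrics/internal:
      receivers: [otlp/internal]
      processors: [resourcedetection/internal]
      exporters: [otlphttp/internal]
```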
### Appendix

- Collector entity definition: [EXT-SERVICE](https://github.com/newrelic/entity-definitions/blob/main/entity-types/ext-service/definition.yml#L72-L94)
  - requires `service.name` on internal telemetry
- Collector to container relationship: [INFRA_KUBERNETES_CONTAINER-to-EXT_SERVICE](https://github.com/newrelic/entity-definitions/blob/main/relationships/synthesis/INFRA_KUBERNETES_CONTAINER-to-EXT_SERVICE.yml#L40)
  - requires `k8s.cluster.name`, `k8s.namespace.name`, `k8s.pod.name`, and `k8s.container.name` on internal telemetry, matching the equivalent attributes on the container telemetry
- Collector to host relationship: [INFRA-HOST-to-EXT-SERVICE](https://github.com/newrelic/entity-definitions/blob/main/relationships/synthesis/INFRA-HOST-to-EXT-SERVICE.yml)
  - requires `host.id` on internal telemetry that matches the host telemetry
## Running the example

1. Instrument your infrastructure, e.g. install [nr-k8s-otel-collector](https://github.com/newrelic/helm-charts/tree/master/charts/nr-k8s-otel-collector):

   ```shell
   # Cluster name is hard coded as the downward API does not expose it
   license_key='INSERT_API_KEY'
   cluster_name='INSERT_CLUSTER_NAME'
   helm repo add newrelic https://helm-charts.newrelic.com
   helm upgrade 'nr-k8s-otel-collector-release' newrelic/nr-k8s-otel-collector \
     --install \
     --create-namespace --namespace 'nr-k8s-otel-collector' \
     --dependency-update \
     --set "cluster=${cluster_name}" \
     --set "licenseKey=${license_key}"
   ```

1. Update the values in [secrets.yaml](./k8s/secrets.yaml) based on the comments and your setup.
   * Note: be careful to avoid inadvertently sharing secrets when modifying `secrets.yaml`. To make git ignore changes to this file, run `git update-index --skip-worktree k8s/secrets.yaml`.
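   Alternatively, if you prefer not to edit `secrets.yaml` at all, an equivalent secret can be created imperatively. A sketch using the same namespace, secret name, and keys as the manifests in this example (replace the placeholder values as in `secrets.yaml`):

   ```shell
   # Creates the namespace and secret the DaemonSet expects;
   # values mirror the placeholders documented in k8s/secrets.yaml.
   kubectl create namespace internal-telemetry-infra-relationship
   kubectl create secret generic collector-internal-telemetry-secret \
     --namespace internal-telemetry-infra-relationship \
     --from-literal=NEW_RELIC_API_KEY='INSERT_API_KEY' \
     --from-literal=NEW_RELIC_OTLP_ENDPOINT='https://otlp.nr-data.net/' \
     --from-literal=CLUSTER_NAME='INSERT_CLUSTER_NAME' \
     --from-literal=COLLECTOR_SERVICE_NAME='infra-relationships-service'
   ```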
1. Run the application with the following command.

   ```shell
   kubectl apply -f k8s/
   ```

   * When finished, clean up resources with the following command. This is also useful as a reset when modifying configuration.

   ```shell
   kubectl delete -f k8s/
   helm uninstall 'nr-k8s-otel-collector-release' --namespace 'nr-k8s-otel-collector'
   ```
## Viewing your data

### In the UI

The infrastructure relationships are used to light up our APM UI. Navigate to "New Relic -> All Entities -> Services - OpenTelemetry" and click on the service whose name corresponds to the value provided in `secrets.yaml` for `COLLECTOR_SERVICE_NAME`. The bottom of the 'Summary' page shows metrics for the infrastructure entities related to your collector.

### From the CLI

You can also query the relationships through NerdGraph using the [newrelic CLI](https://github.com/newrelic/newrelic-cli/blob/main/docs/GETTING_STARTED.md#environment-setup). Note that the API key in this case is NOT an ingest key (as used above), but a user key.

The following script should work if your service name is sufficiently unique, as the first query determines the entity guid based on the service name. If you already have the correct entity guid, you can skip the first query and fetch the relationships directly.
```bash
#!/bin/bash
export NEW_RELIC_REGION='US'
export NEW_RELIC_API_KEY='INSERT_USER_KEY'
SERVICE_NAME='INSERT_SERVICE_NAME'

ENTITY=$(newrelic nerdgraph query "{
  actor {
    entitySearch(queryBuilder: {name: \"${SERVICE_NAME}\"}) {
      results {
        entities {
          guid
          name
        }
      }
    }
  }
}")
SERVICE_ENTITY_GUID=$(jq -r '.actor.entitySearch.results.entities[0].guid' <<< "$ENTITY")

newrelic nerdgraph query "{
  actor {
    entity(guid: \"${SERVICE_ENTITY_GUID}\") {
      relatedEntities(filter: {relationshipTypes: {include: HOSTS}}) {
        results {
          source {
            entity {
              name
              guid
              domain
              type
            }
          }
          type
          target {
            entity {
              guid
              name
            }
          }
        }
      }
    }
  }
}"
```
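If you capture the second query's output in a variable (e.g. `RELATIONSHIPS=$(newrelic nerdgraph query "...")`, a hypothetical variation of the script above), `jq` can reduce the response to one line per relationship. The field paths mirror the GraphQL selection:

```shell
# One line per relationship: relationship type, source domain/type, source name.
# Field paths follow the relatedEntities query above.
jq -r '.actor.entity.relatedEntities.results[]
       | "\(.type)\t\(.source.entity.domain)/\(.source.entity.type)\t\(.source.entity.name)"' \
  <<< "$RELATIONSHIPS"
```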
Lines changed: 5 additions & 0 deletions
---
apiVersion: v1
kind: Namespace
metadata:
  name: internal-telemetry-infra-relationship
Lines changed: 69 additions & 0 deletions
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: collector
  namespace: internal-telemetry-infra-relationship
  labels:
    app.kubernetes.io/name: collector
spec:
  selector:
    matchLabels:
      name: collector
  template:
    metadata:
      labels:
        name: collector
    spec:
      containers:
        - name: collector-infra-relationships
          image: otel/opentelemetry-collector-contrib:0.130.1
          args:
            - --config=/config/config.yaml
            # add k8s metadata as resource attributes
            - '--config=yaml:service::telemetry::resource::k8s.cluster.name: ${env:CLUSTER_NAME}'
            - '--config=yaml:service::telemetry::resource::k8s.namespace.name: "${env:NAMESPACE}"'
            - '--config=yaml:service::telemetry::resource::k8s.pod.name: "${env:POD_NAME}"'
            # Hardcoded container name - needs to match the container's name above
            - '--config=yaml:service::telemetry::resource::k8s.container.name: collector-infra-relationships'
          env:
            # New Relic OTLP endpoint
            - name: NEW_RELIC_OTLP_ENDPOINT
              valueFrom:
                secretKeyRef:
                  name: collector-internal-telemetry-secret
                  key: NEW_RELIC_OTLP_ENDPOINT
            # The New Relic API key used to authenticate export requests.
            # Defined in secrets.yaml
            - name: NEW_RELIC_LICENSE_KEY
              valueFrom:
                secretKeyRef:
                  name: collector-internal-telemetry-secret
                  key: NEW_RELIC_API_KEY
            # defines the collector's entity name in New Relic
            - name: SERVICE_NAME
              valueFrom:
                secretKeyRef:
                  name: collector-internal-telemetry-secret
                  key: COLLECTOR_SERVICE_NAME
            - name: CLUSTER_NAME
              valueFrom:
                secretKeyRef:
                  name: collector-internal-telemetry-secret
                  key: CLUSTER_NAME
            # k8s.namespace.name from Downward API
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            # k8s.pod.name from Downward API
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          volumeMounts:
            - name: config
              mountPath: /config
      volumes:
        - name: config
          configMap:
            name: collector-config
Lines changed: 95 additions & 0 deletions
apiVersion: v1
kind: ConfigMap
metadata:
  name: collector-config
  namespace: internal-telemetry-infra-relationship
data:
  config.yaml: |
    receivers:
      otlp/internal:
        protocols:
          # listens on localhost:4318 by default, so the collector's internal telemetry otlp exporters can write to this without further configuration
          http:

    processors:
      batch:
      memory_limiter:
        check_interval: 1s
        limit_percentage: 75
        spike_limit_percentage: 15

      # adds host.id as a resource attribute
      # Note: if host entities are monitored (and thus their entity synthesis driven) by a collector C, this processor should mirror C's configuration
      resourcedetection/internal:
        # only one detector should technically be necessary, but this list covers the big three cloud providers and their managed Kubernetes services
        # the env detector allows providing host.id via env var, i.e. OTEL_RESOURCE_ATTRIBUTES=host.id=<host_id>, for more static setups and manual testing/debugging
        detectors: [env, eks, ec2, aks, azure, gcp]
        timeout: 2s

    exporters:
      otlphttp/internal:
        endpoint: "${env:NEW_RELIC_OTLP_ENDPOINT}"
        headers:
          api-key: "${env:NEW_RELIC_LICENSE_KEY}"

    service:
      pipelines:
        ####
        # insert your normal pipelines
        ####
        logs/internal:
          receivers:
            - otlp/internal
          processors:
            - memory_limiter
            - resourcedetection/internal
            - batch
          exporters:
            - otlphttp/internal
        metrics/internal:
          receivers:
            - otlp/internal
          processors:
            - memory_limiter
            - resourcedetection/internal
            - batch
          exporters:
            - otlphttp/internal
        traces/internal:
          receivers:
            - otlp/internal
          processors:
            - memory_limiter
            - resourcedetection/internal
            - batch
          exporters:
            - otlphttp/internal

      telemetry:
        resource:
          service.name: "${env:SERVICE_NAME}"
        metrics:
          level: detailed
          readers:
            - periodic:
                exporter:
                  otlp:
                    protocol: http/protobuf
                    endpoint: http://localhost:4318
        logs:
          level: INFO
          disable_stacktrace: true
          disable_caller: true
          processors:
            - batch:
                exporter:
                  otlp:
                    protocol: http/protobuf
                    endpoint: http://localhost:4318
        traces:
          processors:
            - batch:
                exporter:
                  otlp:
                    protocol: http/protobuf
                    endpoint: http://localhost:4318
Lines changed: 17 additions & 0 deletions
apiVersion: v1
kind: Secret
metadata:
  name: collector-internal-telemetry-secret
  namespace: internal-telemetry-infra-relationship
stringData:
  # New Relic API key to authenticate the export requests.
  # docs: https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#license-key
  NEW_RELIC_API_KEY: INSERT_API_KEY
  # The default US endpoint is set here. If your account is based in the EU, use `https://otlp.eu01.nr-data.net` instead.
  # docs: https://docs.newrelic.com/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/best-practices/opentelemetry-otlp/#configure-endpoint-port-protocol
  NEW_RELIC_OTLP_ENDPOINT: https://otlp.nr-data.net/
  # The cluster name; will be added as attribute `k8s.cluster.name` to the internal telemetry.
  CLUSTER_NAME: INSERT_CLUSTER_NAME
  # Determines the entity name in New Relic; will be added as `service.name` to internal telemetry.
  COLLECTOR_SERVICE_NAME: infra-relationships-service
