# Monitoring Hosts with OpenTelemetry Collector

This example demonstrates configuring the [internal telemetry](https://opentelemetry.io/docs/collector/internal-telemetry/) of an OTel Collector in a way that also provides infrastructure relationships between the collector and its host and container.

## Architecture
```mermaid
flowchart LR
    subgraph Collector
        direction LR
        internaltelemetry["Internal Telemetry"]
        subgraph Pipeline
            direction TB
            otlpreceiver["otlpreceiver"]
            resourcedetectionprocessor["resourcedetectionprocessor <br> (adds host.id)"]
            otlphttpexporter["otlphttpexporter"]
        end
    end

    internaltelemetry -- "OTLP/HTTP" --> otlpreceiver
    otlpreceiver --> resourcedetectionprocessor
    resourcedetectionprocessor --> otlphttpexporter
    otlphttpexporter --> newrelic["New Relic OTLP Endpoint"]

    k8sdownwardapi["k8s Downward API"]
    k8sdownwardapi -- "namespace, pod" --> collectorconfig

    collectorconfig["Collector Config <br> (adds <br> k8s.cluster.name, <br> k8s.namespace.name, <br> k8s.pod.name, <br> k8s.container.name)"]
    collectorconfig --> internaltelemetry
```

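The downward API arrow in the diagram corresponds to environment variables on the collector container. A minimal sketch of what that looks like in a pod spec — the variable names (`POD_NAMESPACE`, `POD_NAME`) are illustrative; the manifests in `k8s/` may use different ones:

```yaml
# Illustrative container spec fragment: the Kubernetes downward API
# injects the namespace and pod name as environment variables, which
# the collector config can then reference.
env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
```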
## Requirements

- Your infrastructure must be instrumented, meaning container and/or host entities show up in New Relic. We recommend using the [nr-k8s-otel-collector](https://github.com/newrelic/helm-charts/tree/master/charts/nr-k8s-otel-collector) helm chart.
- You need a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with it. This example was tested on [AWS EKS](https://aws.amazon.com/eks/) with Amazon Linux nodes. The steps for achieving a container relationship should be universal across k8s clusters - they also work on local clusters like `kind` or `minikube`.
- The host relationship is synthesized by matching the `host.id` attribute on the host and collector telemetry. How this attribute is determined depends heavily on your environment and is driven by the `resourcedetectionprocessor`, which does not support local clusters out of the box. You might be able to make it work by tweaking the processor configuration, but we won't cover that here as there are too many variables involved.
- [A New Relic account](https://one.newrelic.com/)
- [A New Relic license key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#license-key)

### Collector
We'll use [otelcol-contrib](https://github.com/open-telemetry/opentelemetry-collector-releases/tree/main/distributions/otelcol-contrib) for the example, but if you are bringing your own collector, here is what each component is for:
- [otlpreceiver](https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md) to provide a hook for the internal telemetry to get funnelled into a pipeline defined in the collector itself.
- [resourcedetectionprocessor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/resourcedetectionprocessor) to add `host.id` to the internal telemetry.
- [otlphttpexporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter) to send telemetry to New Relic.
- (optional) [memorylimiterprocessor](https://github.com/open-telemetry/opentelemetry-collector/tree/main/processor/memorylimiterprocessor) and [batchprocessor](https://github.com/open-telemetry/opentelemetry-collector/tree/v0.102.0/processor/batchprocessor) for best practices.

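Wired together, a collector config along these lines would match the architecture above. This is a sketch, not the config shipped in `k8s/`: the endpoints, the `ec2` detector choice, the `service.name` value, and the environment-variable names are all assumptions you would adapt to your setup. The `service::telemetry` section is the piece that routes the collector's own metrics back into its OTLP receiver:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

processors:
  resourcedetection:
    detectors: [ec2]  # environment-dependent; ec2 yields host.id on EKS

exporters:
  otlphttp:
    endpoint: https://otlp.nr-data.net
    headers:
      api-key: ${env:NEW_RELIC_LICENSE_KEY}

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [resourcedetection]
      exporters: [otlphttp]
  telemetry:
    resource:
      # service.name drives EXT-SERVICE entity synthesis; the k8s.*
      # attributes come from downward-API env vars (assumed names)
      service.name: my-collector
      k8s.namespace.name: ${env:POD_NAMESPACE}
      k8s.pod.name: ${env:POD_NAME}
    metrics:
      readers:
        - periodic:
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://localhost:4318
```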
### Appendix
- Collector entity definition: [EXT-SERVICE](https://github.com/newrelic/entity-definitions/blob/main/entity-types/ext-service/definition.yml#L72-L94)
  - requires `service.name` on the internal telemetry
- Collector to container relationship: [INFRA_KUBERNETES_CONTAINER-to-EXT_SERVICE](https://github.com/newrelic/entity-definitions/blob/main/relationships/synthesis/INFRA_KUBERNETES_CONTAINER-to-EXT_SERVICE.yml#L40)
  - requires `k8s.cluster.name`, `k8s.namespace.name`, `k8s.pod.name`, and `k8s.container.name` on the internal telemetry, matching the equivalent attributes on the container telemetry
- Collector to host relationship: [INFRA-HOST-to-EXT-SERVICE](https://github.com/newrelic/entity-definitions/blob/main/relationships/synthesis/INFRA-HOST-to-EXT-SERVICE.yml)
  - requires `host.id` on the internal telemetry, matching the host telemetry

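To verify these attributes are actually arriving, you can spot-check them with a NRQL query, for example via the newrelic CLI. The account ID and service name below are placeholders:

```shell
# Spot-check the relationship attributes on the collector's internal
# telemetry (replace the account id and service name with your own).
newrelic nrql query --accountId 1234567 --query \
  "FROM Metric SELECT latest(host.id), latest(k8s.cluster.name), latest(k8s.pod.name) WHERE service.name = 'INSERT_SERVICE_NAME' SINCE 30 minutes ago"
```

If any of these come back empty, the corresponding relationship will not be synthesized.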
## Running the example

1. Instrument your infrastructure, e.g. install [nr-k8s-otel-collector](https://github.com/newrelic/helm-charts/tree/master/charts/nr-k8s-otel-collector):
   ```shell
   # Cluster name is hard coded as the downward API does not expose it
   license_key='INSERT_API_KEY'
   cluster_name='INSERT_CLUSTER_NAME'
   helm repo add newrelic https://helm-charts.newrelic.com
   helm upgrade 'nr-k8s-otel-collector-release' newrelic/nr-k8s-otel-collector \
     --install \
     --create-namespace --namespace 'nr-k8s-otel-collector' \
     --dependency-update \
     --set "cluster=${cluster_name}" \
     --set "licenseKey=${license_key}"
   ```
1. Update the values in [secrets.yaml](./k8s/secrets.yaml) based on the comments and your setup.
   * Note: be careful to avoid inadvertently sharing secrets when modifying `secrets.yaml`. To make git ignore changes to this file, run `git update-index --skip-worktree k8s/secrets.yaml`.

1. Run the application with the following command.

   ```shell
   kubectl apply -f k8s/
   ```

   * When finished, clean up resources with the following command. This is also useful as a reset when modifying configuration.

     ```shell
     kubectl delete -f k8s/
     helm uninstall 'nr-k8s-otel-collector-release' --namespace 'nr-k8s-otel-collector'
     ```

## Viewing your data

### In the UI
The infrastructure relationships are used to light up our APM UI. Navigate to "New Relic -> All Entities -> Services - OpenTelemetry" and click on the service whose name corresponds to the value provided for `COLLECTOR_SERVICE_NAME` in `secrets.yaml`. The bottom of the 'Summary' page shows metrics for the infrastructure entities related to your collector.

### From the CLI
You can also query the relationships through NerdGraph using the [newrelic CLI](https://github.com/newrelic/newrelic-cli/blob/main/docs/GETTING_STARTED.md#environment-setup). Note that the API key in this case is NOT an ingest key (as used above), but a user key.

The following script should work if your service name is sufficiently unique: the first query determines the entity GUID from the service name. If you already know the entity GUID, you can skip the first query and fetch the relationships directly.

```bash
#!/bin/bash
export NEW_RELIC_REGION='US'
export NEW_RELIC_API_KEY='INSERT_USER_KEY'
SERVICE_NAME='INSERT_SERVICE_NAME'

ENTITY=$(newrelic nerdgraph query "{
  actor {
    entitySearch(queryBuilder: {name: \"${SERVICE_NAME}\"}) {
      results {
        entities {
          guid
          name
        }
      }
    }
  }
}")
SERVICE_ENTITY_GUID=$(jq -r '.actor.entitySearch.results.entities[0].guid' <<< "$ENTITY")

newrelic nerdgraph query "{
  actor {
    entity(guid: \"${SERVICE_ENTITY_GUID}\") {
      relatedEntities(filter: {relationshipTypes: {include: HOSTS}}) {
        results {
          source {
            entity {
              name
              guid
              domain
              type
            }
          }
          type
          target {
            entity {
              guid
              name
            }
          }
        }
      }
    }
  }
}"
```
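The script leans on `jq` picking the first entity returned by the search, which is why the service name must be unique. You can sanity-check that filter locally against a mock response; the payload below is made up to mirror the response shape the script expects, not real NerdGraph output:

```shell
# Mock of the entitySearch response shape, used only to verify the jq
# filter that extracts the first entity's GUID.
sample='{"actor":{"entitySearch":{"results":{"entities":[{"guid":"ABC123","name":"my-collector"}]}}}}'
jq -r '.actor.entitySearch.results.entities[0].guid' <<< "$sample"
# prints ABC123
```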