47 changes: 47 additions & 0 deletions other-examples/collector/otel-k8-nr-infra/README.md
@@ -0,0 +1,47 @@
# Monitor Kubernetes with the New Relic infrastructure agent and correlate with OpenTelemetry APM services

This example demonstrates correlation between Kubernetes containers monitored with the [New Relic infrastructure agent](https://docs.newrelic.com/docs/infrastructure/introduction-infra-monitoring/) and OpenTelemetry APM services.

## Requirements

* A Kubernetes cluster, with the `kubectl` command-line tool configured to communicate with it (see the verification sketch after this list). Docker Desktop [includes a standalone Kubernetes server and client](https://docs.docker.com/desktop/kubernetes/), which is useful for local testing.
* [A New Relic account](https://one.newrelic.com/)
* [A New Relic license key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#license-key)
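
A quick way to confirm that `kubectl` can talk to your cluster before you begin (standard `kubectl` commands, shown here as a convenience):

```shell
# Print the endpoint of the cluster kubectl currently points at
kubectl cluster-info
# Confirm the client can reach the API server and list nodes
kubectl get nodes
```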

## Running the example

1. Update the `NEW_RELIC_API_KEY` value in [config.yaml](./k8s/config.yaml) to your New Relic license key.

   ```yaml
   # ...omitted for brevity
   otlphttp:
     endpoint: https://otlp.nr-data.net
     headers:
       api-key: <NEW_RELIC_API_KEY>
       # New Relic license key used to authenticate the export requests.
       # docs: https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#license-key
   ```
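
   If you prefer not to edit the file by hand, a substitution along these lines also works (a minimal sketch, assuming GNU `sed`; `YOUR_LICENSE_KEY_HERE` is a placeholder for your actual key):

   ```shell
   # Replace the <NEW_RELIC_API_KEY> placeholder in the collector config in place
   sed -i 's/<NEW_RELIC_API_KEY>/YOUR_LICENSE_KEY_HERE/' k8s/config.yaml
   ```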

2. Run the application with the following commands.

   ```shell
   kubectl create namespace opentelemetry-demo
   kubectl apply -n opentelemetry-demo -f k8s/
   ```
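
   Before moving on, it is worth confirming that the collector and the demo service came up (standard `kubectl` commands):

   ```shell
   # Both the otel-collector and adservice pods should reach the Running state
   kubectl get pods -n opentelemetry-demo
   # With the debug logging exporter enabled, exported telemetry appears in the collector logs
   kubectl logs -n opentelemetry-demo deployment/otel-collector --tail=20
   ```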

   * When finished, clean up the resources with the following command. This is also useful for resetting state after you modify the configuration.

     ```shell
     kubectl delete -n opentelemetry-demo -f k8s/
     ```
3. Install the New Relic infrastructure Kubernetes agent (the `nri-bundle` chart) with the following commands, substituting your New Relic license key for `<NEW_RELIC_API_KEY>`.

   ```shell
   KSM_IMAGE_VERSION="v2.10.0"
   helm repo add newrelic https://helm-charts.newrelic.com
   helm repo update
   helm upgrade --install newrelic-bundle newrelic/nri-bundle \
     --namespace=opentelemetry-demo \
     --set global.licenseKey=<NEW_RELIC_API_KEY> \
     --set global.cluster=opentelemetry-demo \
     --set global.lowDataMode=true \
     --set newrelic-infrastructure.privileged=true \
     --set kube-state-metrics.enabled=true \
     --set kube-state-metrics.image.tag=${KSM_IMAGE_VERSION} \
     --set kubeEvents.enabled=true \
     --set newrelic-prometheus-agent.enabled=true \
     --set newrelic-prometheus-agent.lowDataMode=true \
     --set newrelic-prometheus-agent.config.kubernetes.integrations_filter.enabled=false \
     --set logging.enabled=true \
     --set newrelic-logging.lowDataMode=true
   ```
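
   To check that the bundle deployed cleanly (standard Helm and `kubectl` commands):

   ```shell
   # The release should report a deployed status
   helm status newrelic-bundle -n opentelemetry-demo
   # The New Relic agents run alongside the demo pods in the same namespace
   kubectl get pods -n opentelemetry-demo
   ```
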
## Viewing your data

To review your Kubernetes container data in New Relic, navigate to "New Relic -> All Entities -> Containers". You should see an entity named `adservice`, as defined in the `name` property of the respective containers in [deployment.yaml](k8s/deployment.yaml). Click it to view the container summary.

To review your OpenTelemetry APM data in New Relic, navigate to "New Relic -> All Entities -> OpenTelemetry". You should see an entity named `adservice`, as defined by `OTEL_SERVICE_NAME` in [deployment.yaml](k8s/deployment.yaml). Click it to view the OpenTelemetry summary, then click "Service Map" in the left navigation and notice the relationship to the `adservice` container entity.
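
If entities do not appear after a few minutes, a quick way to confirm telemetry is flowing is to look for `adservice` in the collector's debug output (this assumes the debug `logging` exporter from [config.yaml](k8s/config.yaml) is still enabled):

```shell
# The logging exporter prints exported telemetry, so adservice spans should appear here
kubectl logs -n opentelemetry-demo deployment/otel-collector | grep -i adservice
```
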
21 changes: 21 additions & 0 deletions other-examples/collector/otel-k8-nr-infra/k8s/clusterrole.yml
@@ -0,0 +1,21 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: opentelemetry-demo-otelcol
  labels:
    app.kubernetes.io/name: otelcol
    app.kubernetes.io/instance: opentelemetry-demo
    app.kubernetes.io/version: "0.82.0"
rules:
  - apiGroups: [""]
    resources: ["pods", "events", "namespaces", "nodes", "nodes/metrics", "pods/status", "services", "endpoints"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["nodes/spec", "nodes/stats", "nodes/proxy", "pods/logs"]
    verbs: ["get"]
  - apiGroups: ["apps"]
    resources: ["replicasets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["replicasets"]
    verbs: ["get", "list", "watch"]
16 changes: 16 additions & 0 deletions other-examples/collector/otel-k8-nr-infra/k8s/clusterrolebinding.yml
@@ -0,0 +1,16 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opentelemetry-demo-otelcol
  labels:
    app.kubernetes.io/name: otelcol
    app.kubernetes.io/instance: opentelemetry-demo
    app.kubernetes.io/version: "0.82.0"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: opentelemetry-demo-otelcol
subjects:
  - kind: ServiceAccount
    name: default
    namespace: opentelemetry-demo
82 changes: 82 additions & 0 deletions other-examples/collector/otel-k8-nr-infra/k8s/config.yaml
@@ -0,0 +1,82 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
  namespace: opentelemetry-demo
data:
  otel-collector-config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      k8sattributes:
        auth_type: "serviceAccount"
        passthrough: false
        filter:
          # Only retrieve pods running on the same node as the collector.
          # KUBE_NODE_NAME is injected via the downward API in deployment.yaml.
          node_from_env_var: KUBE_NODE_NAME
        extract:
          # The attributes provided in 'metadata' will be added to associated resources
          metadata:
            - k8s.namespace.name
            - k8s.deployment.name
            - k8s.statefulset.name
            - k8s.daemonset.name
            - k8s.cronjob.name
            - k8s.job.name
            - k8s.node.name
            - k8s.pod.name
            - k8s.pod.uid
            - k8s.pod.start_time
            - k8s.container.name
          labels:
            # This label extraction rule takes the value of the 'app.kubernetes.io/component'
            # label and maps it to the 'app.label.component' attribute, which will be added
            # to the associated resources
            - tag_name: app.label.component
              key: app.kubernetes.io/component
              from: pod
        pod_association:
          # This rule associates all resources containing the 'k8s.pod.ip' attribute with the
          # matching pods. If this attribute is not present in the resource, this rule will
          # not be able to find the matching pod.
          - sources:
              - from: resource_attribute
                name: k8s.pod.ip
          # This rule associates all resources containing the 'k8s.pod.uid' attribute with the
          # matching pods. If this attribute is not present in the resource, this rule will
          # not be able to find the matching pod.
          - sources:
              - from: resource_attribute
                name: k8s.pod.uid
          # This rule uses the IP of the incoming connection from which the resource is
          # received, and finds the matching pod based on the 'pod.status.podIP' of the
          # observed pods
          - sources:
              - from: connection
      resource:
        attributes:
          - key: host.id
            from_attribute: host.name
            action: upsert
          # TODO (chris): Upsert only when cluster name not found (resource detection override: true)
          - key: k8s.cluster.name
            action: upsert
            value: "opentelemetry-demo"
          - key: k8s.container.name
            action: upsert
            from_attribute: service.name

    exporters:
      logging:
        loglevel: debug
      otlphttp:
        endpoint: "https://otlp.nr-data.net"
        headers:
          api-key: "<NEW_RELIC_API_KEY>"

    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [k8sattributes, resource]
          exporters: [logging, otlphttp]
        metrics:
          receivers: [otlp]
          processors: [k8sattributes, resource]
          exporters: [logging, otlphttp]
76 changes: 76 additions & 0 deletions other-examples/collector/otel-k8-nr-infra/k8s/deployment.yaml
@@ -0,0 +1,76 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  namespace: opentelemetry-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:0.98.0
          command:
            - "/otelcol-contrib"
          args:
            - "--config=/etc/otel/otel-collector-config.yaml"
          env:
            # The k8sattributes filter in the collector config reads KUBE_NODE_NAME,
            # so inject the node name via the downward API
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: collector-config
              mountPath: /etc/otel/otel-collector-config.yaml
              subPath: otel-collector-config.yaml # Using subPath to map a single file
      volumes:
        - name: collector-config
          configMap:
            name: otel-collector-config

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adservice
  namespace: opentelemetry-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: adservice
  template:
    metadata:
      labels:
        app: adservice
    spec:
      containers:
        - name: adservice
          image: otel/demo:1.10.0-adservice # Use this specific image or the latest available version
          ports:
            - containerPort: 8080
          env:
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://otel-collector:4318"
            - name: OTEL_SERVICE_NAME
              value: "adservice"
            - name: AD_SERVICE_PORT
              value: "8080"
            - name: OTEL_CLUSTER_NAME
              value: "opentelemetry-demo"

---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: opentelemetry-demo
spec:
  ports:
    - port: 4317
      name: otlp-grpc
    - port: 4318
      name: otlp-http
  selector:
    app: otel-collector
8 changes: 8 additions & 0 deletions other-examples/collector/otel-k8-nr-infra/k8s/sa.yml
@@ -0,0 +1,8 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  labels:
    app.kubernetes.io/name: otelcol
    app.kubernetes.io/instance: opentelemetry-demo
    app.kubernetes.io/version: "0.82.0"