Introduction

In the context of gNMI subscriptions, and in addition to the terminal output, gnmic supports multiple output options:

- file (stdout, stderr or a local file)
- NATS
- STAN (NATS Streaming)
- Kafka
- UDP / TCP
- InfluxDB
- Prometheus

These outputs can be mixed and matched at will with the different gNMI subscribe targets.

With multiple outputs defined in the configuration file, you can collect once and export the subscription updates to multiple locations, each formatted differently.

Defining outputs

To define an output, add an outputs section to the configuration file:

# part of ~/gnmic.yml config file
outputs:
  output1:
    type: file # output type
    file-type: stdout # or stderr
    format: json
  output2:
    type: file
    filename: /path/to/localFile.log  
    format: protojson
  output3:
    type: nats # output type
    address: 127.0.0.1:4222 # comma separated nats servers addresses
    subject-prefix: telemetry # NATS subject prefix for published updates
    format: event
  output4:
    type: file
    filename: /path/to/localFile.log  
    format: json
  output5:
    type: stan # output type
    address: 127.0.0.1:4223 # comma separated nats streaming servers addresses
    subject: telemetry # STAN subject to publish updates to
    cluster-name: test-cluster # STAN cluster name
    format: proto
  output6:
    type: kafka # output type
    address: localhost:9092 # comma separated kafka brokers addresses
    topic: telemetry # kafka topic
    format: proto
  output7:
    type: stan # output type
    address: 127.0.0.1:4223 # comma separated nats streaming servers addresses
    subject: telemetry
    cluster-name: test-cluster

Note

Output names are case insensitive.

Output formats

Several formats are supported; the table below shows which formats each output type can use:

| Format / Output | proto | protojson | prototext | json | event |
| --------------- | ----- | --------- | --------- | ---- | ----- |
| File            | ❌     | ✔         | ✔         | ✔    | ✔     |
| NATS / STAN     | ✔     | ✔         | ❌         | ✔    | ✔     |
| Kafka           | ✔     | ✔         | ❌         | ✔    | ✔     |
| UDP / TCP       | ✔     | ✔         | ✔         | ✔    | ✔     |
| InfluxDB        | NA    | NA        | NA        | NA   | NA    |
| Prometheus      | NA    | NA        | NA        | NA   | NA    |
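
The format can also be selected per run with gnmic's global --format flag. A minimal sketch; the target address, credentials and path below are placeholders:

# print subscription updates to the terminal in 'event' format
# (address, credentials and path are placeholders)
gnmic -a router1.lab.com -u admin -p secret --insecure \
      subscribe --path /configure/system/name --format event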

Formats examples

protojson:

{
  "update": {
    "timestamp": "1595491618677407414",
    "prefix": {
      "elem": [
        {
          "name": "configure"
        },
        {
          "name": "system"
        }
      ]
    },
    "update": [
      {
        "path": {
          "elem": [
            {
              "name": "name"
            }
          ]
        },
        "val": {
          "stringVal": "sr123"
        }
      }
    ]
  }
}
prototext:

update: {
  timestamp: 1595491704850352047
  prefix: {
    elem: {
      name: "configure"
    }
    elem: {
      name: "system"
    }
  }
  update: {
    path: {
      elem: {
        name: "name"
      }
    }
    val: {
      string_val: "sr123"
    }
  }
}
json:

{
  "source": "172.17.0.100:57400",
  "subscription-name": "sub1",
  "timestamp": 1595491557144228652,
  "time": "2020-07-23T16:05:57.144228652+08:00",
  "prefix": "configure/system",
  "updates": [
    {
      "Path": "name",
      "values": {
        "name": "sr123"
      }
    }
  ]
}
event:

[
  {
    "name": "sub1",
    "timestamp": 1595491586073072000,
    "tags": {
      "source": "172.17.0.100:57400",
      "subscription-name": "sub1"
    },
    "values": {
      "/configure/system/name": "sr123"
    }
  }
]

Binding outputs

Once the outputs are defined, they can be flexibly associated with the targets:

# part of ~/gnmic.yml config file
targets:
  router1.lab.com:
    username: admin
    password: secret
    outputs:
      - output1
      - output3
  router2.lab.com:
    username: gnmi
    password: telemetry
    outputs:
      - output2
      - output3
      - output4
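
With targets and outputs bound, a single subscribe run collects once and fans the updates out to every listed output. A minimal sketch, assuming the configuration file above and an illustrative path:

# start collection for all configured targets; each update is written
# to the outputs bound to its target (the path is illustrative)
gnmic --config ~/gnmic.yml subscribe --path /configure/system/name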

Caching

By default, gNMIc outputs write the received gNMI updates as they arrive (i.e., without caching).

Caching messages before writing them to a remote location can yield benefits such as rate limiting, batch processing, and data replication.

Both the influxdb and prometheus outputs support caching messages before exporting; caching support for other outputs is planned.

See more details about caching here.
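
As a rough sketch, caching is enabled per output in its configuration section; the cache block and its empty value below are assumptions about the syntax, refer to the caching documentation for the exact options:

# part of ~/gnmic.yml config file
outputs:
  output8:
    type: prometheus
    listen: ":9804"   # address the Prometheus output listens on
    cache: {}         # assumed syntax: enable the default in-memory cache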