Merge pull request #2 from DataDog/darcy-rayner/metrics

Darcy rayner/metrics

DarcyRaynerDD authored May 29, 2019
2 parents ecfddc3 + 1b26975 commit 933a459
Showing 37 changed files with 1,905 additions and 29 deletions.
123 changes: 120 additions & 3 deletions README.md
@@ -8,6 +8,11 @@ Datadog's Lambda Go client library enables distributed tracing between serverful
go get github.com/DataDog/dd-lambda-go
```

The following Datadog environment variables should be defined via the AWS CLI or Serverless Framework:

- DATADOG_API_KEY
- DATADOG_APP_KEY

## Usage

Datadog needs to be able to read headers from the incoming Lambda event. Wrap your Lambda handler function like so:
@@ -22,28 +27,140 @@ import (

func main() {
	// Wrap your lambda handler like this
	lambda.Start(ddlambda.WrapHandler(myHandler, nil))
	/* OR with manual configuration options
	lambda.Start(ddlambda.WrapHandler(myHandler, &ddlambda.Config{
		BatchInterval: time.Second * 15,
		APIKey:        "my-api-key",
		AppKey:        "my-app-key",
	}))
	*/
}

func myHandler(ctx context.Context, event MyEvent) (string, error) {
	// ...
}
```

## Custom Metrics

Custom metrics can be submitted using the `Distribution` or `DistributionWithContext` functions. The metrics are submitted as [distribution metrics](https://docs.datadoghq.com/graphing/metrics/distributions/).

```go
ddlambda.DistributionWithContext(
	// ctx should be the same context object passed into your lambda handler function (or a child of it).
	// If you don't want to pass the context through your call hierarchy, you can use Distribution instead (see below).
	ctx,
	"coffee_house.order_value", // Metric name
	12.45,                      // The value
	"product:latte", "order:online", // Associated tags
)
```
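
If you don't want to thread the context through your call hierarchy, you can use `Distribution`, which falls back to the context captured by the wrapper (via `ddlambda.GetContext()`). A minimal sketch:

```go
// Sends the same distribution metric without an explicit context;
// ddlambda.Distribution uses ddlambda.GetContext() internally.
ddlambda.Distribution(
	"coffee_house.order_value", // Metric name
	12.45,                      // The value
	"product:latte", "order:online", // Associated tags
)
```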

### VPC

If your Lambda function is associated with a VPC, you need to ensure it has access to the [public internet](https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/).

## Distributed Tracing

[Distributed tracing](https://docs.datadoghq.com/tracing/guide/distributed_tracing/?tab=python) allows you to propagate a trace context from a service running on a host to a service running on AWS Lambda, and vice versa, so you can see performance end-to-end. Linking is implemented by injecting Datadog trace context into the HTTP request headers.

Distributed tracing headers are language agnostic, e.g., a trace can be propagated from a Java service running on a host to a Lambda function written in Go.

Because the trace context is propagated through HTTP request headers, the Lambda function needs to be triggered by AWS API Gateway or AWS Application Load Balancer.

To enable this feature, make sure any outbound requests have Datadog's tracing headers.

```go
req, err := http.NewRequest("GET", "http://api.yourcompany.com/status", nil)
if err != nil {
	// handle error
}

// Use the same context object given to your lambda handler.
// If you don't want to pass the context through your call hierarchy, you can use ddlambda.GetContext() (see below).
ddlambda.AddTraceHeaders(ctx, req)

client := http.Client{}
client.Do(req)
```
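
If the code making the outbound request doesn't receive the handler's context, a minimal sketch using `ddlambda.GetContext()` (which returns the context of the most recent invocation) looks like this:

```go
// Deep in your call stack, where no context parameter is available:
req, err := http.NewRequest("GET", "http://api.yourcompany.com/status", nil)
if err != nil {
	// handle error
}

// Recover the current invocation's context from the wrapper and inject the trace headers.
ddlambda.AddTraceHeaders(ddlambda.GetContext(), req)

client := http.Client{}
client.Do(req)
```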

## Sampling

The traces for your Lambda function are converted by Datadog from AWS X-Ray traces. To collect as many complete traces as possible, X-Ray needs to sample the same requests that the Datadog tracer decides to sample. You can create X-Ray sampling rules to ensure that requests arriving via API Gateway with the header `x-datadog-sampling-priority:1` or `x-datadog-sampling-priority:2` always get sampled by X-Ray.

These rules can be created using the following AWS CLI commands.

```bash
aws xray create-sampling-rule --cli-input-json file://datadog-sampling-priority-1.json
aws xray create-sampling-rule --cli-input-json file://datadog-sampling-priority-2.json
```

The file content for `datadog-sampling-priority-1.json`:

```json
{
  "SamplingRule": {
    "RuleName": "Datadog-Sampling-Priority-1",
    "ResourceARN": "*",
    "Priority": 9998,
    "FixedRate": 1,
    "ReservoirSize": 100,
    "ServiceName": "*",
    "ServiceType": "AWS::APIGateway::Stage",
    "Host": "*",
    "HTTPMethod": "*",
    "URLPath": "*",
    "Version": 1,
    "Attributes": {
      "x-datadog-sampling-priority": "1"
    }
  }
}
```

The file content for `datadog-sampling-priority-2.json`:

```json
{
  "SamplingRule": {
    "RuleName": "Datadog-Sampling-Priority-2",
    "ResourceARN": "*",
    "Priority": 9999,
    "FixedRate": 1,
    "ReservoirSize": 100,
    "ServiceName": "*",
    "ServiceType": "AWS::APIGateway::Stage",
    "Host": "*",
    "HTTPMethod": "*",
    "URLPath": "*",
    "Version": 1,
    "Attributes": {
      "x-datadog-sampling-priority": "2"
    }
  }
}
```

## Non-proxy integration

If your Lambda function is triggered by API Gateway via the non-proxy integration, then you have to set up a mapping template, which passes the Datadog trace context from the incoming HTTP request headers to the Lambda function via the event object.

If your Lambda function is deployed by the Serverless Framework, such a mapping template gets created by default.

## Opening Issues

If you encounter a bug with this package, we want to hear about it. Before opening a new issue, search the existing issues to avoid duplicates.

When opening an issue, include the Datadog Lambda Layer version, Go version, and stack trace if available. In addition, include the steps to reproduce when appropriate.

You can also open an issue for a feature request.

## Contributing

If you find an issue with this package and have a fix, please feel free to open a pull request following the procedures.

## License

Unless explicitly stated otherwise, all files in this repository are licensed under the Apache License Version 2.0.

This product includes software developed at Datadog (https://www.datadoghq.com/). Copyright 2019 Datadog, Inc.
96 changes: 93 additions & 3 deletions ddlambda.go
@@ -2,16 +2,48 @@ package ddlambda

import (
"context"
"fmt"
"net/http"
"os"
"runtime"
"time"

"github.com/DataDog/dd-lambda-go/internal/metrics"
"github.com/DataDog/dd-lambda-go/internal/trace"
"github.com/DataDog/dd-lambda-go/internal/wrapper"
)

type (
// Config gives options for how ddlambda should behave
Config struct {
// APIKey is your Datadog API key. This is used for sending metrics.
APIKey string
// AppKey is your Datadog App key. This is used for sending metrics.
AppKey string
// ShouldRetryOnFailure is used to turn on retry logic when sending metrics via the API. This can negatively affect the performance of your lambda,
// and should only be turned on if you can't afford to lose metrics data under poor network conditions.
ShouldRetryOnFailure bool
// BatchInterval is the period of time over which metrics are grouped together before being sent to the API or written to logs.
// Any pending metrics are flushed at the end of the lambda.
BatchInterval time.Duration
}
)

const (
// DatadogAPIKeyEnvVar is the environment variable that will be used as an API key by default
DatadogAPIKeyEnvVar = "DATADOG_API_KEY"
// DatadogAPPKeyEnvVar is the environment variable that will be used as an app key by default
DatadogAPPKeyEnvVar = "DATADOG_APP_KEY"
)

// WrapHandler is used to instrument your lambda functions, reading in context from API Gateway.
// It returns a modified handler that can be passed directly to the lambda.Start function.
func WrapHandler(handler interface{}) interface{} {
hl := trace.Listener{}
return trace.WrapHandlerWithListener(handler, &hl)
func WrapHandler(handler interface{}, cfg *Config) interface{} {

// Set up state that is shared between handler invocations
tl := trace.Listener{}
ml := metrics.MakeListener(cfg.toMetricsConfig())
return wrapper.WrapHandlerWithListeners(handler, &tl, &ml)
}

// GetTraceHeaders reads a map containing the DataDog trace headers from a context object.
@@ -27,3 +59,61 @@ func AddTraceHeaders(ctx context.Context, req *http.Request) {
req.Header.Add(key, value)
}
}

// GetContext retrieves the last created lambda context.
// Only use this if you aren't manually passing context through your call hierarchy.
func GetContext() context.Context {
return wrapper.CurrentContext
}

// DistributionWithContext sends a distribution metric to Datadog
func DistributionWithContext(ctx context.Context, metric string, value float64, tags ...string) {
pr := metrics.GetProcessor(ctx)
if pr == nil {
return
}

// We add our own runtime tag to the metric for version tracking
tags = append(tags, getRuntimeTag())

m := metrics.Distribution{
Name: metric,
Tags: tags,
Values: []float64{},
}
m.AddPoint(value)
pr.AddMetric(&m)
}

// Distribution sends a distribution metric to DataDog
func Distribution(metric string, value float64, tags ...string) {
DistributionWithContext(GetContext(), metric, value, tags...)
}

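// toMetricsConfig converts the public Config into the internal metrics.Config, falling back to the
// DATADOG_API_KEY and DATADOG_APP_KEY environment variables when keys aren't set explicitly.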
func (cfg *Config) toMetricsConfig() metrics.Config {

mc := metrics.Config{
ShouldRetryOnFailure: false,
}

if cfg != nil {
mc.BatchInterval = cfg.BatchInterval
mc.ShouldRetryOnFailure = cfg.ShouldRetryOnFailure
mc.APIKey = cfg.APIKey
mc.AppKey = cfg.AppKey
}

if mc.APIKey == "" {
mc.APIKey = os.Getenv(DatadogAPIKeyEnvVar)

}
if mc.AppKey == "" {
mc.AppKey = os.Getenv(DatadogAPIKeyEnvVar)
}
return mc
}

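// getRuntimeTag returns a tag recording the Go runtime version, e.g. "dd_lambda_layer:datadog-go1.12".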
func getRuntimeTag() string {
v := runtime.Version()
return fmt.Sprintf("dd_lambda_layer:datadog-%s", v)
}
11 changes: 11 additions & 0 deletions go.mod
@@ -0,0 +1,11 @@
module github.com/DataDog/dd-lambda-go

go 1.12

require (
github.com/aws/aws-sdk-go v1.19.40 // indirect
github.com/aws/aws-xray-sdk-go v1.0.0-rc.9
github.com/cenkalti/backoff v2.1.1+incompatible
github.com/cihub/seelog v0.0.0-20170130134532-f561c5e57575 // indirect
github.com/pkg/errors v0.8.1 // indirect
)
6 changes: 6 additions & 0 deletions go.sum
@@ -0,0 +1,6 @@
github.com/aws/aws-sdk-go v1.19.40/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-xray-sdk-go v1.0.0-rc.9/go.mod h1:XtMKdBQfpVut+tJEwI7+dJFRxxRdxHDyVNp2tHXRq04=
github.com/cenkalti/backoff v2.1.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/cihub/seelog v0.0.0-20170130134532-f561c5e57575/go.mod h1:9d6lWj8KzO/fd/NrVaLscBKmPigpZpn5YawRPw+e3Yo=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
105 changes: 105 additions & 0 deletions internal/metrics/api.go
@@ -0,0 +1,105 @@
package metrics

import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http"
)

type (
// Client sends metrics to Datadog
Client interface {
SendMetrics(metrics []APIMetric) error
}

// APIClient sends metrics to Datadog via the Datadog API
APIClient struct {
apiKey string
appKey string
baseAPIURL string
httpClient *http.Client
context context.Context
}

postMetricsModel struct {
Series []APIMetric `json:"series"`
}
)

// MakeAPIClient creates a new API client with the given api and app keys
func MakeAPIClient(ctx context.Context, baseAPIURL, apiKey, appKey string) *APIClient {
httpClient := &http.Client{}
return &APIClient{
apiKey: apiKey,
appKey: appKey,
baseAPIURL: baseAPIURL,
httpClient: httpClient,
context: ctx,
}
}

// PrewarmConnection sends a redundant GET request to the Datadog API to prewarm the TLS connection
func (cl *APIClient) PrewarmConnection() error {
req, err := http.NewRequest("GET", cl.makeRoute("validate"), nil)
if err != nil {
return fmt.Errorf("Couldn't create prewarming request: %v", err)
}
req = req.WithContext(cl.context)

cl.addAPICredentials(req)
resp, err := cl.httpClient.Do(req)
if err != nil {
return fmt.Errorf("Couldn't contact server for prewarm request: %v", err)
}
defer resp.Body.Close()
return nil
}

// SendMetrics posts a batch metrics payload to the Datadog API
func (cl *APIClient) SendMetrics(metrics []APIMetric) error {
content, err := marshalAPIMetricsModel(metrics)
if err != nil {
return fmt.Errorf("Couldn't marshal metrics model: %v", err)
}
body := bytes.NewBuffer(content)

req, err := http.NewRequest("POST", cl.makeRoute("series"), body)
if err != nil {
return fmt.Errorf("Couldn't create send metrics request:%v", err)
}
req = req.WithContext(cl.context)

defer req.Body.Close()

cl.addAPICredentials(req)

resp, err := cl.httpClient.Do(req)

if err != nil {
return fmt.Errorf("Failed to send metrics to API")
}
defer resp.Body.Close()

if resp.StatusCode != http.StatusCreated {
return fmt.Errorf("Failed to send metrics to API. Status Code %d", resp.StatusCode)
}
return nil
}

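// addAPICredentials adds the API key and app key as query parameters on the request URL.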
func (cl *APIClient) addAPICredentials(req *http.Request) {
query := req.URL.Query()
query.Add(apiKeyParam, cl.apiKey)
query.Add(appKeyParam, cl.appKey)
req.URL.RawQuery = query.Encode()
}

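// makeRoute joins the base API URL with the given route segment.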
func (cl *APIClient) makeRoute(route string) string {
return fmt.Sprintf("%s/%s", cl.baseAPIURL, route)
}

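// marshalAPIMetricsModel wraps the metrics in a series payload and marshals it to JSON.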
func marshalAPIMetricsModel(metrics []APIMetric) ([]byte, error) {
pm := postMetricsModel{}
pm.Series = metrics
return json.Marshal(pm)
}