diff --git a/docs/docs/guides/create_firehose.md b/docs/docs/guides/create_firehose.md
index a4fae4d4c..89f373580 100644
--- a/docs/docs/guides/create_firehose.md
+++ b/docs/docs/guides/create_firehose.md
@@ -133,9 +133,9 @@ _**Note:**_ [_**DATABASE**_](../sinks/influxdb-sink.md#sink_influx_db_name) _**a
 - it requires the following [variables](../sinks/bigquery-sink.md) to be set.
 - For INPUT_SCHEMA_DATA_TYPE = protobuf, this sink will generate bigquery schema from protobuf message schema and update bigquery table with the latest generated schema.
-  - The protobuf message of a `google.protobuf.Timestamp` field might be needed when table partitioning is enabled.
+- A `google.protobuf.Timestamp` field may be required in the protobuf message when table partitioning is enabled.
 - For INPUT_SCHEMA_DATA_TYPE = json, this sink will generate bigquery schema by infering incoming json. In future we will add support for json schema as well coming from stencil.
-  - The timestamp column is needed incase of partition table. It can be generated at the time of ingestion by setting the config. Please refer to config `SINK_BIGQUERY_ADD_EVENT_TIMESTAMP_ENABLE` in [depot bigquery sink config section](https://github.com/goto/depot/blob/main/docs/reference/configuration/bigquery-sink.md#sink_bigquery_add_event_timestamp_enable)
+- A timestamp column is required for a partitioned table. It can be generated at ingestion time by setting the config `SINK_BIGQUERY_ADD_EVENT_TIMESTAMP_ENABLE`; please refer to the [depot bigquery sink config section](https://github.com/goto/depot/blob/main/docs/reference/configuration/bigquery-sink.md#sink_bigquery_add_event_timestamp_enable)
 - Google cloud credential with some bigquery permission is required to run this sink.
 
 ## Create a Bigtable sink
 
@@ -149,4 +149,11 @@
 
 If you'd like to connect to a sink which is not yet supported, you can create a new sink by following the [contribution guidelines](../contribute/contribution.md)
 
 
+## Create a MaxCompute sink
+- it requires the following [variables](../sinks/maxcompute-sink.md) to be set. Please follow the Configuration section in the MaxCompute Sink documentation for more details.
+- As of now, it only supports INPUT_SCHEMA_DATA_TYPE = protobuf. Schema creation and updates are inferred from the protobuf schema.
+- A `google.protobuf.Timestamp` field may be required in the protobuf message when table partitioning is enabled.
+- INPUT_SCHEMA_DATA_TYPE = json will be supported in the future.
+- The schema/dataset needs to be created in MaxCompute in advance.
+- The service account requires ODPS and Tunnel Service permissions.
\ No newline at end of file
diff --git a/docs/docs/reference/glossary.md b/docs/docs/reference/glossary.md
index 12dff4496..ea5d7bc71 100644
--- a/docs/docs/reference/glossary.md
+++ b/docs/docs/reference/glossary.md
@@ -87,6 +87,7 @@
 [Log sink configs](../advance/generic.md)
 
 ## M
+[MaxCompute Sink](../sinks/maxcompute-sink.md)
 
 [metrics](metrics.md)
 
diff --git a/docs/docs/sinks/maxcompute-sink.md b/docs/docs/sinks/maxcompute-sink.md
index b30482c68..d2722a32a 100644
--- a/docs/docs/sinks/maxcompute-sink.md
+++ b/docs/docs/sinks/maxcompute-sink.md
@@ -1,4 +1,25 @@
-# MaxCompute sink
+# MaxCompute
+
+MaxCompute Sink is a sink connector that writes data from Kafka to an Alibaba Cloud MaxCompute table. It supports consuming data in Protobuf format; JSON support is planned (ToDo).
+
+## Configuration
+
+MaxCompute Sink requires the following variables to be set along with the generic ones.
+Please refer to [Depot's MaxCompute Sink Configuration](https://github.com/goto/depot/blob/main/docs/reference/configuration/maxcompute.md) for the full list.
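+
+As a rough illustration, a Firehose environment targeting MaxCompute might look like the sketch below. The `SINK_MAXCOMPUTE_*` variable names and values here are assumptions for illustration only — confirm the exact keys against the Depot reference above.
+
+```properties
+# Illustrative sketch only; verify every key against Depot's MaxCompute sink configuration reference
+SINK_TYPE=maxcompute
+INPUT_SCHEMA_DATA_TYPE=protobuf
+SINK_MAXCOMPUTE_ODPS_URL=http://service.ap-southeast-5.maxcompute.aliyun.com/api
+SINK_MAXCOMPUTE_TUNNEL_URL=http://dt.ap-southeast-5.maxcompute.aliyun.com
+SINK_MAXCOMPUTE_ACCESS_ID=<access-id>
+SINK_MAXCOMPUTE_ACCESS_KEY=<access-key>
+SINK_MAXCOMPUTE_PROJECT_ID=<project-name>
+SINK_MAXCOMPUTE_TABLE_NAME=<table-name>
+```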
 
 ### Datatype Protobuf
 
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 3191a5d60..c335d21a3 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -29,6 +29,7 @@ module.exports = {
         "sinks/redis-sink",
         "sinks/elasticsearch-sink",
         "sinks/blob-sink",
+        "sinks/maxcompute-sink",
       ],
     },
     {