aws_dynamodb
Inserts items into a DynamoDB table.
# Common config fields, showing default values
output:
  label: ""
  aws_dynamodb:
    table: "" # No default (required)
    string_columns: {}
    json_map_columns: {}
    max_in_flight: 64
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
# All config fields, showing default values
output:
  label: ""
  aws_dynamodb:
    table: "" # No default (required)
    string_columns: {}
    json_map_columns: {}
    ttl: ""
    ttl_key: ""
    max_in_flight: 64
    batching:
      count: 0
      byte_size: 0
      period: ""
      check: ""
      processors: [] # No default (optional)
    region: ""
    endpoint: ""
    credentials:
      profile: ""
      id: ""
      secret: ""
      token: ""
      from_ec2_role: false
      role: ""
      role_external_id: ""
    max_retries: 3
    backoff:
      initial_interval: 1s
      max_interval: 5s
      max_elapsed_time: 30s
The field string_columns is a map of column names to string values, where the values are function interpolated per message of a batch. This allows you to populate string columns of an item by extracting fields from the document payload or metadata, as follows:
string_columns:
  id: ${!json("id")}
  title: ${!json("body.title")}
  topic: ${!meta("kafka_topic")}
  full_content: ${!content()}
The field json_map_columns is a map of column names to JSON paths, where the dot path is extracted from each document and converted into a map value. Both an empty path and the path . are interpreted as the root of the document. This allows you to populate map columns of an item as follows:
json_map_columns:
  user: path.to.user
  whole_document: .
A column name can be empty:
json_map_columns:
  "": .
In this case the top-level document fields are written at the root of the item, potentially overwriting previously defined column values. If a path is not found within a document, the column is not populated.
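As a minimal sketch, the two mappings can be combined in a single output; the table name and document paths below are placeholders rather than part of any real schema:
output:
  aws_dynamodb:
    table: my_table # placeholder table name
    string_columns:
      id: ${!json("id")}
      topic: ${!meta("kafka_topic")}
    json_map_columns:
      user: path.to.user
      whole_document: .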
Credentials
By default Redpanda Connect will use a shared credentials file when connecting to AWS services. It’s also possible to set them explicitly at the component level, allowing you to transfer data across accounts. You can find out more in Amazon Web Services.
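For example, here is a sketch of component-level credentials assuming a cross-account role; the table name, region, role ARN, external ID, and environment variable names are illustrative placeholders:
output:
  aws_dynamodb:
    table: my_table # placeholder
    region: us-east-1
    credentials:
      id: "${AWS_ACCESS_KEY_ID}"         # resolved from the environment
      secret: "${AWS_SECRET_ACCESS_KEY}" # resolved from the environment
      role: "arn:aws:iam::123456789012:role/example-writer" # placeholder role ARN
      role_external_id: "example-external-id"               # placeholder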
Performance
This output benefits from sending multiple messages in flight in parallel for improved performance. You can tune the maximum number of in-flight messages (or message batches) with the field max_in_flight.
This output benefits from sending messages as a batch for improved performance. Batches can be formed at both the input and output level. You can find out more in this doc.
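As an illustrative sketch (not tuning advice), both settings can be raised together; a batch count of 25 mirrors the DynamoDB BatchWriteItem limit of 25 items per request:
output:
  aws_dynamodb:
    table: my_table     # placeholder
    max_in_flight: 128  # illustrative value
    batching:
      count: 25
      period: 1s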
Fields
string_columns
A map of column keys to string values to store. This field supports interpolation functions.
Type: object
Default: {}
# Examples
string_columns:
  full_content: ${!content()}
  id: ${!json("id")}
  title: ${!json("body.title")}
  topic: ${!meta("kafka_topic")}
json_map_columns
A map of column keys to field paths pointing to value data within messages.
Type: object
Default: {}
# Examples
json_map_columns:
  user: path.to.user
  whole_document: .

json_map_columns:
  "": .
ttl
An optional TTL to set for items, calculated from the moment the message is sent.
Type: string
Default: ""
max_in_flight
The maximum number of messages to have in flight at a given time. Increase this to improve throughput.
Type: int
Default: 64
batching
Allows you to configure a batching policy.
Type: object
# Examples
batching:
  byte_size: 5000
  count: 0
  period: 1s

batching:
  count: 10
  period: 1s

batching:
  check: this.contains("END BATCH")
  count: 0
  period: 1m
batching.count
A number of messages at which the batch should be flushed. If set to 0, count-based batching is disabled.
Type: int
Default: 0
batching.byte_size
An amount of bytes at which the batch should be flushed. If set to 0, size-based batching is disabled.
Type: int
Default: 0
batching.period
A period in which an incomplete batch should be flushed regardless of its size.
Type: string
Default: ""
# Examples
period: 1s
period: 1m
period: 500ms
batching.check
A Bloblang query that should return a boolean value indicating whether a message should end a batch.
Type: string
Default: ""
# Examples
check: this.type == "end_of_transaction"
batching.processors
A list of processors to apply to a batch as it is flushed. This allows you to aggregate and archive the batch however you see fit. Please note that all resulting messages are flushed as a single batch, therefore splitting the batch into smaller batches using these processors is a no-op.
Type: array
# Examples
processors:
  - archive:
      format: concatenate

processors:
  - archive:
      format: lines

processors:
  - archive:
      format: json_array
credentials
Optional manual configuration of AWS credentials to use. More information can be found in Amazon Web Services.
Type: object
credentials.secret
The secret for the credentials being used.
This field contains sensitive information that usually shouldn’t be added to a configuration directly. For more information, see Manage Secrets before adding it to your configuration.
Type: string
Default: ""
credentials.token
The token for the credentials being used, required when using short term credentials.
Type: string
Default: ""
credentials.from_ec2_role
Use the credentials of a host EC2 machine configured to assume an IAM role associated with the instance.
Type: bool
Default: false
credentials.role_external_id
An external ID to provide when assuming a role.
Type: string
Default: ""
max_retries
The maximum number of retries before giving up on the request. If set to zero there is no discrete limit.
Type: int
Default: 3
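A sketch combining max_retries with the backoff fields shown in the full config above; the values are illustrative only:
# Example (illustrative values)
max_retries: 5
backoff:
  initial_interval: 1s
  max_interval: 10s
  max_elapsed_time: 1m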