Axiom applies certain limits and requirements to guarantee good service across the platform. Some of these limits depend on your pricing plan, and others apply system-wide. This reference article explains all limits and requirements that Axiom applies. Limits prevent issues that could arise from ingesting excessively large events or overly complex data structures; they help maintain system performance, enable effective data processing, and manage resources.
## Pricing-based limits
The table below summarizes the limits applied to each pricing plan. For more details on pricing and contact information, see the Axiom pricing page.

| | Personal | Axiom Cloud |
|---|---|---|
| Always Free storage | 25 GB | 100 GB |
| Always Free data loading | 500 GB / month | 1,000 GB / month |
| Always Free query compute | 10 GB-hours / month | 100 GB-hours / month |
| Maximum data loading | 500 GB / month | – |
| Maximum data retention | 30 days | Custom |
| Datasets | 2 | 100 * |
| Fields per dataset | 256 | 1,024 * |
| Users | 1 | 1,000 * |
| Monitors | 3 | 500 * |
| Notifiers | Email, Discord | All supported |
| Supported edge deployments | US | US |
## Restrictions on datasets and fields
Axiom restricts the number of datasets and the number of fields per dataset. The limits depend on your pricing plan, as shown in the table above. If you ingest a new event that would exceed the allowed number of fields in a dataset, Axiom returns an error and rejects the event. To prevent this error, ensure that the number of fields in your events is within the allowed limits. To reduce the number of fields in a dataset, use one of the following approaches:

- Trim the dataset and vacuum its fields.
- Use map fields.
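As a sketch of the second approach, nested attributes can be grouped under a single map field so that they count as one field toward the limit rather than many. The field names and structure below are illustrative, not prescribed by Axiom:

```python
# Flattened event: each distinct top-level key counts toward the
# dataset's field limit.
flat_event = {
    "http.method": "GET",
    "http.status": 200,
    "http.path": "/healthz",
}

# The same data grouped under one map field ("attributes" is an
# illustrative name): one top-level field instead of three.
map_event = {
    "attributes": {
        "method": "GET",
        "status": 200,
        "path": "/healthz",
    }
}

print(len(flat_event), len(map_event))  # → 3 1
```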
## System-wide limits
The following limits apply to all accounts, irrespective of the pricing plan.

### Limits on ingested data

The table below summarizes the limits Axiom applies to each data ingest. These limits are independent of your pricing plan.

| Limit | Value |
|---|---|
| Maximum field size | 1 MB |
| Maximum events in a batch | 10,000 |
| Maximum field name length | 200 bytes |
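Because a single ingest request accepts at most 10,000 events, a client can split larger workloads into conforming batches before sending them. A minimal sketch (the helper name is illustrative):

```python
from typing import Iterator

MAX_EVENTS_PER_BATCH = 10_000  # system-wide limit per ingest request

def batches(events: list[dict], size: int = MAX_EVENTS_PER_BATCH) -> Iterator[list[dict]]:
    """Yield successive batches of at most `size` events."""
    for start in range(0, len(events), size):
        yield events[start:start + size]

# 25,000 events split into batches of 10,000, 10,000, and 5,000.
events = [{"message": f"event {i}"} for i in range(25_000)]
sizes = [len(batch) for batch in batches(events)]
print(sizes)  # → [10000, 10000, 5000]
```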
During ingestion, Axiom transforms events as follows:

- Replaces strings that are too long with `<invalid string: too long>`.
- Replaces binary data with `<invalid data>`.
- Truncates maps and slices that nest deeper than 100 levels and replaces them with `nil` at the cut-off level.
- Converts the following float values to `nil`:
  - `NaN`
  - `+Infty`
  - `-Infty`
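The float and nesting transformations above can be sketched as a recursive sanitizer. This is a simplified illustration, not Axiom's actual implementation; Python's `None` stands in for `nil`, and the string-length and binary rules are omitted for brevity:

```python
import math

MAX_DEPTH = 100  # maps/slices nested deeper than this are cut off

def sanitize(value, depth=0):
    """Apply the ingest transformations described above."""
    if isinstance(value, float) and (math.isnan(value) or math.isinf(value)):
        return None  # NaN, +Infty, and -Infty become nil
    if isinstance(value, (dict, list)):
        if depth >= MAX_DEPTH:
            return None  # too deeply nested: truncated to nil
        if isinstance(value, dict):
            return {k: sanitize(v, depth + 1) for k, v in value.items()}
        return [sanitize(v, depth + 1) for v in value]
    return value

print(sanitize({"latency": float("nan"), "tags": ["ok", float("inf")]}))
# → {'latency': None, 'tags': ['ok', None]}
```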
## Special fields
Axiom automatically creates the following two fields for each new dataset:

- `_time` is the timestamp of the event. If the data you ingest doesn't have a `_time` field, Axiom assigns the time of the data ingest to the events. If you ingest data using the Ingest data API endpoint, you can specify the timestamp field with the `timestamp-field` parameter.
- `_sysTime` is the time when you ingested the data.

In most cases, use `_time` to define the timestamp of events. In rare cases, if you experience clock skews on your event-producing systems, `_sysTime` can be useful.
## Reserved field names
Axiom reserves the following field names for internal use:

- `_blockInfo`
- `_cursor`
- `_rowID`
- `_source`
- `_sysTime`

If you try to ingest a field with a reserved name, Axiom prefixes it with `_user`, resulting in `_user_FIELDNAME`. For example, if you try to ingest the field `_sysTime`, Axiom renames it to `_user_sysTime`.

In general, avoid ingesting field names that start with `_`.
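The renaming behavior can be sketched as follows. This is an illustration of the rule, not Axiom's implementation:

```python
# Field names Axiom reserves for internal use.
RESERVED = {"_blockInfo", "_cursor", "_rowID", "_source", "_sysTime"}

def rename_reserved(event: dict) -> dict:
    """Prefix reserved field names with `_user`, as described above."""
    return {
        (f"_user{name}" if name in RESERVED else name): value
        for name, value in event.items()
    }

print(rename_reserved({"_sysTime": "2024-01-01T00:00:00Z", "message": "hi"}))
# → {'_user_sysTime': '2024-01-01T00:00:00Z', 'message': 'hi'}
```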
## Requirements for timestamp field
The most important field requirement concerns the timestamp. All events stored in Axiom must have a `_time` timestamp field. If the data you ingest doesn't have a `_time` field, Axiom assigns the time of the data ingest to the events. To specify the timestamp yourself, include a `_time` field in the ingested data.

If you include the `_time` field in the ingested data, follow these requirements:

- Timestamps are specified in the `_time` field.
- The `_time` field contains timestamps in a valid time format. Axiom accepts many date strings and timestamps without knowing the format in advance, including Unix Epoch, RFC3339, and ISO 8601.
- The `_time` field is encoded in UTF-8.
- The `_time` field isn't used for any other purpose.
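For illustration, here are events carrying `_time` values in two of the accepted formats, generated with Python's standard library (the `message` field is illustrative):

```python
from datetime import datetime, timezone

now = datetime(2024, 5, 1, 12, 30, 0, tzinfo=timezone.utc)

# RFC3339 / ISO 8601 string timestamp.
event_rfc3339 = {"_time": now.isoformat(), "message": "service started"}

# Unix epoch timestamp in seconds.
event_epoch = {"_time": int(now.timestamp()), "message": "service started"}

print(event_rfc3339["_time"])  # → 2024-05-01T12:30:00+00:00
print(event_epoch["_time"])    # → 1714566600
```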
## Requirements for log level fields
The Stream and Query tabs allow you to easily detect warnings and errors in your logs by highlighting the severity of log entries in different colors. As a prerequisite, specify the log level in the data you send to Axiom.

For OpenTelemetry logs, specify the log level in the following fields:

- `severity`
- `severityNumber`
- `severityText`

Axiom also detects the log level in the following fields:

- `record.error`
- `record.level`
- `record.severity`
- `type`
- `level`
- `@level`
- `severity`
- `@severity`
- `status.code`