Streaming API as a source

The Streaming API allows you to send usage events to Zuora in real time through a simple REST API. Instead of uploading batch files, your applications push usage directly to Zuora, where it is validated and processed through Zuora Mediation pipelines.

Use the Streaming API when:

  • You need real-time or near real-time ingestion of usage.

  • Your integration prefers API-based interactions over file-based uploads.

  • Usage events arrive continuously throughout the day rather than in periodic batches.

  • You want immediate mediation processing and enrichment of incoming events.

When these conditions do not apply, you can continue to use file-based sources such as Amazon S3-based ingestion.

How the Streaming API works

  1. Your application sends usage events to Zuora using a REST API.

  2. Zuora validates each event against the Event Definition (schema) associated with your meter.

  3. Validated records are processed in real time through your meter pipeline, including aggregation, subscription lookup, and any configured transformations.

  4. Final usage is written to Zuora Usage or other configured sinks.

Each usage event must conform to the schema configured for the meter. If the payload does not match the schema, the event is rejected during validation.

A 200 OK response only confirms that the API successfully received the request and that the request passed basic API-level validation. It does not mean that the record passed schema validation, the record moved successfully through the entire processing pipeline, or that it was written to the sink. Additional validation and processing occur after the API response.
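Because schema validation happens after the API responds, it can help to pre-check payloads on the client side before sending them. The sketch below is illustrative only: it assumes the field names and types from the example payload later on this page (CustomerId, UsageIdentifier, UsageDate, Quantity); substitute the Event Definition configured for your own meter.

```python
# Illustrative client-side pre-validation before calling the Streaming API.
# The fields and types below are assumptions taken from the example payload
# on this page -- use your own meter's Event Definition (schema) instead.
REQUIRED_FIELDS = {
    "CustomerId": str,
    "UsageIdentifier": str,
    "UsageDate": str,        # ISO 8601 timestamp string
    "Quantity": (int, float),
}

def validate_event(event: dict) -> list:
    """Return a list of problems; an empty list means the event looks valid."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in event:
            problems.append(f"missing required field: {field}")
        elif not isinstance(event[field], expected_type):
            problems.append(
                f"wrong type for {field}: {type(event[field]).__name__}")
    return problems
```

Pre-checking like this catches obvious mistakes before the call, but it does not replace Zuora's own schema validation, which still runs after the API response.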

Prerequisites

To access and configure Streaming API sources, you must have one of the following platform roles:

  • Standard User role.

  • API User role with the Run Meters or Configure Meters and Events permission.

In addition:

  • The Mediation and Usage Mediation features must be enabled for your tenant so that streaming usage events can be ingested and processed by meters.

  • You must pass a valid bearer token in the Authorization header when calling the Streaming API.

Meter volume limits

The Streaming API enforces the following limits per environment:

API Sandbox:

  • Payload size: 1 MB

  • Rows per API call: 1,000

  • API calls per minute: 600

  • Format: MultiJSON, Single JSON

Zuora Developer Sandbox:

  • Payload size: 2 MB

  • Rows per API call: 5,000

  • API calls per minute: 5,000

  • Format: MultiJSON, Single JSON

Zuora Central Sandbox:

  • Payload size: 4 MB

  • Rows per API call: 10,000

  • API calls per minute: 50,000

  • Format: MultiJSON, Single JSON

For your Production environment, follow the guidelines listed for Zuora Central Sandbox.
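One way to stay within these limits is to split a large set of records into batches before sending. The sketch below is a minimal example using the API Sandbox limits (1,000 rows and roughly 1 MB per call); the exact limits and the byte-accounting here are simplifications, so adjust them for your environment.

```python
import json

def chunk_records(records, max_rows=1000, max_bytes=1_000_000):
    """Split usage records into batches that stay within a rows-per-call
    and approximate payload-size limit (defaults here reflect the API
    Sandbox limits; other environments allow more -- see the table)."""
    batch, size = [], 2  # 2 bytes for the enclosing "[]"
    for record in records:
        encoded = len(json.dumps(record).encode("utf-8")) + 1  # +1 for ","
        if batch and (len(batch) >= max_rows or size + encoded > max_bytes):
            yield batch
            batch, size = [], 2
        batch.append(record)
        size += encoded
    if batch:
        yield batch
```

Each yielded batch can then be sent as one API call, which also follows the "group records for efficiency" guidance below.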

Best practices

To get reliable, efficient ingestion through the Streaming API, follow these guidelines:

  • Use a well-designed schema:

    • Clearly define required fields.

    • Use correct data types (string, number, datetime).

  • Send accurate event timestamps:

    • Use the event timestamp field (for example, UsageDate) to ensure correct aggregation and rating.

  • Group records for efficiency:

    • Combine multiple usage records in a single API call within documented limits to reduce overhead.

  • Handle validation errors early:

    • If your payload does not match the schema, Zuora rejects the request.

    • Use validation feedback and the audit trail UI to identify and correct schema issues.

For streaming meters, data may take a short time to be fully processed and available for downstream use. If one meter depends on the output of another, Zuora recommends introducing a brief delay before triggering the dependent meter to ensure that all data has been processed. As a general guideline, wait approximately 4-5 minutes, though the exact time varies with data volume and system activity. This dependency typically arises when:

  • Meter A processes raw events (for example, raw API calls or usage records).

  • Meter A writes the processed results to an intermediate event store.

  • Meter B reads from that intermediate dataset to perform further calculations (such as aggregation, tiering, pricing logic, or enrichment).

In this scenario, start Meter A first and allow a 4-5 minute buffer before starting Meter B to help ensure that Meter B works with complete and up-to-date data.
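The Meter A / Meter B sequencing above can be sketched as a small scheduling helper. The trigger callables are hypothetical placeholders for however you start your meters; the sleep function is injectable so the delay can be tuned (or faked in tests).

```python
import time

def run_dependent_meters(trigger_meter_a, trigger_meter_b,
                         buffer_seconds=300, sleep=time.sleep):
    """Run Meter A, wait the recommended 4-5 minute buffer (300 s by
    default), then run Meter B so it sees Meter A's fully processed
    output. trigger_meter_a and trigger_meter_b are your own callables
    (hypothetical -- e.g. functions that call your meter-run API)."""
    trigger_meter_a()
    sleep(buffer_seconds)  # adjust for your data volume and system activity
    trigger_meter_b()
```

This is only a sketch of the ordering; in production you might replace the fixed sleep with a check that Meter A's output has actually landed in the intermediate event store.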

API format

Endpoint: POST /usage/bulk/{id}

Request body format: Send an array of usage records, each following the event schema defined in your meter.

Example:

[
  {
    "CustomerId": "A00000001",
    "UsageIdentifier": "API-Calls",
    "UsageDate": "2024-01-15T10:30:00-0700",
    "Quantity": 150
  },
  {
    "CustomerId": "A00000001",
    "UsageIdentifier": "Storage-GB",
    "UsageDate": "2024-01-15T10:30:00-0700",
    "Quantity": 25.5
  }
]
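The request above can be sketched in Python. This is a minimal sketch, not a definitive client: it assumes the third-party requests library as the HTTP client, and the base URL, source ID, and bearer token are placeholders for your own environment's values.

```python
def send_usage(records, base_url, source_id, token, post=None):
    """POST an array of usage records to the Streaming API's
    POST /usage/bulk/{id} endpoint. base_url, source_id, and token
    are placeholders -- supply your environment's values."""
    if post is None:          # default to the requests library
        import requests       # third-party: pip install requests
        post = requests.post
    response = post(
        f"{base_url}/usage/bulk/{source_id}",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        json=records,
    )
    # A 200 response only confirms receipt and basic API-level validation;
    # schema validation and pipeline processing happen afterwards.
    response.raise_for_status()
    return response

# Example call (all values are placeholders):
# send_usage(
#     [{"CustomerId": "A00000001", "UsageIdentifier": "API-Calls",
#       "UsageDate": "2024-01-15T10:30:00-0700", "Quantity": 150}],
#     "https://rest.sandbox.na.zuora.com", "your-source-id", "your-token")
```

Remember that a successful call here does not guarantee the records reached the sink; check the audit trail UI for validation and processing results.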

Troubleshooting

Issue: Records not displayed in the sink after sending events

Cause: The meter is still initializing or paused.

Solution:

  • Confirm that the meter is running (not paused).

  • Ensure that events are being sent to the correct source.

Issue: Schema validation errors (events are rejected or not processed because of validation failures)

Cause:

  • Incorrect field names.

  • Wrong data types.

  • Missing required fields.

  • Payload structure does not match the configured schema.

Solution:

  • Compare your event payload with the meter schema.

  • Validate field names, types, and required attributes.

  • Ensure that your payload structure exactly matches the schema definition.

Issue: Multiple source error (an error indicating that multiple Streaming API sources are configured)

Cause: The meter is configured with more than one Streaming API source, which is not supported.

Solution:

  • Ensure that only one Streaming API source is configured per meter.

  • Remove any additional sources and run the meter.

Sending events when a meter is paused

When a meter is paused, events are still ingested into the source system (the Streaming API in this case). Pausing a meter does not stop ingestion. However, the paused meter does not process those events at that time; instead, they accumulate as a backlog.

When the meter is resumed, it continues processing from the last committed position or offset and will process the accumulated backlog. This happens as long as the events are still within the source’s retention and volume limits.

Pausing a meter delays processing; it does not drop or reject events by itself. Events are only lost if the backlog exceeds the source's retention period (7 days) or capacity limits before the meter is resumed.