BigQuery Storage Write API: "context deadline exceeded" only on low-frequency table


Problem

I'm using the BigQuery Storage Write API (Go managedwriter package) to upload data to three tables with very different ingestion rates:

| Table | Frequency                      | Record size |
|-------|--------------------------------|-------------|
| A     | ~10 records/sec                | several KB  |
| B     | ~10 records/min                | small       |
| C     | ~100 records/hour (intermittent) | small     |

Only Table C intermittently fails with `context deadline exceeded`, while Tables A and B work fine. Table C is actually the smallest table with the simplest schema.

Current Code Structure

For every record, I call AppendRows and then GetResult synchronously, with a 3-second timeout on each step:

```go
// Per-record upload
ctx, cancel := context.WithTimeout(parentCtx, 3*time.Second)
defer cancel()

result, err := stream.AppendRows(ctx, req)
if err != nil {
	return err
}

ctx2, cancel2 := context.WithTimeout(parentCtx, 3*time.Second)
defer cancel2()

// GetResult returns (offset, error); only the error matters here.
if _, err := result.GetResult(ctx2); err != nil {
	return err
}
```

Each failed upload is retried up to 3 times, but every retry fails with the same `context deadline exceeded` error.

What I've ruled out

Table size: Table C is the smallest

Batch size: Table C has the smallest batch size

Partitioning: No partition issues

Idle reconnect: Even after long idle periods, uploads often succeed; the failures are truly intermittent

Schema complexity: Very simple schema (STRING, TIMESTAMP, nested RECORD with REPEATED)

Questions

Why would `context deadline exceeded` occur only on the lowest-frequency table, while the high-throughput table (Table A, several KB per record) works fine?

Could the 3-second timeout be too tight for intermittent/low-frequency writes, even for small records? Does BigQuery Storage API have higher latency for streams that are used infrequently?

Is there any known behavior where low-frequency write streams experience higher latency compared to high-frequency ones?

Environment

Language: Go

Package: cloud.google.com/go/bigquery/storage/managedwriter

Stream type: Default stream
