feat: Span Streaming (WIP)#19119

Draft
Lms24 wants to merge 9 commits into develop from lms/feat-span-first

Conversation

@Lms24 changed the title from "feat(core): Add span v2 and envelope type definitions (#19100)" to "feat: Span Streaming (WIP)" on Feb 2, 2026

github-actions bot commented Feb 2, 2026

Codecov Results 📊


Generated by Codecov Action


github-actions bot commented Feb 2, 2026

node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.

Scenario Requests/s % of Baseline Prev. Requests/s Change %
GET Baseline 9,168 - 9,167 +0%
GET With Sentry 1,737 19% 1,740 -0%
GET With Sentry (error only) 6,175 67% 6,123 +1%
POST Baseline 1,212 - 1,196 +1%
POST With Sentry 601 50% 600 +0%
POST With Sentry (error only) 1,051 87% 1,068 -2%
MYSQL Baseline 3,341 - 3,344 -0%
MYSQL With Sentry 535 16% 510 +5%
MYSQL With Sentry (error only) 2,701 81% 2,729 -1%

View base workflow run

@Lms24 Lms24 self-assigned this Feb 2, 2026
Lms24 added a commit that referenced this pull request Feb 9, 2026
This PR adds a simple span buffer implementation to be used for
buffering streamed spans.

Behaviour:
- buckets incoming spans by `traceId`, as we must not mix up spans of
different traces in one envelope
- flushes the entire buffer every 5s by default
- flushes the specific trace bucket if the max span limit (1000) is
reached, since Relay accepts at most 1,000 spans per envelope
- computes the DSC when flushing the first span of a trace. This is the
latest point at which we can do it, because once we have flushed, the DSC
must be frozen for Dynamic Sampling consistency
- debounces the flush interval whenever we flush
- flushes the entire buffer if `Sentry.flush()` is called
- shuts down the interval-based flushing when `Sentry.close()` is called
- [implicit] Client report generation for dropped envelopes is handled
in the transport

Methods:
- `add` accepts a new span to be enqueued into the buffer
- `drain` flushes the entire buffer
- `flush(traceId)` flushes a specific traceId bucket. This can be used
by e.g. the browser span streaming implementation to flush out the trace
of a segment span directly once it ends.

Options:
- `maxSpanLimit` - allows configuring a custom span limit with
0 < maxSpanLimit < 1000. Useful for testing, but we could also expose
this to users if we see a need
- `flushInterval` - allows configuring a flush interval > 0

Limitations/edge cases:
- No maximum limit of concurrently buffered traces. I'd tend to accept
this for now and see where this leads us in terms of memory pressure but
at the end of the day, the interval-based flushing, in combination with
our promise buffer _should_ avoid an ever-growing map of trace buckets.
Happy to change this if reviewers have strong opinions or I'm missing
something important!
- There's no priority based scheduling relative to other telemetry
items. Just like with our other log and metric buffers.
- since `Map` is [insertion order
preserving](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map#description),
we apply a FIFO strategy when `drain`ing the trace buckets. This is in
line with our [develop
spec](https://develop.sentry.dev/sdk/telemetry/telemetry-processor/backend-telemetry-processor/#:~:text=The%20span%20buffer,in%20the%20buffer.)
for the telemetry processor, but it might lead to cases where new traces
are dropped by the promise buffer if a lot of concurrently running traces
are flushed. I think that's a fine trade-off.

ref #19119

github-actions bot commented Feb 9, 2026

size-limit report 📦

⚠️ Warning: Base artifact is not the latest one, because the latest workflow run is not done yet. This may lead to incorrect results. Try to re-run all tests to get up to date results.

Path Size % Change Change
@sentry/browser 25.71 kB +0.34% +87 B 🔺
@sentry/browser - with treeshaking flags 24.21 kB +0.32% +76 B 🔺
@sentry/browser (incl. Tracing) 42.72 kB +0.68% +285 B 🔺
@sentry/browser (incl. Tracing, Profiling) 47.4 kB +0.65% +306 B 🔺
@sentry/browser (incl. Tracing, Replay) 81.56 kB +0.39% +309 B 🔺
@sentry/browser (incl. Tracing, Replay) - with treeshaking flags 71.17 kB +0.42% +295 B 🔺
@sentry/browser (incl. Tracing, Replay with Canvas) 86.24 kB +0.34% +290 B 🔺
@sentry/browser (incl. Tracing, Replay, Feedback) 98.48 kB +0.29% +277 B 🔺
@sentry/browser (incl. Feedback) 42.51 kB +0.17% +69 B 🔺
@sentry/browser (incl. sendFeedback) 30.38 kB +0.28% +83 B 🔺
@sentry/browser (incl. FeedbackAsync) 35.43 kB +0.22% +77 B 🔺
@sentry/browser (incl. Metrics) 26.87 kB +0.28% +75 B 🔺
@sentry/browser (incl. Logs) 27.02 kB +0.29% +78 B 🔺
@sentry/browser (incl. Metrics & Logs) 27.69 kB +0.29% +80 B 🔺
@sentry/react 27.47 kB +0.33% +88 B 🔺
@sentry/react (incl. Tracing) 45.07 kB +0.67% +297 B 🔺
@sentry/vue 30.37 kB +0.98% +293 B 🔺
@sentry/vue (incl. Tracing) 44.59 kB +0.67% +294 B 🔺
@sentry/svelte 25.74 kB +0.32% +82 B 🔺
CDN Bundle 28.24 kB +0.24% +65 B 🔺
CDN Bundle (incl. Tracing) 43.56 kB +0.71% +303 B 🔺
CDN Bundle (incl. Logs, Metrics) 29.07 kB +0.22% +63 B 🔺
CDN Bundle (incl. Tracing, Logs, Metrics) 44.41 kB +0.7% +305 B 🔺
CDN Bundle (incl. Replay, Logs, Metrics) 68.17 kB +0.13% +84 B 🔺
CDN Bundle (incl. Tracing, Replay) 80.4 kB +0.33% +263 B 🔺
CDN Bundle (incl. Tracing, Replay, Logs, Metrics) 81.3 kB +0.37% +297 B 🔺
CDN Bundle (incl. Tracing, Replay, Feedback) 85.94 kB +0.34% +284 B 🔺
CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics) 86.83 kB +0.35% +297 B 🔺
CDN Bundle - uncompressed 82.55 kB +0.24% +196 B 🔺
CDN Bundle (incl. Tracing) - uncompressed 128.93 kB +0.68% +863 B 🔺
CDN Bundle (incl. Logs, Metrics) - uncompressed 85.38 kB +0.24% +196 B 🔺
CDN Bundle (incl. Tracing, Logs, Metrics) - uncompressed 131.76 kB +0.66% +863 B 🔺
CDN Bundle (incl. Replay, Logs, Metrics) - uncompressed 209.05 kB +0.1% +196 B 🔺
CDN Bundle (incl. Tracing, Replay) - uncompressed 245.81 kB +0.36% +863 B 🔺
CDN Bundle (incl. Tracing, Replay, Logs, Metrics) - uncompressed 248.63 kB +0.35% +863 B 🔺
CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed 258.73 kB +0.34% +864 B 🔺
CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics) - uncompressed 261.54 kB +0.34% +864 B 🔺
@sentry/nextjs (client) 47.48 kB +0.63% +294 B 🔺
@sentry/sveltekit (client) 43.18 kB +0.67% +286 B 🔺
@sentry/node-core 52.33 kB +0.17% +85 B 🔺
@sentry/node 174.99 kB +0.17% +282 B 🔺
@sentry/node - without tracing 97.47 kB +0.09% +81 B 🔺
@sentry/aws-serverless 113.27 kB +0.07% +76 B 🔺

View base workflow run

@Lms24 force-pushed the lms/feat-span-first branch from 28441df to bad8399 on February 13, 2026 15:29
Lms24 added a commit that referenced this pull request Feb 13, 2026
@Lms24 force-pushed the lms/feat-span-first branch from bad8399 to 5a165d4 on February 16, 2026 17:38
Lms24 added a commit that referenced this pull request Feb 16, 2026
Lms24 added a commit that referenced this pull request Mar 2, 2026
@Lms24 force-pushed the lms/feat-span-first branch from 24015b1 to e5c1208 on March 4, 2026 14:20
Lms24 added a commit that referenced this pull request Mar 4, 2026
Lms24 and others added 8 commits March 6, 2026 11:18
This PR introduces span v2 types as defined in our [develop
spec](https://develop.sentry.dev/sdk/telemetry/spans/span-protocol/):

* Envelope types:
* `SpanV2Envelope`, `SpanV2EnvelopeHeaders`, `SpanContainerItem`,
`SpanContainerItemHeaders`
* Span v2 types:
* `SpanV2JSON`, the equivalent of today's `SpanJSON`. Users will interact
with spans in this format in `beforeSendSpan`. SDK integrations will use
this format in `processSpan` (and related) hooks.
* `SerializedSpan`, the final, serialized format for v2 spans, sent in
the envelope container item.

Closes #19101 (added automatically)

ref #17836
…ility utilities (#19120)

This adds the foundation for user-facing span streaming configuration:

- **`traceLifecycle` option**: New option in `ClientOptions` that
controls whether spans are sent statically (when the entire local span
tree is complete) or streamed (in batches following interval- and
action-based triggers).

Because the span JSON will look different for streamed spans vs. static
spans (i.e. our current ones), we also need some helpers for
`beforeSendSpan`, where users consume and interact with
`StreamedSpanJSON`:

- **`withStreamedSpan()` utility**: Wrapper function that marks a
`beforeSendSpan` callback as compatible with the streamed span format
(`StreamedSpanJSON`)

- **`isStreamedBeforeSendSpanCallback()` type guard**: Internal utility
to check if a callback was wrapped with `withStreamedSpan`
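A rough sketch of how such a wrapper/type-guard pair could look. The marker property and the `StreamedSpanJSON` shape here are assumptions for illustration, not the actual implementation:

```typescript
// Illustrative shapes; the real StreamedSpanJSON type has many more fields.
type StreamedSpanJSON = { trace_id: string; span_id: string; name: string };

type StreamedBeforeSendSpan = ((span: StreamedSpanJSON) => StreamedSpanJSON) & {
  __streamedSpanCallback?: true; // hypothetical marker property
};

// Marks a beforeSendSpan callback as compatible with the streamed span format.
function withStreamedSpan(
  callback: (span: StreamedSpanJSON) => StreamedSpanJSON,
): StreamedBeforeSendSpan {
  const wrapped = callback as StreamedBeforeSendSpan;
  wrapped.__streamedSpanCallback = true;
  return wrapped;
}

// Internal type guard: was this callback wrapped with withStreamedSpan?
function isStreamedBeforeSendSpanCallback(
  callback: unknown,
): callback is StreamedBeforeSendSpan {
  return (
    typeof callback === 'function' &&
    (callback as StreamedBeforeSendSpan).__streamedSpanCallback === true
  );
}
```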
Adds a utility to create a span v2 envelope from a `SerializedSpan`
array + tests.

Note: I think here, the "v2" naming makes more sense than the
`StreamSpan` pattern we use for user-facing functionality. This function
should never be called by users, and the envelope item is of type `span`
with content type `span.v2+json`

ref #17836
This PR adds span JSON conversion and serialization helpers for span
streaming:

* `spanToStreamedSpanJSON`: Converts a `Span` instance to a JSON object
used as an intermediate representation, as outlined in
#19100
* Adds `SentrySpan::getStreamedSpanJSON` method to convert our own spans
  * Directly converts any OTel spans
  * This is analogous to how `spanToJSON` works today.
* `spanJsonToSerializedSpan`: Converts a `StreamedSpanJSON` into the
final `SerializedSpan` to be sent to Sentry.

This PR also adds unit tests for both helpers.

ref #17836

---------

Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Jan Peer Stöcklmair <jan.peer@sentry.io>
This PR adds the `captureSpan` pipeline, which takes a `Span` instance,
processes it and ultimately returns a `SerializedStreamedSpan` which can
then be enqueued into the span buffer.

ref #17836
This PR adds the final big building block for span streaming
functionality in the browser SDK: `spanStreamingIntegration`.

This integration:
- enables `traceLifecycle: 'stream'` if not already set by users. This
allows us to avoid the double-opt-in problem we usually have in browser
SDKs, because we want to keep integrations tree-shakeable but also
support the runtime-agnostic `traceLifecycle` option.
- to do this properly, I decided to introduce a new integration hook:
`beforeSetup`. This allows us to safely modify client options before
other integrations read them. We'll need this because
`browserTracingIntegration` needs to check for span streaming later on.
Let me know what you think!
- validates that `beforeSendSpan` is compatible with span streaming. If
not, it falls back to static tracing (transactions).
- listens to a new `afterSpanEnd` hook. Once called, it will capture the
span and hand it off to the span buffer.
- listens to a new `afterSegmentSpanEnd` hook. Once called, it will flush
the trace from the buffer to ensure we flush out the trace as soon as
possible. In browser, it's more likely that users refresh or close the
tab/window before our buffer's internal flush interval triggers. We
don't _have_ to do this but I figured it would be a good trigger point.

While "final building block" sounds nice, there's still a lot of stuff
to take care of in the browser. But with this in place we can also start
integration-testing the browser SDKs.

ref #17836

---------

Co-authored-by: Jan Peer Stöcklmair <jan.peer@sentry.io>
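The integration's wiring could look roughly like this. The hook and option names come from this PR, but the client interface and the buffer shape below are invented for illustration:

```typescript
// Hypothetical client surface; the real Client interface is much larger.
interface StreamingClient {
  getOptions(): { traceLifecycle?: 'static' | 'stream' };
  on(
    hook: 'afterSpanEnd' | 'afterSegmentSpanEnd',
    cb: (span: { traceId: string }) => void,
  ): void;
}

interface TraceBuffer {
  add(span: { traceId: string }): void;
  flush(traceId: string): void;
}

function spanStreamingIntegration(buffer: TraceBuffer) {
  return {
    name: 'SpanStreaming',
    // beforeSetup runs before other integrations read the client options,
    // so we can default traceLifecycle to 'stream' safely.
    beforeSetup(client: StreamingClient): void {
      const options = client.getOptions();
      options.traceLifecycle ??= 'stream';
    },
    setup(client: StreamingClient): void {
      // Every ended span goes into the buffer; a finished segment span
      // flushes its whole trace immediately.
      client.on('afterSpanEnd', span => buffer.add(span));
      client.on('afterSegmentSpanEnd', span => buffer.flush(span.traceId));
    },
  };
}
```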
Adds weight-based flushing and span size estimation to the span buffer.

Behaviour:
- tracks weight independently per trace
- weight estimation follows the same strategy we use for logs and
metrics. I optimized the calculation, adding fixed sizes for as many
fields as possible. Only span name, attributes and links are computed
dynamically, with the same assumptions and considerations as in logs and
metrics.
- My tests show that the size estimation lands within roughly a factor of
0.8 to 1.2 of the real sizes, depending on the data on the spans (no,
few, or many attributes and links; primitive or array values; etc.)
- For now, the limit is set to 5MB which is half of the 10MB Relay
accepts for span envelopes.
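A rough illustration of that estimation strategy follows. All constants, names, and shapes here are illustrative assumptions, not the SDK's actual values:

```typescript
// Fixed byte estimate for static span fields (ids, timestamps, status,
// JSON punctuation, ...). The value 200 is a made-up placeholder.
const FIXED_SPAN_OVERHEAD = 200;

type SpanAttributes = Record<
  string,
  string | number | boolean | Array<string | number | boolean>
>;

interface EstimatableSpan {
  name: string;
  attributes?: SpanAttributes;
  links?: unknown[];
}

// Only name, attributes, and links are measured dynamically; everything
// else is covered by the fixed overhead.
function estimateSpanWeight(span: EstimatableSpan): number {
  let weight = FIXED_SPAN_OVERHEAD + span.name.length;
  for (const [key, value] of Object.entries(span.attributes ?? {})) {
    // Approximate serialized size: key plus value, both as JSON text.
    weight += key.length + JSON.stringify(value).length;
  }
  // Links are rare, so fully serializing them is acceptable.
  weight += span.links ? JSON.stringify(span.links).length : 0;
  return weight;
}

// 5 MB per trace: half of the 10 MB Relay accepts for span envelopes.
const MAX_TRACE_WEIGHT = 5 * 1024 * 1024;

function shouldFlushForWeight(currentWeight: number, nextSpanWeight: number): boolean {
  return currentWeight + nextSpanWeight > MAX_TRACE_WEIGHT;
}
```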
@Lms24 force-pushed the lms/feat-span-first branch from e5c1208 to 12c9b29 on March 6, 2026 10:18
This PR adds browser integration tests for span streaming:

- Added test helpers:
  - `waitForStreamedSpan`: Returns a promise of a single matching span
- `waitForStreamedSpans`: Returns a promise of all spans in an array
whenever the callback returns true
- `waitForStreamedSpanEnvelope`: Returns an entire streamed span (v2)
envelope (including headers)
- `observeStreamedSpan`: Can be used to observe sent span envelopes
without blocking the test if no envelopes are sent (good for testing
that spans are _not_ sent)
- `getSpanOp`: Small helper to easily get the op of a span, which we
almost always need for the `waitFor*` function callbacks

Added 50+ tests, mostly converted from transaction integration tests
around spans from `browserTracingIntegration`:
- tests asserting the entire span v2 envelope payloads of manually
started, pageload and navigation span trees
- tests for trace linking and trace lifetime
- tests for spans coming from browserTracingIntegration (fetch, xhr,
long animation frame, long tasks)

Also, this PR fixes two bugs discovered through tests:
- negatively sampled spans were still sent (because non-recording spans
go through the same span life cycle)
- cancelled spans received status `error` instead of `ok`. We want them
to have status `ok` but an attribute detailing the cancellation reason.

Lastly, I discovered a problem with timing data on fetch and XHR spans.
Will try to fix as a follow-up. Tracked in #19613

ref #17836
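A minimal sketch of how a `waitFor*`-style helper can be built. The span shape and the subscription mechanism are assumptions for illustration, not the actual test-harness API:

```typescript
type StreamedSpan = { op?: string; name: string };

// Resolves with the first observed span for which the predicate returns true.
// `onSpan` is a stand-in for whatever envelope subscription the test runner
// provides; it registers a callback invoked once per received span.
function waitForStreamedSpan(
  onSpan: (cb: (span: StreamedSpan) => void) => void,
  predicate: (span: StreamedSpan) => boolean,
): Promise<StreamedSpan> {
  return new Promise(resolve => {
    onSpan(span => {
      if (predicate(span)) {
        resolve(span);
      }
    });
  });
}
```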

Development

Successfully merging this pull request may close these issues.

Span Streaming Implementation
