Here's what nobody tells you about building web components: you can follow all the best practices, optimize your Shadow DOM, and keep your bundle sizes tiny, but if you're not measuring performance, you're flying blind. Sure, your components perform well on your beefy MacBook Pro, but what about on a mid-range Android device over a spotty 3G connection?
The good news? There's a browser API specifically designed for this. The User Timing API lets you drop performance markers directly into your component lifecycle and see exactly where time is being spent. No third-party libraries, no complex setup, just widely available native browser capabilities that integrate beautifully with dev tools and Lighthouse.
This guide will walk you through instrumenting your web components with performance tracking that actually matters: measurements you can act on, with real insight into the user experience.
Performance isn't just about making things fast; it's about understanding where your components spend their time. These insights help you make informed decisions about where optimization effort will pay off. Did that fancy animation you added slow down initial render? By how much? Is your component's first paint happening fast enough that users don't notice a delay? You won't know unless you track the data.
The web can be unforgiving. Users expect instant responses with limited attention to spare; every millisecond counts. Performance marks show up in your dev tools, giving you visibility into what's actually happening when your components hit the DOM.
Understanding the User Timing API
Before we dive into implementation, let's talk about what the User Timing API actually does. It provides high-precision timestamps that are part of the browser's performance timeline, and it does this through two simple concepts: marks and measures.
Marks are timestamps you place at specific points in your code. Think of them as performance breadcrumbs: they tell you when something happened.
Measures calculate the elapsed time between two marks. They're the answer to "how long did that take?"
The beauty of this API is its simplicity. No complex configuration or heavyweight monitoring solutions. You're just saying "mark this moment" and "measure from here to there."
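In code, the whole surface is a couple of calls. A minimal sketch (the `demo:*` names here are arbitrary labels of mine, not a required format):

```javascript
// Place a mark, do some work, place another mark, then measure between them.
performance.mark('demo:start');

let total = 0;
for (let i = 0; i < 1e6; i++) total += i; // stand-in for real work

performance.mark('demo:end');

// In modern browsers (and Node), measure() returns the resulting entry,
// so you can read the duration immediately.
const entry = performance.measure('demo:total', 'demo:start', 'demo:end');
console.log(`demo:total took ${entry.duration.toFixed(2)}ms`);
```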
Core implementation strategy
When instrumenting web components, you want to capture the moments that matter most to users. Here are a few things worth tracking:
Component registration: When the component is defined and added to the custom elements registry
DOM connection: When the component is first inserted into the document
First render: When the component completes its initial render and becomes visible
Upgrade time: The total time from DOM connection to first render
These measurements tell you the complete story of your component's lifecycle performance. Let's look at how to implement this.
Implementation patterns
The first step is identifying where to place your marks. For web components, there are natural lifecycle hooks that map perfectly to performance milestones.
When your component connects to the DOM, drop a mark:
connectedCallback() {
performance.mark(`${this.localName}:connected`);
this.render();
}
In this and following examples, this.localName is the element's built-in DOM property that returns its tag name in lowercase, e.g., custom-button.
After your first render completes, drop another. You'll need to track whether this is the first render:
render() {
this.shadowRoot.innerHTML = `
<button>Click me</button>
`;
if (!this._hasRendered) {
this._hasRendered = true;
performance.mark(`${this.localName}:first-render`);
}
}
For component registration, you'll want to add a mark when you define the custom element:
customElements.define('my-component', MyComponent);
performance.mark('my-component:defined');
Creating meaningful measures
Now that you have marks in place, you can create measures that tell you how long critical paths are taking. The measure between connection and first render is especially valuable because it represents the user-facing initialization time:
render() {
this.shadowRoot.innerHTML = `
<button>Click me</button>
`;
if (!this._hasRendered) {
this._hasRendered = true;
performance.mark(`${this.localName}:first-render`);
performance.measure(
`${this.localName}:upgrade`,
`${this.localName}:connected`,
`${this.localName}:first-render`
);
}
}
Using consistent naming conventions
This is crucial: establish a naming convention and stick with it. I recommend using the component's tag name followed by a colon and the event name:
component-name:defined
component-name:connected
component-name:first-render
component-name:upgrade
This pattern makes it easy to filter and analyze your metrics later. When you're looking at DevTools with dozens of components on the page, clear naming saves you from the cognitive overhead of figuring out which measurement belongs to what.
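With that convention in place, per-component filtering becomes a one-liner. A small helper, assuming the `tag:event` names above (`measuresFor` is a name I'm inventing for illustration):

```javascript
// Collect every measure recorded for a given component tag name.
// Relies on the `tag-name:event` naming convention described above.
function measuresFor(tagName) {
  return performance
    .getEntriesByType('measure')
    .filter((entry) => entry.name.startsWith(`${tagName}:`));
}

// e.g. all measures for <my-button> elements on the page
const buttonMeasures = measuresFor('my-button');
```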
Building reusable patterns
Rather than duplicating this code across every component, you can centralize it in one of two ways: put it in a base class, or apply it as a mixin. Both approaches give you performance tracking for free once set up, but they serve different architectural needs.
Option 1: Base class
A base class works well when you want a standardized component foundation. You set it up once, and every component that extends it gets performance tracking ✨automagically✨.
export class PerformanceTrackedElement extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
this._hasRendered = false;
}
connectedCallback() {
if (!this.hasAttribute('data-perf-marked')) {
performance.mark(`${this.localName}:connected`);
this.setAttribute('data-perf-marked', '');
}
this.render();
}
_markFirstRender() {
if (!this._hasRendered) {
this._hasRendered = true;
performance.mark(`${this.localName}:first-render`);
performance.measure(
`${this.localName}:upgrade`,
`${this.localName}:connected`,
`${this.localName}:first-render`
);
}
}
}
Now any component can extend this base class:
export class MyButton extends PerformanceTrackedElement {
render() {
this.shadowRoot.innerHTML = `
<style>
button {
padding: 0.5rem 1rem;
border: none;
border-radius: 4px;
background: #007bff;
color: white;
cursor: pointer;
}
</style>
<button>Click me</button>
`;
this._markFirstRender();
}
}
Option 2: Mixin
If you already have a base class hierarchy or want more flexibility, a mixin lets you add performance tracking to any class without changing your inheritance chain. This is particularly useful when working with existing component libraries or when you need to compose multiple behaviors.
export const PerformanceTrackingMixin = (SuperClass) => {
return class extends SuperClass {
constructor() {
super();
this._hasRendered = false;
}
connectedCallback() {
if (!this.hasAttribute('data-perf-marked')) {
performance.mark(`${this.localName}:connected`);
this.setAttribute('data-perf-marked', '');
}
if (super.connectedCallback) {
super.connectedCallback();
}
}
_markFirstRender() {
if (!this._hasRendered) {
this._hasRendered = true;
performance.mark(`${this.localName}:first-render`);
performance.measure(
`${this.localName}:upgrade`,
`${this.localName}:connected`,
`${this.localName}:first-render`
);
}
}
disconnectedCallback() {
performance.clearMarks(`${this.localName}:connected`);
performance.clearMarks(`${this.localName}:first-render`);
performance.clearMeasures(`${this.localName}:upgrade`);
if (super.disconnectedCallback) {
super.disconnectedCallback();
}
}
};
};
Apply the mixin to any component class:
export class MyButton extends PerformanceTrackingMixin(HTMLElement) {
constructor() {
super();
this.attachShadow({ mode: 'open' });
}
connectedCallback() {
super.connectedCallback();
this.render();
}
render() {
this.shadowRoot.innerHTML = `
<style>
button {
padding: 0.5rem 1rem;
border: none;
border-radius: 4px;
background: #007bff;
color: white;
cursor: pointer;
}
</style>
<button>Click me</button>
`;
this._markFirstRender();
}
}
Or apply the mixin to your existing base class:
export class MyCard extends PerformanceTrackingMixin(MyBaseElement) {
}
Composing multiple mixins
The beauty of mixins is that you can compose multiple behaviors. If you have other mixins for logging, analytics, or error handling, you can stack them:
export class MyComponent extends
PerformanceTrackingMixin(
LoggingMixin(
AnalyticsMixin(HTMLElement)
)
) {
}
Choosing between base class and mixin
Use a base class when:
- You're starting fresh with a new component library
- You want a single, consistent foundation for all components
- You control the entire inheritance chain
- You prefer a simpler mental model
Use a mixin when:
- You need to add performance tracking to existing components
- You're working with multiple base classes
- You want to compose multiple behaviors
- You need flexibility to opt-in per component
Analyzing the data
Once you've instrumented your components, you need to actually look at the data. There are several ways to do this, each useful for different purposes.
You can retrieve all your marks and measures programmatically:
const marks = performance.getEntriesByType('mark');
const measures = performance.getEntriesByType('measure');
// Or look up specific entries by name:
const buttonMeasures = performance.getEntriesByName('my-button:upgrade');
In the browser, performance marks appear in the DevTools Performance tab, where you can see them visualized on a timeline alongside other browser events. This is incredibly useful for understanding how your component initialization fits into the overall page load sequence.
To view your marks:
- Open Chrome DevTools
- Go to the Performance tab
- Record a new profile
- Look for your marks in the User Timing section
Integration with Lighthouse
Lighthouse extracts User Timing data and displays it in your reports. This means your custom performance marks show up automatically in your Lighthouse audits without any additional configuration.
Reporting to analytics
Performance marks are only useful if you act on them. For production monitoring, you'll want to send these metrics to your analytics platform so you can track performance over time and across different user segments. The example below assumes an analytics client exposed as window.analytics; substitute your own reporting call.
_markFirstRender() {
if (!this._hasRendered) {
this._hasRendered = true;
performance.mark(`${this.localName}:first-render`);
const measure = performance.measure(
`${this.localName}:upgrade`,
`${this.localName}:connected`,
`${this.localName}:first-render`
);
if (window.analytics) {
window.analytics.track('Component Performance', {
component: this.localName,
duration: measure.duration,
userAgent: navigator.userAgent
});
}
}
}
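If you don't have a full analytics client, a lighter-weight option is to batch the measures yourself and flush them with navigator.sendBeacon when the page is hidden. A sketch; the `/perf-metrics` endpoint and the `flushPerformanceMeasures` helper are placeholders of mine:

```javascript
// Gather all ":upgrade" measures and send them in one beacon payload.
function flushPerformanceMeasures(endpoint) {
  const measures = performance
    .getEntriesByType('measure')
    .filter((entry) => entry.name.endsWith(':upgrade'))
    .map((entry) => ({ name: entry.name, duration: entry.duration }));

  if (measures.length > 0) {
    navigator.sendBeacon(endpoint, JSON.stringify(measures));
  }
}

// In the browser, flush when the page is hidden; pagehide fires more
// reliably than unload on mobile browsers:
// addEventListener('pagehide', () => flushPerformanceMeasures('/perf-metrics'));
```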
Advanced patterns
Conditional instrumentation
You probably don't want performance overhead in production unless you're actively monitoring. Use environment flags to control when instrumentation runs. This works equally well with both the base class and mixin approaches:
With a base class:
export class PerformanceTrackedElement extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
this._hasRendered = false;
}
connectedCallback() {
if (this.shouldTrackPerformance() && !this.hasAttribute('data-perf-marked')) {
performance.mark(`${this.localName}:connected`);
this.setAttribute('data-perf-marked', '');
}
this.render();
}
shouldTrackPerformance() {
// Placeholder check: always track off production, sample ~10% of sessions otherwise.
const isDev = !window.location.hostname.includes('production.com');
return isDev || Math.random() < 0.1;
}
_markFirstRender() {
if (!this._hasRendered && this.shouldTrackPerformance()) {
this._hasRendered = true;
performance.mark(`${this.localName}:first-render`);
performance.measure(
`${this.localName}:upgrade`,
`${this.localName}:connected`,
`${this.localName}:first-render`
);
}
}
}
With a mixin:
export const PerformanceTrackingMixin = (SuperClass) => {
return class extends SuperClass {
constructor() {
super();
this._hasRendered = false;
}
connectedCallback() {
if (this.shouldTrackPerformance() && !this.hasAttribute('data-perf-marked')) {
performance.mark(`${this.localName}:connected`);
this.setAttribute('data-perf-marked', '');
}
if (super.connectedCallback) {
super.connectedCallback();
}
}
shouldTrackPerformance() {
const isDev = !window.location.hostname.includes('production.com');
return isDev || Math.random() < 0.1;
}
_markFirstRender() {
if (!this._hasRendered && this.shouldTrackPerformance()) {
this._hasRendered = true;
performance.mark(`${this.localName}:first-render`);
performance.measure(
`${this.localName}:upgrade`,
`${this.localName}:connected`,
`${this.localName}:first-render`
);
}
}
disconnectedCallback() {
performance.clearMarks(`${this.localName}:connected`);
performance.clearMarks(`${this.localName}:first-render`);
performance.clearMeasures(`${this.localName}:upgrade`);
if (super.disconnectedCallback) {
super.disconnectedCallback();
}
}
};
};
Measuring nested component trees
When you have components that render other components, you can track the entire composition tree:
performance.mark('page-header:composition-start');
// ... render this component and all of its children ...
performance.mark('page-header:composition-end');
performance.measure(
'page-header:full-composition',
'page-header:composition-start',
'page-header:composition-end'
);
Tracking lazy-loaded components
For components loaded dynamically, track the time from import to render:
performance.mark('my-component:import-start');
const { MyComponent } = await import('./my-component.js');
performance.mark('my-component:import-end');
performance.measure(
'my-component:lazy-load',
'my-component:import-start',
'my-component:import-end'
);
customElements.define('my-component', MyComponent);
Cleaning up marks and measures
Performance entries accumulate in the browser's performance buffer. Use clearMarks and clearMeasures to clean up once data is no longer needed.
performance.clearMarks('my-component:connected');
performance.clearMarks('my-component:first-render');
performance.clearMeasures('my-component:upgrade');
Calling either method with no arguments clears every entry of that type:
performance.clearMarks();
performance.clearMeasures();
For components that might be added and removed from the DOM multiple times, clear marks in the disconnectedCallback:
disconnectedCallback() {
performance.clearMarks(`${this.localName}:connected`);
performance.clearMarks(`${this.localName}:first-render`);
performance.clearMeasures(`${this.localName}:upgrade`);
}
What to measure (and what to skip)
Not everything needs a performance mark. Focus on the moments that impact user experience:
Measure these
- Initial connection to DOM: Users are waiting for the component to appear
- First render complete: The component is now visible and interactive
- Component upgrade time: Total time from connection to first paint
- Critical async operations: Data fetching, heavy computations
- User interactions: Click handlers, form submissions, animations
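As an example of that last item, here's how a click handler might be timed end to end. A sketch; the handler and the work inside it are stand-ins of mine:

```javascript
// Stand-in for whatever expensive work the handler actually does.
function doExpensiveWork() {
  let total = 0;
  for (let i = 0; i < 1e6; i++) total += i;
  return total;
}

// Wrap the interaction in a start/end mark pair and measure between them.
function handleClick() {
  performance.mark('my-button:click-start');
  doExpensiveWork();
  performance.mark('my-button:click-end');
  performance.measure(
    'my-button:click',
    'my-button:click-start',
    'my-button:click-end'
  );
}
```

Attach it like any other listener, e.g. button.addEventListener('click', handleClick).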
Skip these
- Every update cycle: Too noisy, adds overhead
- Trivial getters and setters: Not worth the measurement cost
- Internal helper methods: Focus on user-facing milestones
- Style recalculations: The browser already tracks these
Real-world example
Let's put this all together with a complete example of a button component that tracks its performance:
export class PerformanceTrackedButton extends HTMLElement {
constructor() {
super();
this.attachShadow({ mode: 'open' });
this._hasRendered = false;
}
static get observedAttributes() {
return ['label', 'disabled'];
}
connectedCallback() {
if (!this.hasAttribute('data-perf-marked')) {
performance.mark(`${this.localName}:connected`);
this.setAttribute('data-perf-marked', '');
}
this.render();
}
attributeChangedCallback(name, oldValue, newValue) {
if (oldValue !== newValue) {
this.render();
}
}
render() {
const label = this.getAttribute('label') || 'Click me';
const disabled = this.hasAttribute('disabled');
this.shadowRoot.innerHTML = `
<style>
:host {
display: inline-block;
}
button {
padding: 0.5rem 1rem;
border: none;
border-radius: 4px;
background: var(--button-bg, #007bff);
color: white;
cursor: pointer;
font-family: inherit;
font-size: inherit;
}
button:disabled {
opacity: 0.5;
cursor: not-allowed;
}
button:not(:disabled):hover {
background: var(--button-hover-bg, #0056b3);
}
</style>
<button ${disabled ? 'disabled' : ''}>
${label}
</button>
`;
if (!this._hasRendered) {
this._hasRendered = true;
performance.mark(`${this.localName}:first-render`);
const measure = performance.measure(
`${this.localName}:upgrade`,
`${this.localName}:connected`,
`${this.localName}:first-render`
);
if (window.location.hostname === 'localhost') {
console.log(`${this.localName} upgraded in ${measure.duration.toFixed(2)}ms`);
}
}
}
}
customElements.define('perf-button', PerformanceTrackedButton);
performance.mark('perf-button:defined');
Common pitfalls
Forgetting to check for existing marks
If a component disconnects and reconnects, you'll create duplicate marks unless you guard against it. Always check if the mark already exists or use a flag to track marking state.
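A minimal guard that uses the performance buffer itself as the source of truth (`markOnce` is my own helper name):

```javascript
// Record a mark only if one with that name doesn't exist yet.
function markOnce(name) {
  if (performance.getEntriesByName(name, 'mark').length === 0) {
    performance.mark(name);
  }
}

// In connectedCallback:
//   markOnce(`${this.localName}:connected`);
```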
Over-instrumenting
Resist the urge to mark everything. Too many marks create noise and make it harder to find the signals that matter. Focus on initialization and critical user interactions.
Ignoring browser support
The User Timing API is supported in all major browsers, but always check if the performance object exists before calling its methods:
if (window.performance && performance.mark) {
performance.mark('my-component:connected');
}
Not testing on real devices
Your laptop is fast. Your users' phones might not be. Always test your instrumented components on representative devices to see what the real performance looks like.
Dos and don'ts
Do
- Track component lifecycle milestones: connection, first render, upgrade time
- Use consistent naming conventions across your component library
- Report metrics to analytics for real-world performance tracking
- Clean up marks and measures when they're no longer needed
- Choose the right pattern: use a base class for new libraries, a mixin for existing ones
- Compose mixins when you need multiple behaviors
- Always call super methods in mixins to maintain the inheritance chain
Don't
- Mark every render cycle: too expensive and noisy
- Forget to check for performance API availability: defensive coding matters
- Ignore the data: measure, analyze, act
- Over-complicate your naming: keep it simple and descriptive
- Skip testing on slower devices: that's where performance matters most
- Forget to call super methods in mixins: breaks the inheritance chain
Wrapping up
Performance monitoring doesn't have to be complicated. The User Timing API gives you exactly what you need to understand how your web components perform in the wild. By instrumenting key lifecycle moments and reporting metrics that matter, you'll have the insights you need to make data-driven decisions about optimization.
The pattern is straightforward: mark important moments, measure the time between them, and act on what you learn. Whether you choose a base class or a mixin depends on your architecture—base classes work great for new projects with a clean slate, while mixins shine when you need to add tracking to existing components or compose multiple behaviors. Either way, you set it up once and suddenly your entire component library has performance visibility without any per-component boilerplate.
Start simple: track connection and first render. See what the numbers tell you. Then expand to track the interactions and operations that matter most to your users. The data will surprise you, and that's exactly the point. You can't optimize what you can't measure, and now you can measure everything that matters.
Six months from now when you're investigating a performance regression or optimizing for a new market with slower network speeds, you'll be grateful you have this data. Your users will be grateful too, even if they never know you're tracking it. Build fast components, measure to prove they're fast, and keep measuring to make sure they stay that way.