Sunday, 2 November 2025

Core Web Vitals

Core Web Vitals (CWV) are Google’s user-experience metrics that measure page speed, responsiveness, and visual stability.

They directly influence page experience signals, which in turn affect rankings, crawl efficiency, and user retention.

The three metrics are:

  • Largest Contentful Paint (LCP): loading speed.
  • Interaction to Next Paint (INP): responsiveness (INP replaced First Input Delay, FID, in March 2024).
  • Cumulative Layout Shift (CLS): visual stability.

How They Affect Rankings
  • Direct impact: CWV are part of Google’s Page Experience ranking signals. If two pages have similar relevance, the faster and more stable one ranks higher.
  • Indirect impact: Better CWV → lower bounce rate, higher engagement and conversion → positive behavioral signals → stronger SEO over time.
  • Crawl efficiency: Fast, stable pages allow Googlebot to crawl more URLs — important for large documentation sites like Stripe’s.
How to Improve Them

A. Improve LCP (Loading Speed)
  • Optimize hero images and SVGs; use WebP/AVIF formats.
  • Use server-side rendering (SSR) or static generation for docs pages.
  • Implement lazy-loading for below-the-fold assets.
  • Use CDNs (Stripe already does globally) to serve content near the user.
  • Preload key assets: fonts, CSS, hero images.
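The LCP checklist above can be sketched in markup. File names, dimensions, and paths below are placeholders:

```html
<head>
  <!-- Preload the hero image and a critical font so the browser fetches them early -->
  <link rel="preload" as="image" href="/img/hero.avif" type="image/avif">
  <link rel="preload" as="font" href="/fonts/main.woff2" type="font/woff2" crossorigin>
</head>
<body>
  <!-- Serve modern formats with a fallback; fetch the LCP element at high priority -->
  <picture>
    <source srcset="/img/hero.avif" type="image/avif">
    <source srcset="/img/hero.webp" type="image/webp">
    <img src="/img/hero.jpg" width="1200" height="600" alt="Hero" fetchpriority="high">
  </picture>
  <!-- Lazy-load assets below the fold -->
  <img src="/img/diagram.webp" width="800" height="400" alt="Diagram" loading="lazy">
</body>
```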
B. Improve INP (Interactivity)
  • Minimize heavy JavaScript bundles; split code and defer non-critical scripts.
  • Use HTTP/2 and caching to reduce latency.
  • Audit third-party scripts (analytics, chat widgets) that block input.
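One common INP fix is splitting a long task into small chunks and yielding back to the main thread between them so queued input events can run. A minimal sketch, where `items`, `process`, and the chunk size are placeholders (in newer browsers `scheduler.yield()` can replace the `setTimeout` trick):

```javascript
// Split an array into fixed-size chunks.
function chunkWork(items, chunkSize) {
  const chunks = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize));
  }
  return chunks;
}

// Process the chunks one at a time, yielding between them so the
// browser can handle pending input instead of blocking on one long task.
async function processInChunks(items, process, chunkSize = 100) {
  const results = [];
  for (const chunk of chunkWork(items, chunkSize)) {
    results.push(...chunk.map(process));
    // Yield back to the main thread before the next chunk.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```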
C. Improve CLS (Visual Stability)
  • Always set width/height attributes for images and embeds.
  • Reserve space for dynamic elements (ads, banners, cookie notices).
  • Load fonts using font-display: swap to avoid text shifts.
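A minimal sketch of the image and font fixes above (font and image paths are placeholders):

```html
<style>
  /* Fall back to a system font first, then swap in the web font once loaded */
  @font-face {
    font-family: "BrandFont";
    src: url("/fonts/brand.woff2") format("woff2");
    font-display: swap;
  }
</style>
<!-- Explicit dimensions let the browser reserve space before the image loads -->
<img src="/img/chart.webp" width="640" height="360" alt="Chart">
```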

What is Largest Contentful Paint (LCP)?

Largest Contentful Paint is the metric that measures how long a page takes to render its largest content element, an image or block of text, in the visible viewport.

Google specifies that this metric only considers content above the page’s fold, meaning everything that appears without scrolling.

There is another relevant point: the type of content considered. The metric only counts the loading of elements relevant to the user experience, namely:

  • <img> elements.
  • <image> elements inside SVGs.
  • Video poster images (thumbnails).
  • Background images loaded via CSS url().
  • Block-level text elements, such as paragraphs, headings, and lists.

How to measure it?

You can measure LCP in two ways:

  • In the field, i.e., collected from real users visiting the site.
  • In the lab, i.e., through performance simulations run in a controlled environment.

For each of these methods, different tools speed up the work and make the measurement more accurate. Starting with the field method, you could use:

  • Chrome User Experience Report;
  • PageSpeed Insights;
  • Search Console (Core Web Vitals report).

As for the lab tools, the most recommended are:

  • Chrome DevTools.
  • Lighthouse.
  • WebPageTest.
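For a quick field-style reading in the browser console, LCP can also be watched with a `PerformanceObserver`. The helper below just picks the latest candidate, which is how the final LCP value is determined; the observer wiring only runs where the entry type is supported (i.e., in a browser):

```javascript
// The reported LCP is the last (largest) candidate seen before user input.
function latestLcp(entries) {
  if (entries.length === 0) return null;
  const last = entries[entries.length - 1];
  // renderTime is preferred; loadTime is the fallback for some cross-origin images.
  return last.renderTime || last.loadTime;
}

// Browser-only wiring; skipped where the entry type is unsupported (e.g., Node).
if (typeof PerformanceObserver !== "undefined" &&
    PerformanceObserver.supportedEntryTypes.includes("largest-contentful-paint")) {
  new PerformanceObserver((list) => {
    console.log("LCP candidate (ms):", latestLcp(list.getEntries()));
  }).observe({ type: "largest-contentful-paint", buffered: true });
}
```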

What is a good score?

Since metric monitoring is essential, it is also necessary to understand the minimum performance standards expected.

Google considers an LCP of up to 2.5 seconds a good result; between 2.5 and 4 seconds needs improvement, and anything above 4 seconds is rated poor.


How to optimize it?

  • Optimize image sizes

Always serve images at the right dimensions. Your theme or platform usually specifies the expected sizes for the desktop and mobile versions; using the indicated size avoids shipping oversized files, which inflates LCP. Prefer WebP-format images.

  • Use an image CDN

A CDN service can make images load faster. ImageEngine, for example, adapts images automatically, taking into account the accessing device, file size, and user location.

  • Avoid loading images with JavaScript

Loading images through JavaScript delays them until scripts execute, so the best thing to do is to leave this work to the browser. That prevents delays and helps keep LCP within the recommended range.

  • Choose a good hosting service

Your hosting service also impacts page loading time. Choose a reputable provider whose infrastructure matches your site’s size and traffic volume.

  • Minify HTML, CSS & JavaScript
Minification reduces file size by stripping whitespace, comments, and other characters the code doesn’t need at runtime.
  • Enable caching
  • Use system fonts
If you use a custom web font, add font-display: swap; to its @font-face rule so the browser initially renders text in a system font and then swaps in the custom font once it has loaded from the server.
  • Keep DOM small
Excessive DOM size happens when there are too many DOM nodes (or HTML tags) on your page or when they are nested too deep. 
  • Don't put render-blocking JavaScript at the top of the page; use the defer attribute

Async allows parallel download, but files execute as soon as they finish downloading, which can interrupt parsing and run scripts out of order. Defer also allows parallel download, but files execute in order after HTML parsing is done. So defer is usually the safer choice for scripts that depend on the DOM, while async suits independent scripts such as analytics.
  • Get the device list from Google Analytics and test on the real devices your users are actually using.
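The async/defer comparison above in markup (script names are placeholders):

```html
<head>
  <!-- Plain script: blocks the HTML parser until downloaded and executed -->
  <!-- <script src="app.js"></script> -->

  <!-- async: downloads in parallel, runs as soon as it arrives (order not guaranteed) -->
  <script async src="analytics.js"></script>

  <!-- defer: downloads in parallel, runs in document order after parsing finishes -->
  <script defer src="app.js"></script>
</head>
```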

What is First Input Delay (FID)?

FID is the time your website takes to respond to the user’s first touch or click.

First Input Delay is the metric that measures the delay in response time for a specific user action performed for the first time on a website. Note that in March 2024 Google replaced FID with Interaction to Next Paint (INP) as the responsiveness metric in Core Web Vitals.

How to measure it?

Since FID is a metric captured only from real user interactions, it cannot be replicated in a lab setting.

However, Total Blocking Time (TBT) is a metric that essentially measures how much time a browser is blocked and therefore can closely estimate FID. 

To provide a good user experience, sites should strive to have a Total Blocking Time of less than 200 milliseconds when tested on average mobile hardware.

How to find the Total Blocking Time (TBT) score of a landing page?

  • Press F12 (or right-click and choose "Inspect") in the Chrome browser
  • Enter the URL
  • Go to the "Performance" tab, enable the "Web Vitals" checkbox, and click the reload button

How TBT is calculated?

For example, consider a page load whose main thread runs five tasks, three of which are Long Tasks because their duration exceeds 50 ms. For each long task, only the portion beyond the 50 ms threshold counts as blocking time. So while the total time spent running tasks on the main thread is 560 ms, only 345 ms of that time is considered blocking time.
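The arithmetic above is easy to check in a few lines: blocking time is whatever each task spends past the 50 ms long-task threshold. The task durations below are assumed values chosen to reproduce the example's 560 ms / 345 ms figures:

```javascript
// Blocking time of one task = its duration beyond the 50 ms long-task threshold.
const LONG_TASK_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce(
    (sum, d) => sum + Math.max(0, d - LONG_TASK_THRESHOLD_MS),
    0
  );
}

// Five tasks totaling 560 ms of main-thread work; three exceed 50 ms.
const tasks = [250, 90, 35, 30, 155];
console.log(totalBlockingTime(tasks)); // 345
```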

What is a good score?

Google’s threshold for a good FID score is under 100 ms; anything above 300 ms is considered poor.



How to optimize it?
  • Reduce JS execution time
Minify and compress your code & Remove unused code.
  • Optimize CSS
Minify and compress your CSS & Remove unused code.
  • Minimize main thread work
Utilize web workers to do tasks off the main thread & Cut the complexity of your styles and layouts
  • Reduce third party code
Third-party JavaScript often refers to scripts that can be embedded into any site directly from a third-party vendor. These scripts can include ads, analytics, widgets and other scripts that make the web more dynamic and interactive.

What is Cumulative Layout Shift (CLS)?

Cumulative Layout Shift (or CLS) is a measure of how much a webpage unexpectedly shifts during its life. For example, if a website visitor loaded a page and, while they were reading it, a banner loads and the page jumps down, that would constitute a large CLS score.

How do you measure Cumulative Layout Shift?

To analyze performance in PageSpeed Insights:
  • Enter a website URL into Google's PageSpeed Insights tool.
  • Click 'Analyze.'
  • Check your performance. You can review both mobile and desktop performance, which you can switch between using the top left corner navigation.
To analyze performance using Lighthouse tools:
  • Open up the website you want to analyze in Chrome.
  • Navigate to Developer Tools by clicking the three dots in the top right corner of the browser window, choosing “More Tools” and then “Developer Tools.”
  • When the console opens, choose “Lighthouse” from the options along the top.
  • Click “Generate Report.”
How to analyze Cumulative Layout Shift issues on a website?

  • Press F12 (or right-click and choose "Inspect") in the Chrome browser
  • Enter the URL
  • Go to the "Performance" tab, enable the "Web Vitals" checkbox, and click the reload button
  • Click each "Layout Shift" entry marked in red to analyze which elements need to be fixed
What is a good CLS score?

A good cumulative layout score is anything less than 0.1.
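To make the 0.1 threshold concrete: each individual layout shift is scored as impact fraction times distance fraction, and CLS sums the shifts in the worst burst. A minimal sketch of the per-shift score (the example fractions are illustrative):

```javascript
// Score of a single layout shift:
//   impactFraction   = share of the viewport affected by the shifting elements
//   distanceFraction = largest move distance / viewport's larger dimension
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// An element covering 75% of the viewport that moves by 25% of the
// viewport's larger dimension scores 0.1875, well above the 0.1 threshold.
console.log(layoutShiftScore(0.75, 0.25)); // 0.1875
```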
How to optimize it?
  • Fonts
Use fonts that are metrically similar to system fonts, start with a system font as the fallback, and self-host your web fonts.
  • Images
Optimize images in JPEG or PNG format, convert the optimized files to WebP, and always specify explicit dimensions.
  • Iframes or Embedded Elements
Give your embedded elements explicit dimensions to avoid layout shifts.
  • Ads
Allot space for ad slots and set a fallback image in case an ad spot doesn’t fill.
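The embed and ad advice above can be sketched as follows; the class names, sizes, and fallback image path are placeholders:

```html
<style>
  /* Reserve the ad slot up front; show a fallback if no ad fills it */
  .ad-slot {
    min-height: 250px;
    background: url("/img/fallback-promo.webp") center / cover no-repeat;
  }
  /* Embeds keep their shape before the content arrives */
  iframe.video-embed { aspect-ratio: 16 / 9; width: 100%; border: 0; }
</style>

<div class="ad-slot"><!-- ad script injects here --></div>
<iframe class="video-embed" src="https://example.com/embed" title="Demo video"></iframe>
```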

How to find current Device Usage report in Google Analytics?
  • In Universal Analytics, open the Audience tab and select Mobile → Devices. (In GA4, the equivalent report is under Reports → Tech → Tech details.)

Imp SEO QA

 What are the main ranking factors that influence Google’s search results today?

1. Content Quality & Relevance

  • Helpful, original content that satisfies search intent.
  • Strong E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness).
  • Proper keyword usage in titles, headings, and contextually throughout the content.
  • Clear topical coverage — depth matters more than word count.
  • Semantic relevance (using entities and related topics that Google associates with the query).

2. Technical SEO

  • Crawlability: No blocked pages or broken links; optimized robots.txt and sitemap.
  • Indexability: Correct canonical tags and meta directives.
  • Core Web Vitals: Fast load time, interactivity, and visual stability.
  • Mobile-first indexing: Fully responsive design.
  • Structured data: Schema markup for better context and rich snippets.

3. Backlinks & Authority

  • High-quality backlinks from relevant, authoritative domains.
  • A natural anchor text profile — not over-optimized.
  • Internal linking that distributes authority effectively.
  • Mentions (implied links) and digital PR also contribute to trust signals.

4. User Engagement Signals

  • Click-through rate (CTR) from SERPs.
  • Dwell time / engagement — users spending time on your content.
  • Low bounce rate and strong on-site navigation.
  • Positive brand searches and repeat visits.

5. Local & Entity Signals (if applicable)

  • Consistent NAP (Name, Address, Phone) info for local businesses.
  • Verified Google Business Profile.
  • Entity relationships via structured data and consistent mentions across the web.

AI & LLM

 Difference between LLMs and Google Search

LLM (e.g., ChatGPT)

  • Generates new answers using trained data.
  • Works on static knowledge (not live web).
  • Provides conversational, summarized responses.
  • May produce incorrect info (no direct sources).
  • Focus on understanding context and language.

Google Search

  • Retrieves existing web pages using crawling and indexing.
  • Always updated with real-time web data.
  • Shows ranked links and snippets.
  • Easier to verify info (shows sources).
  • SEO directly affects visibility and ranking.

What is a Large Language Model (LLM), and how does it differ from traditional search engines like Google?

A Large Language Model (LLM) is an advanced AI system, such as GPT-5 or Gemini, trained on vast amounts of text data to understand, generate, and reason with natural language.

It uses deep learning (transformer architecture) to predict the next word in a sequence, allowing it to:

  • Generate human-like text,
  • Answer questions,
  • Summarize complex documents,
  • Write code or explain APIs,
  • And perform conversational reasoning.

LLMs don’t “search” the web in real time; they use patterns and relationships learned from data to produce answers.


How do LLMs (like ChatGPT) change the way people search for information online?

  • The Shift: From “Find Information” to “Get Answers”

Traditional search (Google/Bing) = users browse links to find answers.

LLMs (like ChatGPT, Gemini, or Perplexity) = users get the answer directly, often in conversational form.

This means people no longer search for pages; they ask for solutions.

  • Key Ways LLMs Change Search Behavior

A. Conversational Queries

Users ask longer, more natural, multi-step questions, closer to how they’d talk to a human expert.

B. Fewer Clicks, More Direct Answers

LLMs summarize multiple sources in one response, which means:

  • Less traffic to traditional websites.
  • More emphasis on being the cited or trusted source that LLMs draw from.

C. Contextual, Multi-Turn Search

LLMs allow follow-up questions: users refine their query conversationally (“Show me the code example,” “Compare costs”).

That creates deeper, guided search sessions, something classic Google Search doesn’t handle natively.

D. Emergence of AI Discovery Platforms

Tools like Perplexity.ai, ChatGPT with browsing, and Google AI Overviews blend LLMs and traditional search.

So content needs to be:

  1. Factually precise (to avoid being excluded),
  2. Semantically rich (clear context and relationships),
  3. Authoritative (so AI models trust it enough to reference).

  • SEO Implications for a Company like Stripe

  1. Focus on “answerability”: make every guide or page self-contained, clear, and factual.
  2. Structure content with schema, FAQs, definitions, and question-based subheadings.
  3. Maintain authoritative tone & brand consistency so Stripe is recognized as a reliable source in AI answers.
  4. Track brand mentions and AI citation visibility (e.g., Perplexity sources, AI Overviews).

How can businesses optimize their content for visibility in AI-generated answers or chatbots?

  • Understand the New Discovery Model

AI chatbots (ChatGPT, Gemini, Perplexity, Copilot, etc.) don’t show a list of blue links; they synthesize answers using trusted sources.

To appear in those synthesized answers, a business must create content that’s factual, structured, and authoritative enough for AI systems to confidently reference or cite.

  • Optimize for “Answerability”

AI models extract concise, verifiable facts. So content should be designed for direct question-answer use:

  1. Use clear, question-based headings (H2s), e.g., “What is a payment gateway?” or “How does Stripe’s API handle recurring billing?”
  2. Provide short, factual summaries (40-60 words) immediately below those headings.
  3. Add definitions, stats, and examples in plain language.
  4. Use FAQ sections and schema markup (FAQ, HowTo, Product) to help AIs identify context.
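As a sketch of point 4, an FAQ entry can be marked up with schema.org JSON-LD so search engines and AI systems can parse the question/answer pair directly (the question and answer text here are illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is a payment gateway?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "A payment gateway is a service that authorizes and processes card payments between a customer and a merchant."
    }
  }]
}
</script>
```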

  • Strengthen E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness)

AI models prioritize credible voices.

To reinforce that:

  1. Use expert bylines, e.g., “Written by Stripe Developer Advocate.”
  2. Reference official documentation, research, or regulatory standards.
  3. Maintain consistent, brand-backed facts (API specs, pricing, security compliance).
  4. Keep content updated and timestamped; freshness signals reliability.

  • Structure Content for Machine Readability

  1. Implement structured data markup so AIs understand entities (products, prices, use cases).
  2. Use a clean HTML hierarchy (H1 → H2 → H3) with consistent topic grouping.
  3. Create clear internal link networks between core topics (e.g., from “Payments API” to “Billing Integration”).
  4. Avoid fluff; AI models downweight vague or promotional language.

  • Publish on Trusted Platforms & Earn Mentions

LLMs train or reference content from high-authority sources:

  1. Earn citations or backlinks from reputable media, developer forums, GitHub, or industry case studies.
  2. Encourage mentions on discussion platforms (Reddit, Stack Overflow, Medium), where AI models often learn contextual examples.
  3. Use open-access content (not gated) to increase crawl and training visibility.