
lighthouse performance output is significantly different from the browser inbuilt one #31

Open
fasatrix opened this issue Jul 26, 2022 · 7 comments

Comments

@fasatrix

Hi there,
I noticed that the reported metrics are significantly different from the browser's built-in Lighthouse. Why is that (am I doing something wrong)?

test:

```typescript
import { chromium } from 'playwright';
import type { Browser } from 'playwright';
import { playAudit } from 'playwright-lighthouse';
import { test as base } from '@playwright/test';

export const lighthouseTest = base.extend<
    {},
    { port: number; browser: Browser }
>({
    port: [
        async ({}, use) => {
            await use(9222);
        },
        { scope: 'worker' },
    ],

    browser: [
        async ({ port }, use) => {
            const browser = await chromium.launch({
                args: [`--remote-debugging-port=${port}`],
                headless: false,
            });
            await use(browser);
        },
        { scope: 'worker' },
    ],
});

lighthouseTest.describe('Lighthouse', () => {
    lighthouseTest('should pass lighthouse tests', async ({ page, port }) => {
        await page.goto('https://angular.io');
        await playAudit({
            page,
            port,
            opts: { screenEmulation: { disabled: true } },
            thresholds: {
                performance: 40,
                accessibility: 50,
                'best-practices': 50,
                seo: 50,
                pwa: 50,
            },
        });
    });
});
```

[screenshot: scores reported by playwright-lighthouse]

From the built-in Lighthouse:
[screenshot: Dev Tools Lighthouse scores]

What I have noticed is that the output from your lib is very close to the mobile output. However, I added `opts: { screenEmulation: { disabled: true } }` to make sure mobile emulation is not used.
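Worth noting (an assumption to verify against your installed Lighthouse version): `screenEmulation: { disabled: true }` only turns off viewport emulation. The default Lighthouse config still audits with the mobile form factor and a simulated 4x CPU slowdown, which would explain scores tracking the mobile output. A sketch of desktop-style settings, with values mirrored from Lighthouse's published desktop preset:

```typescript
// Desktop-style Lighthouse settings (sketch; values mirror Lighthouse's
// desktop preset -- verify against the version you have installed).
const desktopSettings = {
  formFactor: 'desktop',
  screenEmulation: {
    mobile: false,
    width: 1350,
    height: 940,
    deviceScaleFactor: 1,
    disabled: false,
  },
  throttling: {
    // Desktop preset: no simulated CPU slowdown, broadband-like network.
    rttMs: 40,
    throughputKbps: 10 * 1024,
    cpuSlowdownMultiplier: 1,
    requestLatencyMs: 0,
    downloadThroughputKbps: 0,
    uploadThroughputKbps: 0,
  },
};
```

These would go into the `settings` of a config extending `lighthouse:default` that you hand to `playAudit`, rather than `opts.screenEmulation` alone.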

@ryanrosello-og

hey @fasatrix , I was also experiencing wild numbers when I run my tests. I then stumbled upon this article which has a good explanation as to why the numbers may fluctuate and offers up some suggestions on how to mitigate.

https://github.com/GoogleChrome/lighthouse/blob/master/docs/variability.md

@fasatrix
Author

> hey @fasatrix , I was also experiencing wild numbers when I run my tests. I then stumbled upon this article which has a good explanation as to why the numbers may fluctuate and offers up some suggestions on how to mitigate.
>
> https://github.com/GoogleChrome/lighthouse/blob/master/docs/variability.md

Hey @ryanrosello-og, thanks for that. My tests were run from the same computer on the same network, multiple times, using both the web browser version and this package. So the environment was consistently the same, yet the results were consistently different.
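Even on a fixed machine and network, Lighthouse's variability doc recommends comparing the median of several runs rather than single scores. A small, self-contained aggregation helper (hypothetical; it assumes you collect the category scores yourself, e.g. from the JSON reports `playAudit` can write):

```typescript
// Median of category scores collected across several audit runs.
// Pure helper; gathering the scores (from playAudit / report JSON)
// is up to the caller.
function medianScore(scores: number[]): number {
  if (scores.length === 0) {
    throw new Error('need at least one score');
  }
  const sorted = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}
```

Comparing the median of, say, five package runs against the median of five in-browser runs makes a systematic offset (as opposed to run-to-run noise) easier to see.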

@badsyntax
Contributor

> What I have noticed is that the output from your lib is very close to the mobile output. However I added opts: { screenEmulation: { disabled: true } } to make sure mobile device is not run

Can you try with different configs?

eg:

```typescript
import lighthouseDesktopConfig from 'lighthouse/lighthouse-core/config/lr-desktop-config';
// import lighthouseMobileConfig from 'lighthouse/lighthouse-core/config/lr-mobile-config';

await playAudit({
  // ...
  config: lighthouseDesktopConfig,
  // ...
});
```

@das-en

das-en commented Dec 8, 2022

I am having the same issue as @fasatrix . Any update on this?

@YonatanKra

Hi,

I'm having the same thing for a11y as well.

Lighthouse in dev tools reports a11y errors while the a11y score when using playwright-lighthouse is 100.

Here's my config:

```typescript
const config = {
	extends: 'lighthouse:default',
	settings: {
		maxWaitForFcp: 15 * 1000,
		maxWaitForLoad: 35 * 1000,
		formFactor: 'desktop',
		screenEmulation: {
			mobile: false,
			width: 1350,
			height: 940,
			deviceScaleFactor: 1,
			disabled: false,
		},
		emulatedUserAgent: 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36',
		skipAudits: [
			'html-has-lang',
			'document-title'
		],
	},
};

await testWrapper?.screenshot({
	path: './snapshots/select-a11y.png',
});

await playAudit({
	page: page,
	thresholds: {
		accessibility: 100,
	},
	port: 9222,
	reports: {
		formats: {
			html: true,
		},
		name: 'select-a11y',
	},
	config,
});
```

In the dev tools I see 13 audits while in the playwright-lighthouse output I see only 3.

Any idea why?
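One way to narrow down which accessibility audits each run actually executed is to diff the report JSON from both sides. A sketch against the public Lighthouse result shape (`categories[*].auditRefs` plus the top-level `audits` map); the interface below is a trimmed-down assumption, not the full LHR type:

```typescript
// Trimmed-down view of a Lighthouse result (assumption: only the fields
// needed here; the real LHR has many more).
interface LhResult {
  categories: Record<string, { auditRefs: { id: string }[] }>;
  audits: Record<string, { score: number | null }>;
}

// List accessibility audits that ran and did not fully pass (score < 1).
// Audits with a null score (not applicable / informative) are skipped,
// which alone can make two runs appear to contain different audit counts.
function failingA11yAudits(result: LhResult): string[] {
  const refs = result.categories['accessibility']?.auditRefs ?? [];
  return refs
    .map((ref) => ref.id)
    .filter((id) => {
      const score = result.audits[id]?.score;
      return score !== null && score !== undefined && score < 1;
    });
}
```

Running this over the Dev Tools export and the playwright-lighthouse JSON report would show whether the missing 10 audits were skipped entirely (e.g. via `skipAudits`) or ran with `null` scores.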

@Kolby-Udacity

Kolby-Udacity commented Jun 6, 2023

I'm seeing the same problem. Running the test in my browser returns a performance score of ~85; running it in Playwright I get ~70; running it in CI I get around ~50.

@vincerubinetti

I have the same issue.

I have a Vite React SPA, and I'm using this package for automated performance and accessibility tests. The performance scores when I use the package are consistently less than half of what I get when running Lighthouse directly in Chrome:

[screenshots, 2024-10-28: Lighthouse package score vs Dev Tools Lighthouse score]

Both cases are running on a production build of my app, with npm run build && npm run preview (Vite's mechanism for building an optimized production build and serving it to be previewed locally). My Playwright is configured to use this same command for all tests.

This is not variability between runs/devices/whatever. This is analyzing the same page multiple times on the same device. Something is amiss, but I'm not sure whether the problem lies in this package, Lighthouse itself, or Vite.

I've tried using the config recommendations above, with no success.

Another thing to note: running playwright test with the --debug flag (opens a headed browser and lets you step through the execution of tests one line at a time) and running the Dev Tools Lighthouse in the opened browser there produces the same degraded performance result.

vincerubinetti added a commit to JRaviLab/molevolvr2.0 that referenced this issue Oct 29, 2024
Closes #33 

- upload test reports when running on gh-actions
- upgrade all packages
- change playwright config to use more cpu cores/workers
- change playwright server command to build production version of site
- hoist list of `paths` (routes) into separate file
- in axe test, instead of `console.warn`, use playwright test annotation
(shows when you click on a test that passed in the html report)
- change nature of json imports in tests to get rid of node log warnings
- implement lighthouse tests following the [`playwright-lighthouse`
instructions
here](https://github.com/abhinaba-ghosh/playwright-lighthouse?tab=readme-ov-file#usage-with-playwright-test-runner)
- choose appropriate thresholds for lighthouse test, except for
performance which is held back by [some kind of upstream
bug](abhinaba-ghosh/playwright-lighthouse#31)
- APCA color contrast checking is unfortunately [NOT available in
lighthouse](GoogleChrome/lighthouse#16237 (reply in thread))

---------

Co-authored-by: Vincent Rubinetti <[email protected]>

7 participants