Merge branch 'main' into webhook

commit 3a202ece6a
@@ -2,4 +2,6 @@
 !bin
 !lib
 !package.json
 !LICENSE
+!npm-shrinkwrap.json
+!docker
@@ -4,3 +4,6 @@ assets/*
 sitespeed-result/*
 lib/plugins/yslow/scripts/*
 lib/plugins/html/assets/js/*
+bin/browsertimeWebPageReplay.js
+test/data/*
+test/prepostscripts/*
@@ -5,19 +5,27 @@
     "es6": true
   },
   "parserOptions": {
-    "ecmaVersion": 8
+    "ecmaVersion": "latest",
+    "sourceType": "module"
   },
-  "plugins": ["prettier"],
-  "extends": "eslint:recommended",
+  "plugins": ["prettier", "unicorn"],
+  "extends": ["eslint:recommended", "plugin:unicorn/recommended"],
   "rules": {
     "prettier/prettier": [
       "error",
       {
-        "singleQuote": true
+        "singleQuote": true,
+        "trailingComma": "none",
+        "arrowParens": "avoid",
+        "embeddedLanguageFormatting": "off"
       }
     ],
     "require-atomic-updates": 0,
     "no-extra-semi": 0,
-    "no-mixed-spaces-and-tabs": 0
+    "no-mixed-spaces-and-tabs": 0,
+    "unicorn/filename-case": 0,
+    "unicorn/prevent-abbreviations": 0,
+    "unicorn/no-array-reduce": 0,
+    "unicorn/prefer-spread":0
   }
 }
@@ -21,4 +21,4 @@ If you have an idea or something that you need sitespeed.io to handle, add an is
 
 Thanks for your time & support!
 
-Peter, Tobias & Jonathan
+Peter
@@ -1,9 +0,0 @@
-
-<!--
-Thanks for reporting issues back to sitespeed.io!
-
-Please read https://www.sitespeed.io/documentation/sitespeed.io/bug-report/ and create reproducable issues.
-
-Make sure you run the latest stable version, we move quite fast and fixes things.
-
--->
@@ -0,0 +1,65 @@
+name: Bug Report
+description: File a bug report
+labels: [bug]
+body:
+  - type: markdown
+    attributes:
+      value: Thanks for reporting issues back to sitespeed.io!
+  - type: checkboxes
+    id: Reproducable
+    attributes:
+      label: Have you read the documentation?
+      description: Please double check that this question hasn't already answered in the [documentation](https://www.sitespeed.io/documentation/sitespeed.io/) (use the `Search`). Also please read [how to make a good bug report](https://www.sitespeed.io/documentation/sitespeed.io/bug-report/) and check [how to debug your script](https://www.sitespeed.io/documentation/sitespeed.io/scripting/#debug).
+      options:
+        - label: Yes, I've read the [how to make a reproducable bug guide](https://www.sitespeed.io/documentation/sitespeed.io/bug-report/)
+          required: true
+        - label: Yes, I've read the [how to debug my script guide](https://www.sitespeed.io/documentation/sitespeed.io/scripting/#debug)
+          required: false
+  - type: input
+    id: url
+    attributes:
+      label: URL
+      description: What URL did you run sitespeed.io on? If you can't share your URL please make a minimial repro to a public location (e.g. https://glitch.com/, http://jsbin.com/, etc)
+      placeholder: https://example.com
+    validations:
+      required: true
+  - type: textarea
+    id: whaw
+    attributes:
+      label: What are you trying to accomplish
+      description: A brief description of what you tried to do and what went wrong.
+    validations:
+      required: true
+  - type: dropdown
+    id: browser
+    attributes:
+      label: What browser did you use?
+      description: Extra bonus if you try the issue in multiple browsers
+      multiple: true
+      options:
+        - Chrome
+        - Firefox
+        - Edge
+        - Safari Mac OS
+        - Safari iOS
+        - Chrome Android
+        - Firefox Android
+        - Other
+    validations:
+      required: true
+  - type: textarea
+    id: how-to-reproduce
+    attributes:
+      label: How to reproduce
+      description: Please copy and paste how you run so we can reproduce. This will be automatically formatted into code, so no need for backticks. Remember to follow the [how to make a good bug report guide](https://www.sitespeed.io/documentation/sitespeed.io/bug-report/)!
+      render: shell
+    validations:
+      required: true
+  - type: textarea
+    id: logs
+    attributes:
+      label: Log output
+      description: Please copy and paste the full log output from your test (please DO NOT take a screenshot of the log output). This will be automatically formatted into code, so no need for backticks. If the log output is large please use a [gist](https://gist.github.com)!
+      render: shell
+    validations:
+      required: false
@@ -0,0 +1,15 @@
+name: New feature or improvement
+description: Suggest a new feature or something that can be improved
+labels: [feature, improvement]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Suggest a new feature or something that can be improved
+  - type: textarea
+    id: your-idea
+    attributes:
+      label: Feature/improvement
+      description: You can also disuss new features/improvements in the [sitespeed.io Slack channel](https://join.slack.com/t/sitespeedio/shared_invite/zt-296jzr7qs-d6DId2KpEnMPJSQ8_R~WFw).
+    validations:
+      required: true
@@ -0,0 +1,15 @@
+name: Question
+description: Ask a question about sitespeed.io
+labels: [question]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Ask a question about sitespeed.io
+  - type: textarea
+    id: your-question
+    attributes:
+      label: Your question
+      description: Please double check that this question hasn't already answered in the [documentation](https://www.sitespeed.io/documentation/sitespeed.io/) (use the `Search`) or [old GitHub issues](https://github.com/sitespeedio/sitespeed.io/issues?q=is%3Aissue+is%3Aclosed). You can also ask questions in the [sitespeed.io Slack channel](https://join.slack.com/t/sitespeedio/shared_invite/zt-296jzr7qs-d6DId2KpEnMPJSQ8_R~WFw). And if your question is more like a bug, please [use the bug report form](https://github.com/sitespeedio/sitespeed.io/issues/new?assignees=&labels=bug&template=BUG_REPORT.yml)
+    validations:
+      required: true
@@ -0,0 +1,13 @@
+{
+  "budget": {
+    "thirdParty": {
+      "requests": 0
+    },
+    "score": {
+      "bestpractice": 100,
+      "privacy": 100,
+      "performance": 98
+    }
+  }
+}
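A budget file like the one above makes a sitespeed.io run fail when a threshold is broken. As a hedged illustration only (not part of this commit): sitespeed.io's performance-budget documentation also describes a `timings` section, so a file of this shape could be extended with timing thresholds. The metric names follow that documentation and the millisecond values here are illustrative assumptions:

```json
{
  "budget": {
    "timings": {
      "firstContentfulPaint": 2000,
      "largestContentfulPaint": 2500
    },
    "thirdParty": {
      "requests": 0
    },
    "score": {
      "performance": 98
    }
  }
}
```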
@@ -0,0 +1,29 @@
+name: Build autobuild container that runs tests on dashboard.sitespeed.io
+on:
+  push:
+    branches:
+      - main
+jobs:
+  docker:
+    runs-on: ubuntu-latest
+    steps:
+      -
+        name: Set up QEMU
+        uses: docker/setup-qemu-action@v3
+      -
+        name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+      -
+        name: Login to DockerHub
+        uses: docker/login-action@v3
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      -
+        name: Build and push sitespeed.io
+        uses: docker/build-push-action@v5
+        with:
+          platforms: linux/amd64
+          push: true
+          provenance: false
+          tags: sitespeedio/sitespeed.io-autobuild:main
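One detail worth noting in the workflow above: there is no `actions/checkout` step and `docker/build-push-action` is given no `context`, so the action falls back to its default Git context (it clones the repository itself). A hedged sketch of the equivalent explicit form, in case you prefer building from the checked-out workspace (step names here are illustrative):

```yaml
steps:
  - name: Checkout
    uses: actions/checkout@v4
  - name: Build and push sitespeed.io
    uses: docker/build-push-action@v5
    with:
      context: .          # build from the checked-out workspace instead of the default Git context
      platforms: linux/amd64
      push: true
      tags: sitespeedio/sitespeed.io-autobuild:main
```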
@@ -0,0 +1,72 @@
+name: Build Docker containers on new tag
+on:
+  push:
+    tags:
+      - 'v*.*.*'
+jobs:
+  docker:
+    runs-on: ubuntu-latest
+    steps:
+      -
+        name: Checkout
+        uses: actions/checkout@v4
+      -
+        name: Set up QEMU
+        uses: docker/setup-qemu-action@v3
+      -
+        name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@v3
+      -
+        name: Login to DockerHub
+        uses: docker/login-action@v3
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+      -
+        name: Get the tag
+        id: tag
+        uses: dawidd6/action-get-tag@v1
+        with:
+          strip_v: true
+      -
+        name: Build and push sitespeed.io
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          platforms: linux/amd64,linux/arm64
+          push: true
+          provenance: false
+          tags: sitespeedio/sitespeed.io:${{steps.tag.outputs.tag}},sitespeedio/sitespeed.io:latest
+      -
+        name: Build and push sitespeed.io+1
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          platforms: linux/amd64,linux/arm64
+          file: ./docker/Dockerfile-plus1
+          build-args: version=${{steps.tag.outputs.tag}}
+          push: true
+          provenance: false
+          tags: sitespeedio/sitespeed.io:${{steps.tag.outputs.tag}}-plus1
+      -
+        name: Build and push sitespeed.io+wpt
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          platforms: linux/amd64,linux/arm64
+          file: ./docker/Dockerfile-webpagetest
+          build-args: version=${{steps.tag.outputs.tag}}
+          push: true
+          provenance: false
+          tags: sitespeedio/sitespeed.io:${{steps.tag.outputs.tag}}-webpagetest
+      -
+        name: Build and push sitespeed.io-slim
+        uses: docker/build-push-action@v5
+        with:
+          context: .
+          platforms: linux/amd64,linux/arm64
+          file: ./Dockerfile-slim
+          build-args: version=${{steps.tag.outputs.tag}}
+          push: true
+          provenance: false
+          tags: sitespeedio/sitespeed.io:${{steps.tag.outputs.tag}}-slim
@@ -0,0 +1,22 @@
+name: Test CRUX
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    branches:
+      - main
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Use Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: '20.x'
+      - name: Install sitespeed.io
+        run: npm ci
+      - name: Run tests with CruX
+        run: bin/sitespeed.js -b chrome -n 1 --crux.key ${{ secrets.CRUX_KEY }} https://en.wikipedia.org/wiki/Main_Page --plugins.remove browsertime
@@ -0,0 +1,28 @@
+name: Docker security scan
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+jobs:
+  build:
+    name: Build
+    runs-on: ubuntu-20.04
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v4
+
+      - name: Build an image from Dockerfile
+        run: |
+          docker buildx install
+          docker buildx build --load --platform linux/amd64 -t docker.io/sitespeedio/sitespeed.io:${{ github.sha }} .
+
+      - name: Run Trivy vulnerability scanner
+        uses: aquasecurity/trivy-action@master
+        with:
+          image-ref: 'docker.io/sitespeedio/sitespeed.io:${{ github.sha }}'
+          format: 'table'
+          exit-code: '1'
+          ignore-unfixed: true
+          vuln-type: 'os,library'
+          severity: 'CRITICAL'
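The scan above gates the build only on CRITICAL vulnerabilities. Per the trivy-action README, the `severity` input takes a comma-separated list, so a stricter variant (a hedged sketch, not part of this commit) would also fail on HIGH findings:

```yaml
- name: Run Trivy vulnerability scanner
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: 'docker.io/sitespeedio/sitespeed.io:${{ github.sha }}'
    format: 'table'
    exit-code: '1'
    ignore-unfixed: true
    vuln-type: 'os,library'
    severity: 'CRITICAL,HIGH'
```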
@@ -10,19 +10,29 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v4
       - name: Build Docker containers
         run: |
-          docker build -t sitespeedio/sitespeed.io .
-          docker build -t sitespeedio/sitespeed.io:slim --file Dockerfile-slim .
+          docker buildx install
+          docker buildx build --load --platform linux/amd64 -t sitespeedio/sitespeed.io .
+          docker buildx build --load --platform linux/amd64 -t sitespeedio/sitespeed.io:slim --file Dockerfile-slim .
+      - name: Install local HTTP server
+        run: npm install serve -g
+      - name: Start local HTTP server
+        run: (serve test/data/html/ -l 3001&)
       - name: Run test on default container for Chrome
-        run: docker run --rm sitespeedio/sitespeed.io https://www.sitespeed.io -n 1 -b chrome
+        run: docker run --rm -v "$(pwd)":/sitespeed.io --network=host sitespeedio/sitespeed.io http://127.0.0.1:3001 -n 1 -b chrome
       - name: Run test on default container for Firefox
-        run: docker run --rm sitespeedio/sitespeed.io https://www.sitespeed.io -n 1 -b firefox
+        run: docker run --rm -v "$(pwd)":/sitespeed.io --network=host sitespeedio/sitespeed.io http://127.0.0.1:3001 -n 1 -b firefox
+      - name: Run test on default container for Edge
+        run: docker run --rm -v "$(pwd)":/sitespeed.io --network=host sitespeedio/sitespeed.io http://127.0.0.1:3001 -n 1 -b edge
       - name: Run test on slim container
-        run: docker run --rm sitespeedio/sitespeed.io:slim https://www.sitespeed.io -n 1
-      - name: Test WebPageReplay in the default container
-        run: docker run --cap-add=NET_ADMIN --rm -e REPLAY=true -e LATENCY=100 sitespeedio/sitespeed.io https://www.sitespeed.io -n 1 -b chrome
+        run: docker run --rm -v "$(pwd)":/sitespeed.io --network=host sitespeedio/sitespeed.io:slim http://127.0.0.1:3001 -n 1 --browsertime.firefox.preference "devtools.netmonitor.persistlog:true"
+      - name: Test WebPageReplay with Chrome
+        run: docker run --cap-add=NET_ADMIN --rm -v "$(pwd)":/sitespeed.io -e REPLAY=true -e LATENCY=100 sitespeedio/sitespeed.io https://www.sitespeed.io -n 3 -b chrome
+      - name: Test WebPageReplay user journey with Chrome
+        run: docker run --cap-add=NET_ADMIN --rm -v "$(pwd)":/sitespeed.io -e REPLAY=true -e LATENCY=100 sitespeedio/sitespeed.io test/prepostscripts/multiWindows.cjs -n 1 -b chrome --multi
+      - name: Test WebPageReplay with Firefox
+        run: docker run --cap-add=NET_ADMIN --rm -v "$(pwd)":/sitespeed.io --network=host -e REPLAY=true -e LATENCY=100 sitespeedio/sitespeed.io https://www.sitespeed.io -n 3 -b firefox --browsertime.firefox.acceptInsecureCerts true
+      - name: Run Chrome test with config
+        run: docker run --rm -v "$(pwd)":/sitespeed.io --network=host sitespeedio/sitespeed.io http://127.0.0.1:3001 -b chrome --config test/exampleConfig.json
@@ -10,45 +10,70 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v4
       - name: Use Node.js
-        uses: actions/setup-node@v1
+        uses: actions/setup-node@v4
         with:
-          node-version: '12.x'
+          node-version: '20.x'
       - name: Install sitespeed.io
         run: npm ci
       - name: Install Chrome
         run: |
           wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
           sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
           sudo apt-get update
           sudo apt-get --only-upgrade install google-chrome-stable
           google-chrome --version
       - name: Install dependencies
         run: |
           sudo apt-get update
           sudo ACCEPT_EULA=Y apt-get upgrade google-chrome-stable -y
           python -m pip install --upgrade --user pip
           python -m pip install --user scipy
           python -m pip show scipy
       - name: Install Firefox
         uses: browser-actions/setup-firefox@latest
         #with:
         #  firefox-version: '94.0'
       - name: Setup environment
         run: docker-compose -f test/docker-compose.yml up -d
       - name: Browser versions
         run: |
           google-chrome --version
           firefox --version
       - name: Install local HTTP server
         run: npm install serve -g
       - name: Start local HTTP server
         run: (serve test/data/html/ -l 3001&)
       - name: Test old budget
-        run: bin/sitespeed.js -b firefox -n 2 --budget.configPath test/oldBudget.json --summary --xvfb https://www.sitespeed.io/
-      - name: Test new budget file
-        run: bin/sitespeed.js --useHash -n 1 --budget.configPath test/budget.json --xvfb https://www.sitespeed.io/#heybaberia
+        run: bin/sitespeed.js -b firefox -n 2 --budget.configPath test/oldBudget.json --summary --xvfb http://127.0.0.1:3001/simple/
+      - name: Test new budget file with junit
+        run: bin/sitespeed.js --useHash -n 1 --budget.configPath test/budget.json --xvfb --budget.output junit http://127.0.0.1:3001/simple/#heybaberia
+      - name: Test new budget file with tap
+        run: bin/sitespeed.js --useHash -n 1 --budget.configPath test/budget.json --xvfb --budget.output tap http://127.0.0.1:3001/simple/#heybaberia
+      - name: Test new budget file with json
+        run: bin/sitespeed.js --useHash -n 1 --budget.configPath test/budget.json --xvfb --budget.output json http://127.0.0.1:3001/simple/#heybaberia
       - name: Use AXE
-        run: bin/sitespeed.js --useAlias start --mobile -n 1 --utc --axe.enable --xvfb https://www.sitespeed.io/
+        run: bin/sitespeed.js --useAlias start --mobile -n 1 --utc --axe.enable --xvfb http://127.0.0.1:3001/simple/
       - name: Use Fireefox with --mobile
-        run: bin/sitespeed.js -b firefox --metrics.list --mobile -n 1 https://www.sitespeed.io/ --sustainable.enable --xvfb
+        run: bin/sitespeed.js -b firefox --metrics.list --mobile -n 1 http://127.0.0.1:3001/simple/ --sustainable.enable --xvfb
       - name: Test --multi
-        run: bin/sitespeed.js --multi -b chrome -n 1 test/prepostscripts/preSample.js https://www.sitespeed.io/documentation/ test/prepostscripts/postSample.js --xvfb
+        run: bin/sitespeed.js --multi -b chrome -n 1 test/prepostscripts/preSample.js http://127.0.0.1:3001/simple/ test/prepostscripts/postSample.js --xvfb --browsertime.cjs
       - name: Test --multi and --tcpdump
-        run: bin/sitespeed.js --multi -n 1 https://www.sitespeed.io/ https://www.sitespeed.io/documentation/ --tcpdump --xvfb
+        run: bin/sitespeed.js --multi -n 1 http://127.0.0.1:3001/simple/ http://127.0.0.1:3001/dimple/ --tcpdump --xvfb --browsertime.cjs
       - name: Test --multi with one file
-        run: bin/sitespeed.js --multi -n 3 test/prepostscripts/multi.js --xvfb
+        run: bin/sitespeed.js --multi -n 3 test/prepostscripts/multi.js --xvfb --browsertime.cjs
       - name: Test setting HTML output pageSummaries
-        run: bin/sitespeed.js https://www.sitespeed.io/ https://www.google.com -v -n 1 --html.pageSummaryMetrics transferSize.css --html.pageSummaryMetrics requests.httpErrors --html.pageSummaryMetrics score.performance --xvfb
+        run: bin/sitespeed.js http://127.0.0.1:3001/simple/ http://127.0.0.1:3001/dimple/ -v -n 1 --html.pageSummaryMetrics transferSize.css --html.pageSummaryMetrics requests.httpErrors --html.pageSummaryMetrics score.performance --xvfb
       - name: Test setting HTML output summary boxes
-        run: bin/sitespeed.js https://www.sitespeed.io/ -v -n 1 --html.summaryBoxes score.performance --html.summaryBoxes timings.firstPaint --xvfb
+        run: bin/sitespeed.js http://127.0.0.1:3001/simple/ -v -n 1 --html.summaryBoxes score.performance --html.summaryBoxes timings.firstPaint --xvfb
       - name: Run test with Graphite
-        run: bin/sitespeed.js https://www.sitespeed.io/ -n 1 --graphite.host 127.0.0.1 --xvfb
+        run: bin/sitespeed.js http://127.0.0.1:3001/simple/ -n 1 --graphite.host 127.0.0.1 --xvfb
       - name: Run test without a CLI
         run: xvfb-run node test/runWithoutCli.js
-      - name: Run tests with CruX
-        run: bin/sitespeed.js -b chrome -n 1 --crux.key ${{ secrets.CRUX_KEY }} --xvfb https://www.sitespeed.io
+      - name: Run test with Influx 1.8
+        run: bin/sitespeed.js http://127.0.0.1:3001/simple/ -n 1 --influxdb.host 127.0.0.1 --xvfb --logToFile
+      - name: Run test with Influx 2.6.1
+        run: bin/sitespeed.js http://127.0.0.1:3001/simple/ -n 1 --influxdb.host 127.0.0.1 --influxdb.port 8087 --influxdb.version 2 --influxdb.organisation sitespeed --influxdb.token sitespeed --xvfb
+      - name: Run Chrome test with config
+        run: node bin/sitespeed.js --config test/exampleConfig.json http://127.0.0.1:3001/simple/ --xvfb
+      - name: Run Chrome test using compare plugin
+        run: node bin/sitespeed.js --compare.id compare --compare.saveBaseline --compare.baselinePath test/ http://127.0.0.1:3001/simple/ --xvfb
@@ -10,14 +10,20 @@ jobs:
   build:
     runs-on: macos-latest
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v4
       - name: Use Node.js
-        uses: actions/setup-node@v1
+        uses: actions/setup-node@v4
         with:
-          node-version: '12.x'
+          node-version: '20.x'
       - name: Install dependencies
         run: |
           sudo safaridriver --enable
           npm ci
+      - name: Install local HTTP server
+        run: npm install serve -g
+      - name: Start local HTTP server
+        run: (serve test/data/html/ -l 3001&)
       - name: Run test
-        run: bin/sitespeed.js -b safari https://www.sitespeed.io/
+        run: bin/sitespeed.js -b safari http://127.0.0.1:3001/
+      - name: Run Safari test with config
+        run: node bin/sitespeed.js -b safari --config test/exampleConfig.json http://127.0.0.1:3001/
@@ -0,0 +1,21 @@
+name: sitespeed.io action example
+on:
+  push:
+    branches:
+      - main
+jobs:
+  run-sitespeed:
+    runs-on: ubuntu-latest
+    name: running sitespeed.io
+    steps:
+      - name: code checkout
+        uses: actions/checkout@v4
+      # Here we build our own container to make sure we test against our latest code
+      # but YOU can just used the latest version by specifying
+      # sitespeedio/sitespeed.io:VERSION
+      - name: Build Docker containers
+        run: |
+          docker buildx install
+          docker buildx build --load --platform linux/amd64 -t sitespeedio/sitespeed.io .
+      - name: running sitespeed.io container with arguments and optional Docker options
+        run: docker run -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io https://www.sitespeed.io --budget.configPath .github/budget.json -n 1
@@ -11,18 +11,21 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        node-version: [10.x, 12.x, 14.x]
+        node-version: [18.x, 20.x]
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v4
       - name: Use Node.js ${{ matrix.node-version }}
-        uses: actions/setup-node@v1
+        uses: actions/setup-node@v4
         with:
           node-version: ${{ matrix.node-version }}
-      - name: Install dependencies
+      - name: Install dependencies and Chrome
         run: |
           npm ci
           wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
           sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
           sudo apt-get update
-          sudo ACCEPT_EULA=Y apt-get upgrade google-chrome-stable -y
+          sudo apt-get --only-upgrade install google-chrome-stable
           google-chrome --version
       - name: Browser versions
         run: |
           google-chrome --version
@@ -10,11 +10,11 @@ jobs:
   build:
     runs-on: windows-latest
     steps:
-      - uses: actions/checkout@v2
+      - uses: actions/checkout@v4
       - name: Use Node.js
-        uses: actions/setup-node@v1
+        uses: actions/setup-node@v4
         with:
-          node-version: '12.x'
+          node-version: '20.x'
       - name: Install sitespeed.io
         run: npm ci
         env:
@@ -24,6 +24,12 @@ jobs:
       - name: Install dependencies
         run: choco install microsoft-edge --force
       - name: Run Edge test
-        run: node bin/sitespeed.js -b edge https://www.sitespeed.io/
+        run: node bin/sitespeed.js -b edge https://www.sitespeed.io/
+        shell: cmd
 
+      - name: Run Edge test with scripting
+        run: node bin/sitespeed.js -b edge --multi test/prepostscripts/multiWindows.cjs -n 1
+        shell: cmd
+      - name: Run Edge test with config
+        run: node bin/sitespeed.js -b edge --config test/exampleConfig.json https://www.sitespeed.io/
+        shell: cmd
@@ -0,0 +1,35 @@
+name: Example to run sitespeed.io on Windows
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    branches:
+      - main
+jobs:
+  build:
+    runs-on: windows-latest
+    steps:
+      - name: Use Node.js
+        uses: actions/setup-node@v4
+        with:
+          node-version: '20.x'
+      - name: Install sitespeed.io
+        run: npm install sitespeed.io -g
+        shell: bash
+      - name: Install dependencies
+        run: |
+          choco install ffmpeg
+          choco outdated
+          choco install python
+          choco install googlechrome
+          python -m pip install --upgrade --user pip
+          python -m pip install --upgrade --user setuptools
+          python -m pip install --user pyssim OpenCV-Python Numpy scipy
+          python -m pip --version
+          python -m pip show Pillow
+          python -m pip show pyssim
+        shell: cmd
+      - name: Example running test on Windows
+        run: sitespeed.io -n 1 --video --visualMetrics --viewPort 1024x768 https://www.sitespeed.io/
+        shell: bash
@@ -1,3 +1,4 @@
 docker/*
 docs/*
 test/*
+sitespeed-result/*

CHANGELOG.md (1107 changed lines) — file diff suppressed because it is too large.
@@ -45,3 +45,4 @@ Many many many thanks to:
 * [Devrim Tufan](https://github.com/tufandevrim)
 * [Keith Cirkel](https://github.com/keithamus)
 * [Jonathan Lee](https://github.com/beenanner)
+* [Pavel Bairov](https://github.com/Amerousful)

Dockerfile (13 changed lines)
@@ -1,9 +1,11 @@
-FROM sitespeedio/webbrowsers:chrome-90.0-firefox-88.0-edge-89.0-dev
+FROM sitespeedio/webbrowsers:chrome-120.0-firefox-121.0-edge-120.0
+
+ARG TARGETPLATFORM=linux/amd64
 
 ENV SITESPEED_IO_BROWSERTIME__XVFB true
 ENV SITESPEED_IO_BROWSERTIME__DOCKER true
 
-COPY docker/webpagereplay/wpr /usr/local/bin/
+COPY docker/webpagereplay/$TARGETPLATFORM/wpr /usr/local/bin/
 COPY docker/webpagereplay/wpr_cert.pem /webpagereplay/certs/
 COPY docker/webpagereplay/wpr_key.pem /webpagereplay/certs/
 COPY docker/webpagereplay/deterministic.js /webpagereplay/scripts/deterministic.js
@@ -24,8 +26,9 @@ RUN wpr installroot --https_cert_file /webpagereplay/certs/wpr_cert.pem --https_
 RUN mkdir -p /usr/src/app
 WORKDIR /usr/src/app
 
-COPY package.* /usr/src/app/
-RUN npm install --production
+COPY package.json /usr/src/app/
+COPY npm-shrinkwrap.json /usr/src/app/
+RUN npm install --production && npm cache clean --force
 COPY . /usr/src/app
 
 COPY docker/scripts/start.sh /start.sh
@@ -41,4 +44,6 @@ RUN echo 'ALL ALL=NOPASSWD: /usr/sbin/tc, /usr/sbin/route, /usr/sbin/ip' > /etc/
 
 ENTRYPOINT ["/start.sh"]
 VOLUME /sitespeed.io
+VOLUME /baseline
 
+WORKDIR /sitespeed.io
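The `ARG TARGETPLATFORM=linux/amd64` added above pairs with the multi-arch buildx workflows earlier in this diff: `docker buildx build --platform ...` sets `TARGETPLATFORM` automatically for each platform it builds, so the `COPY docker/webpagereplay/$TARGETPLATFORM/wpr` line selects the matching binary, while the default keeps plain `docker build` working. A hedged sketch of the same pattern in isolation (base image and paths here are illustrative, not from this commit):

```dockerfile
FROM debian:bookworm-slim
# Populated automatically by `docker buildx build --platform ...`;
# the default value keeps a plain `docker build` working on amd64.
ARG TARGETPLATFORM=linux/amd64
# Picks the per-architecture binary, e.g. bin/linux/amd64/tool or bin/linux/arm64/tool.
COPY bin/$TARGETPLATFORM/tool /usr/local/bin/tool
```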
@@ -1,4 +1,6 @@
-FROM node:14.16.0-buster-slim
+FROM node:20.9.0-bookworm-slim
 
+ARG TARGETPLATFORM=linux/amd64
+
 ENV SITESPEED_IO_BROWSERTIME__DOCKER true
 ENV SITESPEED_IO_BROWSERTIME__VIDEO false
@@ -6,31 +8,20 @@ ENV SITESPEED_IO_BROWSERTIME__BROWSER firefox
 ENV SITESPEED_IO_BROWSERTIME__VISUAL_METRICS false
 ENV SITESPEED_IO_BROWSERTIME__HEADLESS true
 
-ENV FIREFOX_VERSION 88.0
-
-ENV PATH="/usr/local/bin:${PATH}"
-
-RUN buildDeps='wget bzip2' && apt-get update && apt -y install $buildDeps && \
-  # Download and unpack the correct Firefox version
-  wget https://ftp.mozilla.org/pub/firefox/releases/${FIREFOX_VERSION}/linux-x86_64/en-US/firefox-${FIREFOX_VERSION}.tar.bz2 && \
-  tar -xjf firefox-${FIREFOX_VERSION}.tar.bz2 && \
-  rm firefox-${FIREFOX_VERSION}.tar.bz2 && \
-  mv firefox /opt/ && \
-  ln -s /opt/firefox/firefox /usr/local/bin/firefox && \
-  # Install dependencies for Firefox
-  apt-get install -y --no-install-recommends --no-install-suggests libxt6 \
-  `apt-cache depends firefox-esr | awk '/Depends:/{print$2}'` && \
-  # iproute2 = tc
-  apt -y install tcpdump iproute2 ca-certificates sudo --no-install-recommends --no-install-suggests && \
+RUN echo "deb http://deb.debian.org/debian/ unstable main contrib non-free" >> /etc/apt/sources.list.d/debian.list && \
+  apt-get update && \
+  apt-get install -y --no-install-recommends firefox tcpdump iproute2 ca-certificates sudo --no-install-recommends --no-install-suggests && \
   # Cleanup
-  apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $toolDeps \
+  apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
   && rm -rf /var/lib/apt/lists/* /tmp/*
 
 # Install sitespeed.io
 RUN mkdir -p /usr/src/app
-WORKDIR /usr/src/app
 COPY . /usr/src/app
-RUN CHROMEDRIVER_SKIP_DOWNLOAD=true EGDEDRIVER_SKIP_DOWNLOAD=true npm install --production
+RUN CHROMEDRIVER_SKIP_DOWNLOAD=true EGDEDRIVER_SKIP_DOWNLOAD=true npm install --production && npm cache clean --force && npm uninstall npm -g
+WORKDIR /usr/src/app
 COPY docker/scripts/start-slim.sh /start.sh

HELP.md (2 changed lines)
@@ -2,7 +2,7 @@
 We want to make sitespeed.io one of the best web performance tool in the world and we hope you can help us!
 
 ## Developers
-We love to have more people involved in improving sitespeed.io. We are constantly working on adding more documentation and trying to write more information in the issues so its easier to help out. If there's an [issue](https://github.com/sitespeedio/sitespeed.io/issues) that you want to take on, ping the the issue and we can help you get started. You can also [join our Slack channel](https://sitespeedio.herokuapp.com/) if you need help!
+We love to have more people involved in improving sitespeed.io. We are constantly working on adding more documentation and trying to write more information in the issues so its easier to help out. If there's an [issue](https://github.com/sitespeedio/sitespeed.io/issues) that you want to take on, ping the the issue and we can help you get started. You can also [join our Slack channel](https://join.slack.com/t/sitespeedio/shared_invite/zt-296jzr7qs-d6DId2KpEnMPJSQ8_R~WFw) if you need help!
 
 ## Designers
 As a designer there's a lot you can do: You can help us improve the HTML result pages. Maybe we should restructure the metrics ? Or could the header/footer look better? You could also have look at [https://www.sitespeed.io](https://www.sitespeed.io/) where we have all the documentation. You can pretty much help us with everything, no one in the core team got design skills :)

LICENSE (2 changed lines)
@@ -1,6 +1,6 @@
 The MIT License (MIT)
 
-Copyright (c) 2012,2013,2014,2015,2016,2017,2018,2019 Peter Hedenskog & Tobias Lidskog
+Copyright (c) 2012-2023 Peter Hedenskog
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

README.md (274 changed lines)
@ -12,154 +12,222 @@
[![Changelog #212][changelog-image]][changelog-url]

[Website](https://www.sitespeed.io/) | [Documentation](https://www.sitespeed.io/documentation/sitespeed.io/) | [Changelog](https://github.com/sitespeedio/sitespeed.io/blob/main/CHANGELOG.md) | [Mastodon](https://fosstodon.org/@sitespeedio)

# Table of Contents

- [Welcome to the Wonderful World of Web Performance](#welcome-to-the-wonderful-world-of-web-performance)
- [What is sitespeed.io?](#what-is-sitespeedio)
- [Why Choose sitespeed.io?](#why-choose-sitespeedio)
- [Dive Into Our Documentation](#dive-into-our-documentation)
- [Installation](#installation)
- [Docker](#docker)
- [NodeJS](#nodejs)
- [Usage](#usage)
- [Basic Usage](#basic-usage)
- [Advanced Configuration](#advanced-configuration)
- [Mobile Performance Testing](#mobile-performance-testing)
- [Examples](#examples)
- [Contributing](#contributing)
- [Reporting Issues](#reporting-issues)
- [Community and Support](#community-and-support)
- [License](#license)

# Welcome to the wonderful world of web performance!
Welcome to `sitespeed.io`, the comprehensive web performance tool designed for everyone passionate about web speed. Whether you're a developer, a site owner, or just someone curious about website performance, `sitespeed.io` offers a powerful yet user-friendly way to analyze and optimize your website.

## What is sitespeed.io?

`sitespeed.io` is more than just a tool; it's a complete solution for measuring, monitoring, and improving your website's performance. Built with simplicity and efficiency in mind, it enables you to:

- **Test Websites Using Real Browsers**: Simulate real user interactions and conditions to get accurate performance data.
- **Speed Optimization Feedback**: Get detailed insights into your website's construction and discover opportunities for enhancing speed.
- **Track Performance Over Time**: Monitor changes and trends in your website's performance to stay ahead of potential issues.

Use cases for `sitespeed.io`:

- **Web performance audit**: Run performance tests from your terminal.
- **Continuous Integration**: Detect web performance regressions early in the development cycle.
- **Production Monitoring**: Monitor performance in production and get alerted on regressions.

## Why Choose sitespeed.io?

- **Open Source and Community-Driven**: Built and maintained by a community, ensuring continuous improvement and innovation.
- **Versatile and Extensible**: Whether you're running a simple blog or a complex e-commerce site, `sitespeed.io` adapts to your needs.
- **Seamless Integration**: Easily incorporate `sitespeed.io` into your development workflow, continuous integration systems, and monitoring setups.

## Dive Into Our Documentation

We've put countless hours into our [documentation](https://www.sitespeed.io/documentation/sitespeed.io/) to help you get the most out of `sitespeed.io`. From installation guides to advanced usage scenarios, our documentation is a treasure trove of information and tips.

# Installation

Getting started with `sitespeed.io` is straightforward. You can install it using Docker or NodeJS, depending on your preference and setup. Follow these simple steps to begin optimizing your website's performance.

## Docker

Using Docker is the easiest way to get started with `sitespeed.io`, especially if you don't want to handle dependencies manually. Run the following command to use `sitespeed.io` in a Docker container:

```bash
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io https://www.sitespeed.io/
```

This command pulls the latest sitespeed.io Docker image and runs a test on the sitespeed.io website. The `-v "$(pwd)":/sitespeed.io` part mounts the current directory into the container, allowing you to easily access test results.

## NodeJS

If you prefer installing `sitespeed.io` as an npm package, ensure you have NodeJS installed on your system (you also need Chrome, Firefox, Edge or Safari installed, or Chrome/Firefox on Android). Then, install `sitespeed.io` globally using npm:

```bash
npm i -g sitespeed.io
```

After installation, you can start using sitespeed.io by running:

```bash
sitespeed.io https://www.example.com
```

Replace https://www.example.com with the URL you wish to test. Note that using NodeJS might require additional dependencies like FFmpeg and Python. Detailed installation instructions for these dependencies can be found [here](https://www.sitespeed.io/documentation/sitespeed.io/installation/).

Choose the method that best suits your environment and get ready to dive into web performance optimization with sitespeed.io!
# Usage

`sitespeed.io` is tailored to be user-friendly, making web performance testing accessible regardless of your technical expertise. Here's a straightforward guide to help you begin your web performance optimization journey.

## Basic Usage

To understand what sitespeed.io does for you, it helps to know a few key concepts first:

- Sitespeed.io is built upon a couple of other Open Source tools in the sitespeed.io suite.
- [Browsertime](https://github.com/sitespeedio/browsertime) is the tool that drives the browser and collects metrics.
- [The Coach](https://github.com/sitespeedio/coach) knows how to build fast websites; it analyses your page and gives you feedback on what you should change.
- Visual Metrics are metrics collected from a video recording of the browser screen.
- Everything in sitespeed.io is a [plugin](https://www.sitespeed.io/documentation/sitespeed.io/plugins/), and plugins communicate by passing messages on a queue.

When you as a user choose to test a URL, this is what happens at a high level:

1. sitespeed.io starts and initialises all configured plugins.
2. The URL is passed around the plugins through the queue.
    1. Browsertime gets the URL and opens the browser.
    2. It starts to record a video of the browser screen.
    3. The browser accesses the URL.
    4. When the page has finished loading, Browsertime takes a screenshot of it.
    5. It then runs some JavaScript to analyse the page (using the Coach and Browsertime scripts).
    6. It stops the video and closes the browser.
    7. The video is analysed to get Visual Metrics like First Visual Change and Speed Index.
    8. Browsertime passes all metrics and data on the queue so other plugins can use them.
3. The HTML/Graphite/InfluxDB plugins collect the metrics from the queue.
4. When all URLs are tested, sitespeed.io sends a message telling plugins to summarise the metrics and then render them.
5. Plugins pick up the render message and the HTML plugin writes the HTML to disk.
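Because everything is a plugin communicating over the queue, you can hook into this flow yourself. The sketch below is illustrative only — the plugin name and message type are assumptions, and the exact lifecycle hooks should be verified against the plugin documentation — but it shows the general `open`/`processMessage`/`close` shape:

```javascript
// Hypothetical minimal plugin: collects the URL from every
// Browsertime page summary that passes by on the queue.
// The hook names (open/processMessage/close) follow the classic
// sitespeed.io plugin lifecycle; treat this as a sketch, not the exact API.
const collectedUrls = [];

const demoPlugin = {
  name: 'demo',
  open(context, options) {
    // Called once when sitespeed.io starts and initialises plugins.
  },
  processMessage(message, queue) {
    // Every plugin sees every message; act only on the types you care about.
    if (message.type === 'browsertime.pageSummary') {
      collectedUrls.push(message.url);
    }
  },
  close(options, errors) {
    // Called once after the summarise/render phase.
  }
};

// Simulate the queue dispatching one message to the plugin.
demoPlugin.open({}, {});
demoPlugin.processMessage(
  { type: 'browsertime.pageSummary', url: 'https://www.example.com' },
  undefined
);
demoPlugin.close({}, []);
console.log(collectedUrls[0]);
```

A real plugin would publish its own messages back on the queue instead of collecting into an array.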
To start testing your website, simply run `sitespeed.io` with the URL of the site you want to analyze. For example:

```bash
sitespeed.io https://www.example.com --browser chrome -n 5
```

This command tests https://www.example.com using Chrome and performs 5 iterations of the test. This approach helps in obtaining a more accurate median performance measurement by testing the site multiple times.

## Advanced Configuration

sitespeed.io offers a wide range of configuration options to tailor the tests to your specific needs. You can specify different browsers, adjust connectivity settings, and much more. For a comprehensive list of all available options, visit our [configuration documentation](https://www.sitespeed.io/documentation/sitespeed.io/configuration/).
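For repeatable runs it can be handy to keep options in a JSON config file instead of a long command line. The keys below are a sketch of commonly used settings; verify the exact option names against the configuration documentation before relying on them:

```json
{
  "browsertime": {
    "iterations": 5,
    "browser": "firefox"
  },
  "outputFolder": "sitespeed-result"
}
```

You can then point sitespeed.io at the file with `--config config.json`; command-line flags still apply on top.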
You can also clone the repo and test the latest changes:

```bash
git clone https://github.com/sitespeedio/sitespeed.io.git
cd sitespeed.io
npm install
bin/sitespeed.js --help
bin/sitespeed.js https://www.sitespeed.io/
```

Additionally, for a quick overview of all command-line options, you can run:

```bash
sitespeed.io --help
```

This command displays all the available flags and settings you can use with sitespeed.io, helping you fine-tune your performance testing to fit your unique requirements.

Using sitespeed.io you can:

* Test your web site against Web Performance best practices using the [Coach](https://github.com/sitespeedio/coach).
* Collect Navigation Timing API, User Timing API and Visual Metrics from Firefox/Chrome using [Browsertime](https://github.com/sitespeedio/browsertime).
* Run your custom-made JavaScript and collect whichever metrics you need.
* Test one or multiple pages, across one or many runs, to get more accurate metrics.
* Create HTML result pages or store the metrics in Graphite.
* Write your own plugins that can do whatever tests you want or need.

Whether you're running a quick check or a detailed analysis, sitespeed.io provides the flexibility and power you need to deeply understand and improve your website's performance.

See all the latest changes in the [Changelog](https://github.com/sitespeedio/sitespeed.io/blob/main/CHANGELOG.md).
## Mobile Performance Testing

In today's mobile-first world, ensuring your website performs optimally on smartphones and tablets is crucial. With `sitespeed.io`, you can simulate and analyze the performance of your website on mobile devices, helping you understand and improve the user experience for mobile audiences.

### Why Test on Mobile?

- **User Experience**: A significant portion of web traffic comes from mobile devices. Testing on mobile ensures your site is optimized for these users.
- **Search Engine Ranking**: Search engines like Google prioritize mobile-friendly websites in their search results.
- **Performance Insights**: Mobile devices have different performance characteristics than desktops, such as CPU limitations and network variability.

### How sitespeed.io Helps

- **Real Browser Testing**: Simulate mobile browsers to get accurate performance data as experienced by real users.
- **Device-Specific Metrics**: Gain insights into how your site performs on different mobile devices and networks.
- **Responsive Design Analysis**: Test how well your site adapts to various screen sizes and orientations.

### Getting Started

To start testing your website's mobile performance, you need to set up your mobile phone for testing. We have [documentation for setting up your Android phone](https://www.sitespeed.io/documentation/sitespeed.io/mobile-phones/#test-on-android) and [iOS](https://www.sitespeed.io/documentation/sitespeed.io/mobile-phones/#test-on-ios).
When your setup is ready, you can run tests on your Android phone:

```bash
sitespeed.io https://www.example.com --android
```

## Test using WebPageReplay

We have a special Docker container that comes with [WebPageReplay](https://github.com/catapult-project/catapult/blob/main/web_page_replay_go/README.md) installed. This is an early alpha release, but we think you should try it out.

WebPageReplay lets you replay your page locally (getting rid of server latency, etc.) and makes it easier to get stable metrics and find front-end regressions.

It works like this:

1. WebPageReplay is started in record mode.
2. Browsertime accesses the URLs you choose one time (so they are recorded).
3. WebPageReplay is closed down.
4. WebPageReplay is started in replay mode.
5. Sitespeed.io (using Browsertime) tests the URLs as many times as you choose.
6. WebPageReplay in replay mode is closed down.

You can change latency by setting a Docker environment variable. Use REPLAY to turn on the replay functionality.

The default browser is Chrome:

```bash
docker run --cap-add=NET_ADMIN --rm -v "$(pwd)":/sitespeed.io -e REPLAY=true -e LATENCY=100 sitespeedio/sitespeed.io -n 5 -b chrome https://en.wikipedia.org/wiki/Barack_Obama
```

Use Firefox:

```bash
docker run --cap-add=NET_ADMIN --rm -v "$(pwd)":/sitespeed.io -e REPLAY=true -e LATENCY=100 sitespeedio/sitespeed.io -n 11 -b firefox https://en.wikipedia.org/wiki/Barack_Obama
```

## Examples

`sitespeed.io` provides insightful HTML reports that help you visualize and understand your website's performance. Here are some examples to illustrate what you can achieve:
### Summary Report

Here's an example of a summary report in HTML, offering a comprehensive overview of your site's performance metrics:

![Summary report](https://raw.githubusercontent.com/sitespeedio/sitespeed.io/main/docs/img/start-readme.jpg)

This report includes key performance indicators like load times, page size, and request counts, giving you a quick snapshot of your site's overall health.

### Individual Page Report

For more detailed analysis, here's an individual page report:

![Individual page report](https://raw.githubusercontent.com/sitespeedio/sitespeed.io/main/docs/img/start-url-readme.jpg)

This report dives deeper into a single page's performance, providing metrics on aspects like scripting, rendering, and network activity, crucial for pinpointing specific areas of improvement.

### Performance Monitoring Dashboard

To monitor your website's performance over time, check out our live setup at [dashboard.sitespeed.io](https://dashboard.sitespeed.io/), which integrates `sitespeed.io` with Graphite and Grafana.

#### Metrics in Graphite/Grafana

Collected metrics from a URL visualized in Graphite/Grafana:

![Graphite/Grafana metrics](https://raw.githubusercontent.com/sitespeedio/sitespeed.io/main/docs/img/grafana-readme.jpg)

This setup allows for continuous tracking of performance, helping you identify trends and potential issues.

#### Trends in Grafana

Trends over time in Grafana provide a long-term view of your site's performance:

![Grafana trends](https://raw.githubusercontent.com/sitespeedio/sitespeed.io/main/docs/img/grafana-trends-readme.jpg)

With these insights, you can make informed decisions about optimizations and track the impact of changes you make.

### Video Performance Analysis

For visual feedback, `sitespeed.io` can generate videos, making it easier to see how your site loads in real time. Here's a sample video (the gif is optimized; the quality is much better in real life):

![Sample video](https://raw.githubusercontent.com/sitespeedio/sitespeed.io/main/docs/img/barack.gif)

Video analysis is most easily done using Docker and offers a unique perspective on user experience, highlighting areas that need attention.

# Sponsors

<a href="https://www.macstadium.com"><img src="https://uploads-ssl.webflow.com/5ac3c046c82724970fc60918/5c019d917bba312af7553b49_MacStadium-developerlogo.png"></a>

We have a Mac Mini sponsored by [MacStadium](https://www.macstadium.com), and you too can sponsor us to help keep sitespeed.io running and make sure we have the best test setup possible. Read our [sponsor page](https://github.com/sponsors/soulgalore) for more info.
# Contributing

We welcome contributions from the community! Whether you're fixing a bug, adding a feature, or improving documentation, your help is valuable. Here's how you can contribute:

1. **Create an Issue**: Create an issue and discuss with us how to implement it.
2. **Fork and Clone**: Fork the repository and clone it locally.
3. **Create a Branch**: Create a new branch for your feature or bug fix.
4. **Develop**: Make your changes. Ensure you adhere to the coding standards and write tests if applicable.
5. **Test**: Run tests to ensure everything works as expected.
6. **Submit a Pull Request**: Push your changes to your fork and submit a pull request to the main repository.

Before contributing, please read our [CONTRIBUTING.md](.github/CONTRIBUTING.md) for more detailed information on how to contribute.

# Reporting Issues

Found a bug or have a feature request? Please use [GitHub Issues](https://github.com/sitespeedio/sitespeed.io/issues) to report them. Be sure to check existing issues to avoid duplicates.
# Community and Support

Join our community! Whether you need help, want to share your experience, or discuss potential improvements, there are several ways to get involved:

- **Slack**: Connect with fellow users and the development team on [Slack](https://join.slack.com/t/sitespeedio/shared_invite/zt-296jzr7qs-d6DId2KpEnMPJSQ8_R~WFw).
- **GitHub Issues**: For technical questions, feature requests, and bug reports, use our [GitHub issues](https://github.com/sitespeedio/sitespeed.io/issues).
- **RSS/Changelog**: The latest releases and information can always be found in our [RSS feed](https://github.com/sitespeedio/sitespeed.io/releases.atom) and in our [changelog](https://github.com/sitespeedio/sitespeed.io/blob/main/CHANGELOG.md).
- **Mastodon**: Follow us on Mastodon at [https://fosstodon.org/@sitespeedio](https://fosstodon.org/@sitespeedio).

We're excited to have you in our community and look forward to your contributions and interactions!

# License

[The MIT License (MIT)](LICENSE).

[travis-image]: https://img.shields.io/travis/sitespeedio/sitespeed.io.svg?style=flat-square
[travis-url]: https://travis-ci.org/sitespeedio/sitespeed.io
[stars-url]: https://github.com/sitespeedio/sitespeed.io/stargazers
[stars-image]: https://img.shields.io/github/stars/sitespeedio/sitespeed.io.svg?style=flat-square
[downloads-image]: https://img.shields.io/npm/dt/sitespeed.io.svg?style=flat-square
This roadmap is the plan for the core team; priorities can and will change over time. This will give you a view of our current vision and plan.

### 2024

Let's get the online version of sitespeed.io up and running.
------------------
First, check the official [sitespeed.io documentation](https://www.sitespeed.io/documentation/).

If you require further help or support, check [new and old issues on GitHub](https://github.com/sitespeedio/sitespeed.io/issues) or join the [sitespeed.io Slack](https://join.slack.com/t/sitespeedio/shared_invite/zt-296jzr7qs-d6DId2KpEnMPJSQ8_R~WFw).

**Please note:**
- The sitespeed.io project uses GitHub for tracking bugs and feature requests.
@@ -1,26 +1,29 @@
#!/usr/bin/env node

import { readFileSync } from 'node:fs';

import merge from 'lodash.merge';
import set from 'lodash.set';
import get from 'lodash.get';
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';

import { findUpSync } from 'find-up';
import { BrowsertimeEngine, configureLogging } from 'browsertime';

import { getURLs } from '../lib/cli/util.js';

import { config as browsertimeConfig } from '../lib/plugins/browsertime/index.js';

const iphone6UserAgent =
  'Mozilla/5.0 (iPhone; CPU iPhone OS 6_1_3 like Mac OS X) AppleWebKit/536.26 ' +
  '(KHTML, like Gecko) Version/6.0 Mobile/10B329 Safari/8536.25';

const configPath = findUpSync(['.sitespeed.io.json']);
let config;

try {
  config = configPath ? JSON.parse(readFileSync(configPath)) : {};
} catch (e) {
  if (e instanceof SyntaxError) {
    /* eslint no-console: off */
@@ -33,9 +36,18 @@
  throw e;
}

async function testURLs(engine, urls, isMulti) {
  try {
    await engine.start();

    if (isMulti) {
      const result = await engine.runMultiple(urls);
      for (let errors of result[0].errors) {
        if (errors.length > 0) {
          process.exitCode = 1;
        }
      }
    } else {
      for (let url of urls) {
        const result = await engine.run(url);
        for (let errors of result[0].errors) {
@@ -44,13 +56,15 @@
          }
        }
      }
    }
  } finally {
    engine.stop();
  }
}

async function runBrowsertime() {
  let yargsInstance = yargs(hideBin(process.argv));
  let parsed = yargsInstance
    .env('SITESPEED_IO')
    .require(1, 'urlOrFile')
    .option('browsertime.browser', {
@@ -119,6 +133,23 @@
      describe:
        'Short key to use Android. Will automatically use com.android.chrome for Chrome and stable Firefox. If you want to use another Chrome version, use --chrome.android.package'
    })
    .option('chrome.enableChromeDriverLog', {
      describe: 'Log Chromedriver communication to a log file.',
      type: 'boolean',
      group: 'chrome'
    })
    .option('chrome.enableVerboseChromeDriverLog', {
      describe: 'Log verbose Chromedriver communication to a log file.',
      type: 'boolean',
      group: 'chrome'
    })
    .option('verbose', {
      alias: ['v'],
      describe:
        'Verbose mode prints progress messages to the console. Enter up to three times (-vvv)' +
        ' to increase the level of detail.',
      type: 'count'
    })
    .config(config);

  const defaultConfig = {
@@ -141,7 +172,11 @@
  };

  const btOptions = merge({}, parsed.argv.browsertime, defaultConfig);
  // hack to keep backward compatibility with --android
  if (parsed.argv.android[0] === true) {
    set(btOptions, 'android.enabled', true);
  }
  configureLogging(parsed.argv);

  // We have a special hack in sitespeed.io when you set --mobile
  if (parsed.argv.mobile) {
@@ -151,7 +186,7 @@
    const emulation = get(
      btOptions,
      'chrome.mobileEmulation.deviceName',
      'Moto G4'
    );
    btOptions.chrome.mobileEmulation = {
      deviceName: emulation
@@ -170,12 +205,20 @@
        get(btOptions, 'chrome.android.package', 'com.android.chrome')
      );
    } else if (parsed.argv.browser === 'firefox') {
      set(
        btOptions,
        'firefox.android.package',
        get(btOptions, 'firefox.android.package', 'org.mozilla.firefox')
      );
    }
  }

  const engine = new BrowsertimeEngine(btOptions);
  const urls = parsed.argv.multi ? parsed.argv._ : getURLs(parsed.argv._);

  try {
    await testURLs(engine, urls, parsed.argv.multi);
  } catch (e) {
    console.error('Could not run ' + e);
    process.exit(1);
bin/sitespeed.js

@@ -2,49 +2,177 @@
/*eslint no-console: 0*/

import { writeFileSync } from 'node:fs';
import { execSync } from 'node:child_process';
import { platform } from 'node:os';
import { resolve, basename } from 'node:path';
import { readFileSync } from 'node:fs';

import merge from 'lodash.merge';
import ora from 'ora';

import { parseCommandLine } from '../lib/cli/cli.js';
import { run } from '../lib/sitespeed.js';
import { addTest, waitAndGetResult, get } from '../lib/api/send.js';

async function api(options) {
  const action = options.api.action ?? 'addAndGetResult';

  if (action === 'get' && !options.api.id) {
    console.log('Missing test id --api.id');
    process.exit();
  }

  const hostname = options.api.hostname;
  let apiOptions = options.explicitOptions;
  // Delete the hostname to make sure the server does not end up in
  // a forever loop
  delete apiOptions.api.hostname;

  // Add support for running multi tests
  if (options.multi) {
    const scripting = await readFileSync(
      new URL(resolve(process.cwd(), options._[0]), import.meta.url)
    );
    apiOptions.api.scripting = scripting.toString();
    apiOptions.api.scriptingName = basename(options._[0]);
  }

  if (apiOptions.mobile) {
    apiOptions.api.testType = 'emulatedMobile';
  } else if (apiOptions.android) {
    apiOptions.api.testType = 'android';
  } else if (apiOptions.safari && apiOptions.safari.ios) {
    apiOptions.api.testType = 'ios';
  } else {
    apiOptions.api.testType = 'desktop';
  }

  if (options.config) {
    const config = JSON.parse(
      await readFileSync(
        new URL(resolve(process.cwd(), options.config), import.meta.url)
      )
    );
    apiOptions = merge(options.explicitOptions, config);
    delete apiOptions.config;
  }

  if (action === 'add' || action === 'addAndGetResult') {
    const spinner = ora({
      text: `Send test to ${hostname}`,
      isSilent: options.api.silent
    }).start();

    try {
      const data = await addTest(hostname, apiOptions);
      const testId = JSON.parse(data).id;
      spinner.color = 'yellow';
      spinner.text = `Added test with id ${testId}`;

      if (action === 'add') {
        spinner.succeed(`Added test with id ${testId}`);
        console.log(testId);
        process.exit();
      } else if (action === 'addAndGetResult') {
        const result = await waitAndGetResult(
          testId,
          hostname,
          apiOptions,
          spinner
        );
        if (result.status === 'completed') {
          spinner.succeed(`Got test result with id ${testId}`);
          if (options.api.json) {
            console.log(JSON.stringify(result));
          } else {
            console.log(result.result);
          }
        } else if (result.status === 'failed') {
          spinner.fail('Test failed');
          process.exitCode = 1;
          process.exit();
        }
      }
    } catch (error) {
      spinner.fail(error.message);
      process.exitCode = 1;
      process.exit();
    }
  } else if (action === 'get') {
    try {
      const result = await get(options.api.id, hostname, apiOptions);
      if (options.api.json) {
        console.log(JSON.stringify(result));
      } else {
        console.log(result);
      }
    } catch (error) {
      process.exitCode = 1;
      console.log(error);
    }
  }
}

async function start() {
  let parsed = await parseCommandLine();
  let budgetFailing = false;
  // hack for getting in the unchanged cli options
  parsed.options.explicitOptions = parsed.explicitOptions;
  parsed.options.urls = parsed.urls;
  parsed.options.urlsMetaData = parsed.urlsMetaData;

  let options = parsed.options;

  if (options.api && options.api.hostname) {
    api(options);
  } else {
    try {
      const result = await run(options);

      // This can be used as an option to get hold of where the data is stored
      // for third parties
      if (options.storeResult) {
        if (options.storeResult == 'true') {
          writeFileSync('result.json', JSON.stringify(result));
        } else {
          // Use the name supplied
          writeFileSync(options.storeResult, JSON.stringify(result));
        }
      }

      if ((options.open || options.o) && platform() === 'darwin') {
        execSync('open ' + result.localPath + '/index.html');
      } else if ((options.open || options.o) && platform() === 'linux') {
        execSync('xdg-open ' + result.localPath + '/index.html');
      }

      if (
        parsed.options.budget &&
        Object.keys(result.budgetResult.failing).length > 0
      ) {
        process.exitCode = 1;
        budgetFailing = true;
      }

      if (
        !budgetFailing ||
        (parsed.options.budget && parsed.options.budget.suppressExitCode)
      ) {
        process.exitCode = 0;
      }
      if (result.errors.length > 0) {
        console.log('Errors while running:\n' + result.errors.join('\n'));
        throw new Error('Errors while running:\n' + result.errors.join('\n'));
      }
    } catch (error) {
      process.exitCode = 1;
      console.log(error);
    } finally {
      process.exit();
    }
  }
}

await start();
|
cz-config.js
@@ -1,78 +0,0 @@
module.exports = {
  types: [
    { value: 'feat', name: 'feat: A new feature' },
    { value: 'fix', name: 'fix: A bug fix' },
    { value: 'docs', name: 'docs: Documentation only changes' },
    {
      value: 'style',
      name:
        'style: Changes that do not affect the meaning of the code\n (white-space, formatting, missing semi-colons, etc)'
    },
    {
      value: 'refactor',
      name:
        'refactor: A code change that neither fixes a bug nor adds a feature'
    },
    {
      value: 'perf',
      name: 'perf: A code change that improves performance'
    },
    { value: 'test', name: 'test: Adding missing tests' },
    {
      value: 'chore',
      name:
        'chore: Changes to the build process or auxiliary tools\n and libraries such as documentation generation'
    },
    { value: 'revert', name: 'revert: Revert to a commit' },
    { value: 'WIP', name: 'WIP: Work in progress' }
  ],

  // scopes: [
  //   { name: 'accounts' },
  //   { name: 'admin' },
  //   { name: 'exampleScope' },
  //   { name: 'changeMe' }
  // ],

  allowTicketNumber: false,
  isTicketNumberRequired: false,
  // ticketNumberPrefix: 'TICKET-',
  // ticketNumberRegExp: '\\d{1,5}',

  // it needs to match the value for field type. Eg.: 'fix'
  /*
  scopeOverrides: {
    fix: [
      {name: 'merge'},
      {name: 'style'},
      {name: 'e2eTest'},
      {name: 'unitTest'}
    ]
  },
  */
  // override the messages, defaults are as follows
  messages: {
    type: "Select the type of change that you're committing:",
    // scope: '\nDenote the SCOPE of this change (optional):',
    // used if allowCustomScopes is true
    // customScope: 'Denote the SCOPE of this change:',
    subject: 'Write a SHORT, IMPERATIVE tense description of the change:\n',
    body:
      'Provide a LONGER description of the change (optional). Use "|" to break new line:\n',
    breaking: 'List any BREAKING CHANGES (optional):\n',
    footer:
      'List any ISSUES CLOSED by this change (optional). E.g.: #31, #34:\n',
    confirmCommit: 'Are you sure you want to proceed with the commit above?'
  },

  // allowCustomScopes: true,
  allowBreakingChanges: ['feat', 'fix'],
  // skip any questions you want
  skipQuestions: ['body'],

  // limit subject length
  subjectLimit: 100
  // breaklineChar: '|', // It is supported for fields body and footer.
  // footerPrefix : 'ISSUES CLOSED:'
  // askForBreakingChangeFirst : true, // default is false
};
@@ -3,16 +3,17 @@ FROM sitespeedio/sitespeed.io:${version}

ENV SITESPEED_IO_BROWSERTIME__XVFB true
ENV SITESPEED_IO_BROWSERTIME__DOCKER true
ENV SITESPEED_IO_PLUGINS__ADD /lighthouse,/gpsi
ENV SITESPEED_IO_PLUGINS__ADD /lighthouse/index.js,/gpsi/lib/index.js

RUN sudo apt-get update && sudo apt-get install git -y

RUN node --version
RUN npm --version
WORKDIR /gpsi
RUN git clone https://github.com/sitespeedio/plugin-gpsi.git .
RUN npm install --production

WORKDIR /lighthouse
RUN git clone https://github.com/sitespeedio/plugin-lighthouse.git .
RUN git clone https://github.com/sitespeedio/plugin-lighthouse.git .
RUN npm install --production

VOLUME /sitespeed.io
@@ -1,7 +1,7 @@
version: '3'
services:
  grafana:
    image: grafana/grafana:7.5.3
    image: grafana/grafana:10.0.2
    hostname: grafana
    depends_on:
      - graphite
@@ -10,25 +10,30 @@ services:
    ports:
      - "3000:3000"
    environment:
      # See https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/
      - GF_SECURITY_ADMIN_PASSWORD=hdeAga76VG6ga7plZ1
      - GF_SECURITY_ADMIN_USER=sitespeedio
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_USERS_ALLOW_SIGN_UP=false
      - GF_USERS_ALLOW_ORG_CREATE=false
      - GF_INSTALL_PLUGINS=grafana-piechart-panel
      - GF_INSTALL_PLUGINS=grafana-piechart-panel,marcusolsson-json-datasource,marcusolsson-dynamictext-panel
      - GF_DASHBOARDS_DEFAULT_HOME_DASHBOARD_PATH=/var/lib/grafana/dashboards/Welcome.json
    volumes:
      - grafana:/var/lib/grafana
      - ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
      - ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
      - ./grafana/provisioning/dashboards:/var/lib/grafana/dashboards
    restart: always
  graphite:
    image: sitespeedio/graphite:1.1.7-9
    image: sitespeedio/graphite:1.1.10-3
    hostname: graphite
    ports:
      - "2003:2003"
      - "8080:80"
    restart: always
    volumes:
      # In production you should configure/map these to your container
      # Make sure whisper and graphite.db/grafana.db lives outside your containerr
      # In production, you should configure/map these to your container
      # Make sure whisper and graphite.db/grafana.db lives outside your container
      # https://www.sitespeed.io/documentation/sitespeed.io/graphite/#graphite-for-production-important
      - whisper:/opt/graphite/storage/whisper
      # Download an empty graphite.db from https://github.com/sitespeedio/sitespeed.io/tree/main/docker/graphite
@@ -39,13 +44,6 @@ services:
      # - /absolute/path/to/graphite/conf/storage-schemas.conf:/opt/graphite/conf/storage-schemas.conf
      # - /absolute/path/to/graphite/conf/storage-aggregation.conf:/opt/graphite/conf/storage-aggregation.conf
      # - /absolute/path/to/graphite/conf/carbon.conf:/opt/graphite/conf/carbon.conf
  grafana-setup:
    image: sitespeedio/grafana-bootstrap:17.0.0
    links:
      - grafana
    environment:
      - GF_PASSWORD=hdeAga76VG6ga7plZ1
      - GF_USER=sitespeedio
volumes:
  grafana:
  whisper:
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -0,0 +1,166 @@
{
  "__inputs": [
    {
      "name": "graphite",
      "label": "graphite",
      "description": "",
      "type": "datasource",
      "pluginId": "graphite",
      "pluginName": "Graphite"
    }
  ],
  "__requires": [
    {
      "type": "panel",
      "id": "dashlist",
      "name": "Dashboard list",
      "version": ""
    },
    {
      "type": "grafana",
      "id": "grafana",
      "name": "Grafana",
      "version": "7.0.0"
    },
    {
      "type": "datasource",
      "id": "graphite",
      "name": "Graphite",
      "version": "1.0.0"
    },
    {
      "type": "panel",
      "id": "text",
      "name": "Text",
      "version": ""
    }
  ],
  "annotations": {
    "list": [
      {
        "builtIn": 1,
        "datasource": "-- Grafana --",
        "enable": true,
        "hide": true,
        "iconColor": "rgba(0, 211, 255, 1)",
        "name": "Annotations & Alerts",
        "type": "dashboard"
      }
    ]
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": null,
  "links": [],
  "panels": [
    {
      "content": "<h1>Welcome to sitespeed.io</h1>\n<p>\n<a href=\"https://www.sitespeed.io/\">sitespeed.io</a> | \n<a href=\"https://www.sitespeed.io/documentation/sitespeed.io/\">Documentation</a> | \n<a href=\"https://www.sitespeed.io/documentation/sitespeed.io/performance-dashboard/\">Setup your own dashboard</a> | \n<a href=\"https://github.com/sitespeedio/sitespeed.io/blob/master/CHANGELOG.md\">Changelog</a> | \n<a href=\"https://twitter.com/SiteSpeedio\">Twitter</a> | \n<a href=\"https://opencollective.com/sitespeedio\">Open Collective</a>\n</p>\n\n<p>\nSitespeed.io is a <a href=\"https://www.sitespeed.io/documentation/\">set of Open Source tools</a> that makes it easy to monitor and measure the performance of your web site.\n</p>\n<p>If you don't know what you can do with sitespeed.io, you should look at the <a href=\"https://dashboard.sitespeed.io/d/000000059/page-timing-metrics\">page timing dashboard</a> or checkout what the <a href=\"https://examples.sitespeed.io/13.x/2020-05-20-08-47-56/index.html\">HTML result</a> looks like.\n",
      "datasource": "graphite",
      "description": "",
      "fieldConfig": {
        "defaults": {
          "custom": {
            "align": null
          },
          "mappings": [],
          "thresholds": {
            "mode": "absolute",
            "steps": [
              {
                "color": "green",
                "value": null
              },
              {
                "color": "red",
                "value": 80
              }
            ]
          }
        },
        "overrides": []
      },
      "gridPos": {
        "h": 18,
        "w": 13,
        "x": 0,
        "y": 0
      },
      "id": 4,
      "mode": "html",
      "pluginVersion": "7.0.0",
      "targets": [
        {
          "refId": "A",
          "target": ""
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "",
      "type": "text"
    },
    {
      "datasource": "graphite",
      "description": "",
      "fieldConfig": {
        "defaults": {
          "custom": {}
        },
        "overrides": []
      },
      "folderId": null,
      "gridPos": {
        "h": 18,
        "w": 11,
        "x": 13,
        "y": 0
      },
      "headings": true,
      "id": 2,
      "limit": 10,
      "query": "",
      "recent": false,
      "search": true,
      "starred": false,
      "tags": [],
      "targets": [
        {
          "refId": "A",
          "target": ""
        }
      ],
      "timeFrom": null,
      "timeShift": null,
      "title": "Installed dashboards",
      "type": "dashlist"
    }
  ],
  "schemaVersion": 25,
  "style": "dark",
  "tags": [],
  "templating": {
    "list": []
  },
  "time": {
    "from": "now-6h",
    "to": "now"
  },
  "timepicker": {
    "refresh_intervals": [
      "10s",
      "30s",
      "1m",
      "5m",
      "15m",
      "30m",
      "1h",
      "2h",
      "1d"
    ]
  },
  "timezone": "",
  "title": "Welcome to sitespeed.io",
  "uid": "3zStduRGk",
  "version": 6
}
File diff suppressed because it is too large
@@ -0,0 +1,24 @@
apiVersion: 1

providers:
  # <string> a unique provider name. Required
  - name: 'sitespeed.io'
    # <int> Org id. Default to 1
    orgId: 1
    # <string> name of the dashboard folder.
    folder: 'sitespeed.io'
    # <string> folder UID. will be automatically generated if not specified
    folderUid: ''
    # <string> provider type. Default to 'file'
    type: file
    # <bool> disable dashboard deletion
    disableDeletion: false
    # <int> how often Grafana will scan for changed dashboards
    updateIntervalSeconds: 10
    # <bool> allow updating provisioned dashboards from the UI
    allowUiUpdates: true
    options:
      # <string, required> path to dashboard files on disk. Required when using the 'file' type
      path: /var/lib/grafana/dashboards
      # <bool> use folder names from filesystem to create folders in Grafana
      foldersFromFilesStructure: true
@@ -0,0 +1,51 @@
# Configuration file version
apiVersion: 1

# List of data sources to delete from the database.
deleteDatasources:
  - name: graphite
    orgId: 1

# List of data sources to insert/update depending on what's
# available in the database.
datasources:
  # <string, required> Sets the name you use to refer to
  # the data source in panels and queries.
  - name: graphite
    # <string, required> Sets the data source type.
    type: graphite
    # <string, required> Sets the access mode, either
    # proxy or direct (Server or Browser in the UI).
    # Some data sources are incompatible with any setting
    # but proxy (Server).
    access: proxy
    # <int> Sets the organization id. Defaults to orgId 1.
    orgId: 1
    # <string> Sets a custom UID to reference this
    # data source in other parts of the configuration.
    # If not specified, Grafana generates one.
    uid:
    # <string> Sets the data source's URL, including the
    # port.
    url: http://graphite:80
    # <string> Sets the database user, if necessary.
    user:
    # <string> Sets the database name, if necessary.
    database:
    # <bool> Enables basic authorization.
    basicAuth: true
    # <string> Sets the basic authorization username.
    basicAuthUser: guest
    # <bool> Enables credential headers.
    withCredentials: false
    # <bool> Toggles whether the data source is pre-selected
    # for new panels. You can set only one default
    # data source per organization.
    isDefault: true
    secureJsonData:
      # <string> Sets the basic authorization password.
      basicAuthPassword: guest
    version: 1
    # <bool> Allows users to edit data sources from the
    # Grafana UI.
    editable: true
@@ -0,0 +1,18 @@
# Configuration file version
apiVersion: 1

# List of data sources to delete from the database.
deleteDatasources:
  - name: json-api
    orgId: 1

# List of data sources to insert/update depending on what's
# available in the database.
datasources:
  # <string, required> Sets the name you use to refer to
  # the data source in panels and queries.
  - name: json-api
    # <string, required> Sets the data source type.
    type: marcusolsson-json-datasource
    url: https://changeme.example.com
    editable: true
@@ -1,8 +1,19 @@
#!/bin/bash
#
# All browsers do not exist in all architectures.
if [[ `which google-chrome` ]]; then
  google-chrome --version
elif [[ `which chromium-browser` ]]; then
  chromium-browser --version
fi

google-chrome --version
firefox --version
microsoft-edge --version
if [[ `which firefox` ]]; then
  firefox --version
fi

if [[ `which microsoft-edge` ]]; then
  microsoft-edge --version
fi

BROWSERTIME=/usr/src/app/bin/browsertimeWebPageReplay.js
SITESPEEDIO=/usr/src/app/bin/sitespeed.js
Binary file not shown.
Binary file not shown.
@@ -4,7 +4,7 @@ markdown: kramdown
compress_html:
  clippings: all
  endings: all
include: ["_headers"]
include: ["_headers", "_redirects"]
highlighter: none
kramdown:
  syntax_highlighter_opts:
@@ -22,4 +22,6 @@
/js/*
  Cache-Control: public, max-age=3600000
/css/*
  Cache-Control: public, max-age=3600000
  Cache-Control: public, max-age=3600000
/feed/*
  Access-Control-Allow-Origin: *
@@ -27,15 +27,15 @@
      <div class="col-1-5">
        <h3>Connect</h3>
        <ul>
          <li><a href="https://twitter.com/SiteSpeedio">Twitter</a></li>
          <li><a href="https://www.facebook.com/sitespeed.io">Facebook</a></li>
          <li><a rel="me" href="https://fosstodon.org/@sitespeedio">Mastodon</a></li>
          <li><a href="https://github.com/sitespeedio">GitHub</a></li>
          <li><a href="https://join.slack.com/t/sitespeedio/shared_invite/zt-296jzr7qs-d6DId2KpEnMPJSQ8_R~WFw">Slack</a></li>
      </div>
      <div class="col-1-5">
        <h3>sitespeed.io</h3>
        <ul>
          <li><a href="{{site.baseurl}}/aboutus/">About Us</a></li>
          <li><a href="{{site.baseurl}}/important/">Important - how we work</a></li>
          <li><a href="{{site.baseurl}}/important/">How we work</a></li>
          <li><a href="https://dashboard.sitespeed.io/">The dashboard</a></li>
          <li><a href="{{site.baseurl}}/logo/">Logos</a></li>
          <li><a href="{{site.baseurl}}/privacy-policy/">Privacy Policy</a></li>
@@ -46,7 +46,6 @@
        <h3><span class="red">♥</span> Open Source <span class="red">♥</span></h3>
        <ul>
          <li><a href="http://www.seleniumhq.org/">Selenium</a></li>
          <li><a href="https://github.com/WPO-Foundation/visualmetrics">Visual Metrics</a></li>
          <li><a href="https://github.com/micmro/PerfCascade">PerfCascade</a></li>
          <li><a href="http://getskeleton.com/">Skeleton</a></li>
          <li><a href="https://github.com/cgiffard/node-simplecrawler">Simplecrawler</a></li>
@@ -18,6 +18,7 @@
  <li><a href="{{site.baseurl}}/documentation/browsertime/">Browsertime</a></li>
  <li><a href="{{site.baseurl}}/documentation/pagexray/">PageXray</a></li>
  <li><a href="{{site.baseurl}}/documentation/throttle/">Throttle</a></li>
  <li><a href="{{site.baseurl}}/documentation/humble/">Humble</a></li>
  <li><a href="{{site.baseurl}}/documentation/compare/">Compare</a></li>
  <li><a href="{{site.baseurl}}/documentation/chrome-har/">Chrome HAR</a></li>
</ul>
@@ -6,5 +6,6 @@
* [Coach](/documentation/coach/) (core) {% include version/coach-core.txt %} [[Docker](https://hub.docker.com/r/sitespeedio/coach/)/[npm](https://www.npmjs.com/package/webcoach)/[changelog](https://github.com/sitespeedio/coach-core/blob/main/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/coach-core/releases.atom)]
* [PageXray](/documentation/pagexray/) {% include version/pagexray.txt %} [[npm](https://www.npmjs.com/package/pagexray)/[changelog](https://github.com/sitespeedio/pagexray/blob/main/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/pagexray/releases.atom)]
* [Compare](https://compare.sitespeed.io/) {% include version/compare.txt %} [[npm](https://www.npmjs.com/package/@sitespeed.io/compare)/[changelog](https://github.com/sitespeedio/compare/blob/main/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/compare/releases.atom)]
* [Humble](https://github.com/sitespeedio/humble) {% include version/compare.txt %} [[changelog](https://github.com/sitespeedio/humble/blob/main/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/humble/releases.atom)]
* [Throttle](/documentation/throttle/) {% include version/throttle.txt %} [[npm](https://www.npmjs.com/package/@sitespeed.io/throttle)/[changelog](https://github.com/sitespeedio/throttle/blob/main/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/throttle/releases.atom)]
* [Chrome-HAR](/documentation/chrome-har/) {% include version/chrome-har.txt %} [[npm](https://www.npmjs.com/package/chrome-har)/[changelog](https://github.com/sitespeedio/chrome-har/blob/main/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/chrome-har/releases.atom)]
@@ -3,6 +3,8 @@

[<img src="{{site.baseurl}}/img/pippi.png" class="pull-left img-big" alt="The power of sitespeed.io - Pippi Longstocking logo" width="180" height="151">](https://dashboard.sitespeed.io)

If you want to measure the performance and are only interested in timing metrics, you should focus on using [Browsertime]({{site.baseurl}}/documentation/browsertime/). If you want it all: use [sitespeed.io]({{site.baseurl}}/documentation/sitespeed.io/). It is the main tool that uses all sitespeed.io tools and add supports for testing multiple pages as well as adds the ability to report the metrics to a TSDB (Graphite and InfluxDB). Use it to monitor the performance of your web site.
Get a comprehensive performance measurement with [sitespeed.io]({{site.baseurl}}/documentation/sitespeed.io/) - the ultimate tool for monitoring and enhancing web performance. It's the main tool that uses all the other sitespeed.io tools, supports testing multiple pages and reporting metrics to time series databases (Graphite and InfluxDB) for monitoring your website.

If you are a performance tool maker you should look at [The coach]({{site.baseurl}}/documentation/coach/), [Browsertime]({{site.baseurl}}/documentation/browsertime/), [Chrome-HAR](https://github.com/sitespeedio/chrome-har), [PageXray]({{site.baseurl}}/documentation/pagexray/) and [Throttle]({{site.baseurl}}/documentation/throttle/). They can all help you depending on what you are building.
If you're looking to focus on timing metrics only, then [Browsertime]({{site.baseurl}}/documentation/browsertime/) is the perfect choice for you. But for a complete performance measurement, use [sitespeed.io]({{site.baseurl}}/documentation/sitespeed.io/).

If you're a developer of performance tools, then take advantage of the other tools that Sitespeed.io offers such as [The coach]({{site.baseurl}}/documentation/coach/), [Browsertime]({{site.baseurl}}/documentation/browsertime/), [Chrome-HAR](https://github.com/sitespeedio/chrome-har), [PageXray]({{site.baseurl}}/documentation/pagexray/) and [Throttle]({{site.baseurl}}/documentation/throttle/). Each tool can help you in different ways, depending on what you're building.
@@ -1,7 +1,3 @@
## Performance leaderboard
## Google Web Vitals
* * *
[<img src="{{site.baseurl}}/img/leaderboard.png" class="pull-left img-big" alt="Performance leaderboard" width="200" height="141">]({{site.baseurl}}/documentation/sitespeed.io/leaderboard/)

Do you want to compare your performance against other web sites? Use the performance leaderboard! You can check out our [example dashboard](https://dashboard.sitespeed.io/dashboard/db/leaderboard) or go directly to the [documentation]({{site.baseurl}}/documentation/sitespeed.io/leaderboard/).

You can compare performance timings, how the page is built, how much CPU the page is using and many many more things. And the leaderboard is also configurable through Grafana, so you can add the metrics that are important to you!
Experience unparalleled performance tracking and monitoring with our advanced Google Web Vitals monitoring tools. From First Contentful Paint to Largest Contentful Paint, Cumulative Layout Shift, and Total Blocking Time/First Input Delay, we've got you covered. Discover the most efficient way to track and monitor all your key performance metrics. Visit our [Google Web Vitals documentation](/documentation/sitespeed.io/google-web-vitals/) now to learn more.
@@ -1,7 +1,7 @@
## We believe in privacy
* * *
We take your privacy really serious: Our [documentation site](https://www.sitespeed.io/), our [dashboard](https://dashboard.sitespeed.io) and our [compare tool](https://dashboard.sitespeed.io) do not use any tracking software at all (no Google Analytics or any other tracking software). None of the sitespeed.io tools call home.
At sitespeed.io, we understand the importance of privacy and take it extremely seriously. That's why our [documentation site](https://www.sitespeed.io/), [dashboard](https://dashboard.sitespeed.io), and [compare tool](https://dashboard.sitespeed.io) are completely free of any tracking software, including Google Analytics or any other similar programs. None of our tools will send any data back to us, giving you complete control over your information.

But beware: Chrome and Firefox can call home (we know for fact that they both do). We would love PRs and tips how to make sure browsers don't call home when you run your tests.
But we also know that your browser can be a weak point. Chrome and Firefox have been known to send data back to their servers. We're always working to find ways to prevent this, and we welcome any contributions or suggestions from our users to improve your privacy when using our tools.

[Read more]({{site.baseurl}}/important/) about how we do things.
We take pride in our commitment to protecting your privacy and ensuring that you have complete control over your data. To learn more about our privacy practices, please [read more]({{site.baseurl}}/important/).
@@ -3,8 +3,8 @@

[<img src="{{site.baseurl}}/img/public.png" class="pull-left img-big" alt="The power of sitespeed.io" width="150" height="150" alt="sitespeed.io Public Enemy logo">]({{site.baseurl}}/documentation/sitespeed.io/performance-dashboard/#cost)

Sitespeed.io is Open Source and totally free. But what does it cost to have an instance of sitespeed.io up and running?
Sitespeed.io is freely available as Open Source software, with no hidden costs. However, running an instance of the tool does come with some expenses.

If you don't run on your own servers, we recommend running on [Digital Ocean](https://www.digitalocean.com/) optimized droplets 2 vCPUs or on [AWS](https://aws.amazon.com/) c5.large, storing the data at S3. On one instance you can run something like 80000+ runs per month for a total cost of $695 per year.
To keep costs low, we recommend using cloud/server hosting services such as [Hetzner](https://www.hetzner.com), and storing data on S3. With one instance, you can run a large number of tests per month, with an estimated cost of around $500 per year.

[Look more into the cost details]({{site.baseurl}}/documentation/sitespeed.io/performance-dashboard/#cost).
To get a more detailed understanding of the costs involved, please [refer to our cost breakdown information]({{site.baseurl}}/documentation/sitespeed.io/performance-dashboard/#cost).
@@ -1,6 +1,7 @@
## Thank you!
* * *
Sitespeed.io is built upon Open Source tools, we have a special place in our hearts for those projects ([see the full list]({{site.baseurl}}/documentation/sitespeed.io/developers/#built-upon-open-source)):

We are incredibly grateful for the Open Source tools that form the foundation of Sitespeed.io. Our hearts are filled with appreciation for these amazing projects and the communities that support them. We would like to extend a special thank you to ([see the full list]({{site.baseurl}}/documentation/sitespeed.io/developers/#built-upon-open-source)):

* [Selenium](http://www.seleniumhq.org/)
* [VisualMetrics](https://github.com/WPO-Foundation/visualmetrics)
@@ -1,8 +1,5 @@
## Contribute
* * *

There's a lot of things you can do to help us make sitespeed.io even better than it is today.

If you code, write documentation or do UX you can check the [help section](https://github.com/sitespeedio/sitespeed.io/blob/main/HELP.md) and the [full issue list](https://github.com/sitespeedio/sitespeed.io/issues).

[These people](https://github.com/sitespeedio/sitespeed.io/blob/main/CONTRIBUTORS.md) has already improved sitespeed.io with pull requests or ideas (massive love!).
Join the effort to make Sitespeed.io even better! Whether you're a developer, writer, or UX expert, there are many ways you can contribute to the improvement of our tool. Check out our [help section](https://github.com/sitespeedio/sitespeed.io/blob/main/HELP.md) and [full issue list](https://github.com/sitespeedio/sitespeed.io/issues) for opportunities to get involved. And a big thank you to [all those](https://github.com/sitespeedio/sitespeed.io/blob/main/CONTRIBUTORS.md) who have already made contributions through pull requests or ideas. Your support is greatly appreciated.
@@ -2,7 +2,7 @@
* * *
[<img src="{{site.baseurl}}/img/dashboard-front.png" class="pull-left img-big" alt="Performance dashboard" width="500" height="227">]({{site.baseurl}}/documentation/sitespeed.io/performance-dashboard/)

Using sitespeed.io together with Grafana and Graphite enables you to monitor the performance of your web site. We have a prepared [docker-compose file](https://github.com/sitespeedio/sitespeed.io/blob/main/docker/docker-compose.yml) for your setup and some ready made [generic Grafana dashboards](https://github.com/sitespeedio/grafana-bootstrap-docker/tree/main/dashboards/graphite) that will make it easy for you to get it up and running. You can get it up and running in almost 5 minutes!
Using sitespeed.io together with Grafana and Graphite enables you to monitor the performance of your web site. We have a prepared [docker-compose file](https://github.com/sitespeedio/sitespeed.io/blob/main/docker/docker-compose.yml) for your setup and some ready made [generic Grafana dashboards](https://github.com/sitespeedio/sitespeed.io/tree/main/docker/grafana/provisioning/dashboards) that will make it easy for you to get it up and running. You can get it up and running in almost 5 minutes!

We have a version of the dashboard at [dashboard.sitespeed.io](https://dashboard.sitespeed.io/) where you can have look and try it out.

@@ -9,6 +9,8 @@ docker run --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io:{% include ve

If you want to test a user scenario/journey read [how to run test scripts](/documentation/sitespeed.io/scripting/).

If you are new to the project you should watch the tutorial ["Getting started with Sitespeed.io using Docker"](https://www.youtube.com/watch?v=0xAdxCUX2Po).

## npm

Install sitespeed.io globally:
@@ -1 +1 @@
12.4.0
20.0.0

@@ -1 +1 @@
0.11.12
0.13.2

@@ -1 +1 @@
6.3.3
8.0.2

@@ -1 +1 @@
6.0.0
7.0.0

@@ -0,0 +1 @@
0.1.2

@@ -1 +1 @@
4.1.0
4.4.4

@@ -1 +1 @@
17.2.0
31.0.1

@@ -1 +1 @@
2.1.1
5.0.0
@@ -10,6 +10,7 @@ layout: compress
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="{{ page.description }}">
<meta name="keywords" content="{{ page.keywords }}">
<meta name="theme-color" content="#0095d2">

<link rel="canonical" href="https://www.sitespeed.io{{ page.url | replace:'index.html','' }}" />
@@ -5,6 +5,7 @@ layout: 404
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="theme-color" content="#0095d2">
<title>{{ page.title }}</title>
<style>{% include css/404.css %}</style>
</head>
@@ -11,6 +11,7 @@ layout: compress
<meta name="description" content="{{ page.description }}">
<meta name="keywords" content="{{ page.keywords }}">
<meta name="author" content="{{ page.author }}">
<meta name="theme-color" content="#0095d2">

<link rel="canonical" href="https://www.sitespeed.io" />
@@ -41,7 +42,18 @@ layout: compress
<link rel="apple-touch-icon-precomposed" sizes="72x72" href="{{site.baseurl}}/img/ico/sitespeed.io-72.png">
<link rel="apple-touch-icon-precomposed" href="{{site.baseurl}}/img/ico/sitespeed.io-57.png">
<link rel="shortcut icon" href="{{site.baseurl}}/img/ico/sitespeed.io.ico">
<link type="application/atom+xml" href="https://www.sitespeed.io/feed/index.xml" rel="alternate" />
<link type="application/atom+xml" title="RSS Changelog for all sitespeed.io tools" href="https://www.sitespeed.io/feed/atom.xml" rel="alternate" />
<link type="application/rss+xml" title="RSS Changelog for all sitespeed.io tools" href="https://www.sitespeed.io/feed/rss.xml" rel="alternate" />
<link type="application/rss+xml" title="RSS Changelog for Browsertime" href="https://www.sitespeed.io/feed/browsertime.rss" rel="alternate" />
<link type="application/rss+xml" title="RSS Changelog for PageXray" href="https://www.sitespeed.io/feed/pagexray.rss" rel="alternate" />
<link type="application/rss+xml" title="RSS Changelog for Compare" href="https://www.sitespeed.io/feed/compare.rss" rel="alternate" />
<link type="application/rss+xml" title="RSS Changelog for sitespeed.io" href="https://www.sitespeed.io/feed/sitespeed.io.rss" rel="alternate" />
<link type="application/rss+xml" title="RSS Changelog for Throttle" href="https://www.sitespeed.io/feed/throttle.rss" rel="alternate" />
<link type="application/rss+xml" title="RSS Changelog for Humble" href="https://www.sitespeed.io/feed/humble.rss" rel="alternate" />
<link type="application/rss+xml" title="RSS Changelog for Coach Core" href="https://www.sitespeed.io/feed/coach-core.rss" rel="alternate" />
<link type="application/rss+xml" title="RSS Changelog for Chrome HAR" href="https://www.sitespeed.io/feed/chrome-har.rss" rel="alternate" />
<link type="application/rss+xml" title="RSS Changelog for Chrome trace" href="https://www.sitespeed.io/feed/chrome-trace.rss" rel="alternate" />
<style>
{% include css/default.css %}
</style>
@@ -0,0 +1,28 @@
---
layout: default
title: sitespeed.io 20.0
description: New updates to sitespeed.io to make it easier to use.
authorimage: /img/aboutus/peter.jpg
intro: Make sure to upgrade your Graphite metrics (if you didn't do that already in April) before you upgrade to 20.0.0.
keywords: sitespeed.io, webperf
image: https://www.sitespeed.io/img/8bit.png
nav: blog
---

# sitespeed.io 20.0

Do you remember that we asked you to [upgrade your Graphite metrics](https://www.sitespeed.io/sitespeed.io-17.0-browsertime-12.0/#new-best-practices) in April earlier this year? If you didn't do it then, you really should do it before you upgrade to sitespeed.io 20.0.0. Follow the [guide](https://www.sitespeed.io/documentation/sitespeed.io/graphite/#upgrade-to-use-the-test-slug-in-the-namespace) and after that upgrade to 20.0.0.

If you feel that you don't have time today, you can suppress the change by adding `--graphite.addSlugToKey false` to your test. Please do that; otherwise your metrics will be reported under a new key structure when you upgrade to 20.0.

We also took the chance in 20.0 to make a couple of other breaking changes that make it easier for you to run your tests:

* [Throttle](https://github.com/sitespeedio/throttle) is the default connectivity engine if you use Mac or Linux [#3433](https://github.com/sitespeedio/sitespeed.io/pull/3433). This makes it much easier to enable throttling. Our Docker container is not affected by this change.
* There's a new default device for `--mobile` emulation in Chrome. The new default is the Moto G4 (instead of the iPhone 6) [#3467](https://github.com/sitespeedio/sitespeed.io/pull/3467).
* When you run your tests on Safari on iOS, the Coach is disabled by default [#3468](https://github.com/sitespeedio/sitespeed.io/pull/3468).

Happy performance testing!

/Peter
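Keeping the old Graphite key structure with the `--graphite.addSlugToKey false` flag could look like this (a sketch; the Graphite hostname and test URL are placeholders):

```shell
# Run a test against Graphite while suppressing the new slug-based key structure
docker run --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io:20.0.0 \
  https://www.example.com/ \
  --graphite.host my-graphite.example.com \
  --graphite.addSlugToKey false
```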
@@ -0,0 +1,61 @@
---
layout: default
title: All I want for Christmas is ...
description: sitespeed.io wishlist for Christmas.
authorimage: /img/aboutus/peter.jpg
intro: Here's my wish list on how we all can make sitespeed.io better.
keywords: sitespeed.io, webperf
image: https://www.sitespeed.io/img/santa.png
nav: blog
---

# All I want for Christmas is ...

<img src="{{site.baseurl}}/img/santa.png" class="pull-right img-big" alt="sitespeed.io wish you a Merry Christmas!" width="200" height="236">

Here's my Christmas wish list for the sitespeed.io project. There are a couple of things that you or your company can do to make sitespeed.io better, and that's what I want for Christmas :)

## ... help from you!
What really helps is if, when you find a bug or a potential bug, you follow the instructions [on how to create a reproducible issue](https://www.sitespeed.io/documentation/sitespeed.io/bug-report/#explain-how-to-reproduce-your-issue). That helps so much! If I can easily reproduce a bug, I can spend time on fixing it instead of spending hours trying to reproduce it.

## ... help from your company!
If you work for a company that uses sitespeed.io, please make sure the company supports sitespeed.io financially using [https://opencollective.com/sitespeedio/contribute](https://opencollective.com/sitespeedio/contribute)!

Let me explain why: keeping sitespeed.io running costs money. To be able to catch bugs and regressions before we do a new release, we have two servers running to get the metrics for [dashboard.sitespeed.io](https://dashboard.sitespeed.io/d/9NDMzFfMk/page-metrics-desktop). At the moment the cost for that is $1700 per year.

Today we only have two monthly donors to the project: one secret contributor and [Jesse Heady](https://twitter.com/jheady). Many, many thanks to both of them. But we need more.

In the long run we want to have more servers for testing and also have automatic tests for our mobile phone setup. A couple of years ago I asked one of the companies that host mobile phones what we would need to pay for five hosted phones. The price at that time was $22500 per year. That is a lot of money for an open source project :(

## ... help from the Chrome team!

There are a couple of open issues with Chrome that would make our work on sitespeed.io easier.

* Make a stand-alone library for consuming the Chrome trace log, so that tools other than Lighthouse can use it. There was some promising work a couple of years ago in [Tracium](https://github.com/aslushnikov/tracium), but then the code [was moved back to Lighthouse](https://github.com/aslushnikov/tracium/issues/2). We actually use a modified version of Tracium, but it would be great if we could use a version blessed by the Chrome team. I think no one is better placed to build that than the Chrome team; they have the best knowledge.

* Make it easier to automate getting a HAR file from Chrome. Today we use [https://github.com/sitespeedio/chrome-har](https://github.com/sitespeedio/chrome-har) that my friend Tobias built a couple of years ago, but I strongly believe that is something the Chrome team should and could provide. There's [an open issue #1276896](https://bugs.chromium.org/p/chromium/issues/detail?id=1276896) for fixing that.

## ... help from the Firefox team!

It's been super helpful since Mozilla started to use Browsertime for internal testing, and the Mozilla performance team has contributed so much new functionality to Browsertime. However, I still have one thing on my wish list:

* Make it easier to get a HAR file from Firefox! Today you need to have an extension installed in Firefox, and that does not work on mobile (since the extension needs to have devtools open). That means we can only get a HAR file on desktop. The current solution also adds some extra performance overhead. The tracking bug for fixing that is [#1744483](https://bugzilla.mozilla.org/show_bug.cgi?id=1744483).

## ... some love from the Safari team!

There are a couple of things I wish for Safari:

* Make it easy to automate getting a HAR file from Safari. I have an open Feedback Assistant issue with id FB8981653 for that.

* Make it easy to record a video of the iOS screen from your Mac so we can automate recording a video. Today we use (or try to use) the [QuickTime video hack](https://github.com/danielpaulus/quicktime_video_hack), and it would be great if iOS/macOS natively supported it. It would make it so much easier to get visual metrics from iOS Safari.

* It would be great if the Apple/Safari team could do a blog post about how they do performance testing. I'm thinking maybe you have some tools that aren't known to the public? Or how do you currently run performance tests?

## ... and more love from GitLab
I love that GitLab has used sitespeed.io since 2017 as [their premium browser performance testing tool](https://docs.gitlab.com/ee/user/project/merge_requests/browser_performance_testing.html). I think though that the project lacks some documentation, since GitLab users sometimes end up on the sitespeed.io Slack channel asking GitLab-specific questions. As a user, feel free to ask and I will answer as well as I can, but I think it would be beneficial for everyone with more documentation on the GitLab side.

Also, since GitLab is a company valued at over $11 billion, I would really like to see you being one of the leaders in [contributing to sitespeed.io](https://opencollective.com/sitespeedio/contribute) :D Or maybe you are that secret contributor giving $20 each month? ;)

## ... that you all stay safe and have happy holidays!

/Peter
@ -0,0 +1,25 @@
|
|||
---
|
||||
layout: default
|
||||
title: ChatGPT as Steve Jobs about sitespeed.io
|
||||
description: We don't use ChatGPT to code sitespeed.io but we prompt it to write a blog post about sitespeed.io as it was Steve Jobs writing it and it turned out quite good.
|
||||
authorimage: /img/aboutus/peter.jpg
|
||||
intro: We don't use ChatGPT to code sitespeed.io but we prompt it to write a blog post about sitespeed.io as it was Steve Jobs writing it and it turned out quite good.
|
||||
keywords: sitespeed.io, webperf
|
||||
image: https://www.sitespeed.io/img/8bit.png
|
||||
nav: blog
|
||||
---
|
||||
|
||||
# ChatGPT as Steve Jobs about sitespeed.io
|
||||
|
||||
We don't use ChatGPT to code sitespeed.io but we prompt it to write a blog post about sitespeed.io as it was Steve Jobs writing it and it turned out quite good.
|
||||
|
||||
|
||||
> Dear Friends,
|
||||
> <br>Today, I want to talk to you about web performance, and more specifically, why you should use sitespeed.io instead of Lighthouse. I know that some of you may have reservations about using Google's tools for web performance analysis, and I'm here to tell you that your concerns are well-founded.
|
||||
> <br>Let me be clear: Google is evil. They may have started out as a scrappy little search engine with a mission to organize the world's information, but they've since grown into a monstrous, monolithic corporation with a mission to dominate every aspect of our lives. They've become a de facto gatekeeper for the internet, with their search engine, their browser, their operating system, their app store, and their cloud services. They're like a giant octopus with tentacles that reach into every corner of our digital world.
|
||||
> <br>And make no mistake, they use their power to their advantage. They track our every move online, they collect our data, they monetize our attention, and they manipulate our behavior. They have a vested interest in keeping us online as long as possible, and they've designed their tools to do just that. Lighthouse may seem like useful tools on the surface, but they're part of a larger ecosystem that's designed to keep us tethered to Google's platforms.
|
||||
> <br>That's where sitespeed.io comes in. It's a powerful open-source tool that's designed to give you an accurate picture of your website's performance, without any hidden agendas. It's not tied to any specific platform or vendor, and it's not designed to keep you online for longer than necessary. Instead, it gives you a clear, objective view of your website's performance, so you can optimize it for your users, not for Google's algorithms.
|
||||
> <br>In conclusion, if you want to truly understand your website's performance, and if you want to do it without supporting an evil corporation, then sitespeed.io is the way to go. It's a tool that's designed to empower you, not to control you, and that's something that we should all strive for in our digital lives.
|
||||
> <br> Thank you for listening.
|
||||
> <br>Sincerely,
|
||||
> <br>ChatGPT as Steve Jobs
|
||||
|
|
@@ -0,0 +1,5 @@
# Redirects from what the browser requests to what we serve
/feed/index.xml /feed/blog.xml
/feeds /feed/rss.xml
/feed /feed/rss.xml 302
/rss /feed/rss.xml
@@ -9,22 +9,28 @@ nav: aboutus

# About Us

At the moment we are one core member that works on sitespeed.io in our free time. New contributors and team members are very much welcome!

## Peter Hedenskog
<a href="https://twitter.com/soulislove"><img src="{{site.baseurl}}/img/aboutus/peter.jpg" class="photo pull-left" width="200" height="200"></a> I created sitespeed.io in late 2012. It's been a lot of work and incredibly [fun](http://www.peterhedenskog.com/blog/2015/02/building-a-new-sitespeed.io/)! I'm a web performance geek, love the web and think Open Source is the way forward. I work in the Quality and test team at [Wikimedia](https://www.wikimedia.org/).

In early 2015 I was awarded for building sitespeed.io by [The Swedish Internet Infrastructure Foundation](https://www.iis.se/english/about-iis/), making it possible for me to work full time on the project for three months.

I'm one of the organizers of the [Stockholm Web Performance meetup group](http://www.meetup.com/Stockholm-Web-Performance-Group/). We are 700+ members and are always looking for new speakers. If you are in Stockholm and have something to share, ping me on <a href="https://twitter.com/soulislove">Twitter</a> and see if we can make it happen.

You should also look at [the other performance tools](https://github.com/sitespeedio) we created through the years.

Through the use of [Browsertime](https://github.com/sitespeedio/browsertime), Mozilla has become a code contributor to the tool, making it easier to stay bleeding edge :)

## Retired
We also have people that contributed a lot through the years.

### Tobias Lidskog
<a href="https://twitter.com/tobiaslidskog"><img src="{{site.baseurl}}/img/aboutus/tobias.jpg" class="photo pull-left" width="200" height="200"></a> Having been a supporter of sitespeed.io from the sidelines for some time, I joined Peter as we started working on version 3.0. I've been working professionally with the web for about 15 years, and open source tools have been an indispensable help all along. Now it's nice to be able to give something back.

In my work at [iZettle](https://www.izettle.com/) I spend most of my time enabling the dev teams to shine. Working on sitespeed.io is a great complement, letting me get my hands dirty with a range of tools and techniques; from [controlling browsers with WebDriver](http://www.browsertime.net) to [learning how to use Docker](https://github.com/sitespeedio/sitespeed.io-docker).

### Jonathan Lee
<a href="https://twitter.com/beenanner"><img src="{{site.baseurl}}/img/aboutus/jonathan.jpg" class="photo pull-left" width="200" height="200"></a> I discovered sitespeed.io version 3 in 2015 while exploring the latest trending tools in web performance. I was intrigued by this tool and decided to learn more. Wanting to contribute back to the open source community that has given me so much over the last decade, I reached out to Peter and Tobias to assist with the development of version 4.0.

As a performance engineer at [CBSi](http://www.cbsinteractive.com/) I am able to offer real-world feedback to the team to make improvements that will benefit others. I love talking about web performance, so feel free to connect or reach out to me on [LinkedIn](https://www.linkedin.com/in/jonathanlee20) or [Twitter](https://twitter.com/beenanner).
@ -0,0 +1,25 @@
|
|||
---
|
||||
layout: notfound
|
||||
title: CPU Benchmark - sitespeed.io
|
||||
permalink: /cpu.html
|
||||
---
|
||||
<div class="data"> <div id="cpu"></div>
|
||||
<a href="https://www.sitespeed.io/"><img src="{{site.baseurl}}/img/powerpuffsitespeed.io.png" class="cent"></a></div>
|
||||
|
||||
|
||||
<script>
|
||||
function runCPUBenchmark() {
|
||||
const amount = 100000000;
|
||||
const startTime = performance.now();
|
||||
for ( let i = amount; i > 0; i-- ) {
|
||||
// empty
|
||||
}
|
||||
const time = Math.round( performance.now() - startTime );
|
||||
const cpuDiv = document.getElementById('cpu');
|
||||
cpuDiv.innerHTML = '<h1> CPU Benchmark: ' + time + '</h1>';
|
||||
}
|
||||
|
||||
document.addEventListener('DOMContentLoaded', function() {
|
||||
runCPUBenchmark();
|
||||
}, false);
|
||||
</script>
|
||||
|
|
@@ -1,44 +1,63 @@
browsertime.js [options] <url>/<scriptFile>

timeouts
--timeouts.browserStart Timeout when waiting for browser to start, in milliseconds [number] [default: 60000]
--timeouts.pageLoad Timeout when waiting for url to load, in milliseconds [number] [default: 300000]
--timeouts.script Timeout when running browser scripts, in milliseconds [number] [default: 120000]
--timeouts.pageCompleteCheck, --maxLoadTime Timeout when waiting for page to complete loading, in milliseconds [number] [default: 120000]
--timeouts.networkIdle Timeout when running pageCompleteCheckNetworkIdle, in milliseconds [number] [default: 5000]

chrome
--chrome.args Extra command line arguments to pass to the Chrome process (e.g. --no-sandbox). To add multiple arguments to Chrome, repeat --chrome.args once per argument.
--chrome.binaryPath Path to custom Chrome binary (e.g. Chrome Canary). On OS X, the path should be to the binary inside the app bundle, e.g. "/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary"
--chrome.chromedriverPath Path to custom ChromeDriver binary. Make sure to use a ChromeDriver version that's compatible with the version of Chrome you're using
--chrome.chromedriverPort Specify "--port" args for the chromedriver process [number]
--chrome.mobileEmulation.deviceName Name of device to emulate. Works only standalone (see list in Chrome DevTools, but add phone like 'iPhone 6'). This will override your userAgent string.
--chrome.mobileEmulation.width Width in pixels of emulated mobile screen (e.g. 360) [number]
--chrome.mobileEmulation.height Height in pixels of emulated mobile screen (e.g. 640) [number]
--chrome.mobileEmulation.pixelRatio Pixel ratio of emulated mobile screen (e.g. 2.0)
--chrome.android.package Run Chrome on your Android device. Set to com.android.chrome for default Chrome version. You need to have adb installed to make this work.
--chrome.android.activity Name of the Activity hosting the WebView.
--chrome.android.process Process name of the Activity hosting the WebView. If not given, the process name is assumed to be the same as chrome.android.package.
--chrome.android.deviceSerial Choose which device to use. If you do not set it, the first device will be used.
--chrome.traceCategories A comma separated list of Tracing event categories to include in the Trace log. By default no trace categories are collected. [string]
--chrome.traceCategory Add a trace category to the default ones. Use --chrome.traceCategory multiple times if you want to add multiple categories. Example: --chrome.traceCategory disabled-by-default-v8.cpu_profiler [string]
--chrome.enableTraceScreenshots, --enableTraceScreenshots Include screenshots in the trace log (enabling the trace category disabled-by-default-devtools.screenshot). [boolean]
--chrome.enableChromeDriverLog Log Chromedriver communication to a log file. [boolean]
--chrome.enableVerboseChromeDriverLog Log verbose Chromedriver communication to a log file. [boolean]
--chrome.visualMetricsUsingTrace Collect Visual Metrics using the Chrome trace log. You need to enable trace screenshots --chrome.enableTraceScreenshots and --cpu metrics for this to work. [boolean] [default: false]
--chrome.timeline, --chrome.trace Collect the timeline data. Drag and drop the JSON in your Chrome devtools timeline panel or check out the CPU metrics in the Browsertime.json [boolean]
--chrome.timelineRecordingType, --chrome.traceRecordingType Expose the start/stop commands for the chrome trace [string] [choices: "pageload", "custom"] [default: "pageload"]
--chrome.collectPerfLog Collect performance log from Chrome with Page and Network events and save to disk. [boolean]
--chrome.collectNetLog Collect network log from Chrome and save to disk. [boolean]
--chrome.netLogCaptureMode Choose capture mode for Chrome's netlog. [choices: "Default", "IncludeSensitive", "Everything"] [default: "IncludeSensitive"]
--chrome.collectConsoleLog Collect Chrome's console log and save to disk. [boolean]
--chrome.appendToUserAgent Append to the user agent. [string]
--chrome.noDefaultOptions Prevent Browsertime from setting its default options for Chrome [boolean]
--chrome.cleanUserDataDir If you use --user-data-dir as an argument to Chrome and want to clean that directory between each iteration you should use --chrome.cleanUserDataDir true. [boolean]
--chrome.CPUThrottlingRate Enables CPU throttling to emulate slow CPUs. Throttling rate as a slowdown factor (1 is no throttle, 2 is 2x slowdown, etc) [number]
--chrome.includeResponseBodies Include response bodies in the HAR file. [choices: "none", "all", "html"] [default: "none"]
--chrome.cdp.performance Collect Chrome performance metrics from Chrome DevTools Protocol [boolean] [default: true]
--chrome.blockDomainsExcept, --blockDomainsExcept Block all domains except this domain. Use it multiple times to keep multiple domains. You can also wildcard domains like *.sitespeed.io. Use this when you want to block out all third parties.
--chrome.ignoreCertificateErrors Make Chrome ignore certificate errors. Defaults to true. [boolean] [default: true]

android
--android.powerTesting, --androidPower Enables android power testing - charging must be disabled for this. (You have to disable charging yourself - it depends on the phone model.) [boolean]
--android.ignoreShutdownFailures, --ignoreShutdownFailures If set, shutdown failures will be ignored on Android. [boolean] [default: false]
--android.rooted, --androidRooted If your phone is rooted you can use this to set it up following Mozilla's best practice for stable metrics. [boolean] [default: false]
--android.pinCPUSpeed, --androidPinCPUSpeed Using a Samsung A51 or Moto G5 you can choose how to pin the CPU to better align the speed with your users. This only works on rooted phones and together with --android.rooted [choices: "min", "middle", "max"] [default: "min"]
--android.batteryTemperatureLimit, --androidBatteryTemperatureLimit Does the battery temperature need to be below a specific limit before we start the test?
--android.batteryTemperatureWaitTimeInSeconds, --androidBatteryTemperatureWaitTimeInSeconds How long to wait (in seconds) if the battery temperature limit is not met before the next try [default: 120]
--android.batteryTemperatureReboot, --androidBatteryTemperatureReboot If your phone does not reach the minimum temperature after the wait time, reboot the phone. [boolean] [default: false]
--android.pretestPowerPress, --androidPretestPowerPress Press the power button on the phone before a test starts. [boolean] [default: false]
--android.pretestPressHomeButton, --androidPretestPressHomeButton Press the home button on the phone before a test starts. [boolean] [default: false]
--android.verifyNetwork, --androidVerifyNetwork Before a test starts, verify that the device has an Internet connection by pinging 8.8.8.8 (or a configurable domain with --androidPingAddress) [boolean] [default: false]
--android.gnirehtet, --gnirehtet Start gnirehtet and reverse tether the traffic from your Android phone. [boolean] [default: false]

firefox
--firefox.binaryPath Path to custom Firefox binary (e.g. Firefox Nightly). On OS X, the path should be to the binary inside the app bundle, e.g. /Applications/Firefox.app/Contents/MacOS/firefox-bin
--firefox.geckodriverPath Path to custom geckodriver binary. Make sure to use a geckodriver version that's compatible with the version of Firefox (Gecko) you're using
--firefox.geckodriverArgs Flags passed in to Geckodriver, see https://firefox-source-docs.mozilla.org/testing/geckodriver/Flags.html. Use it like --firefox.geckodriverArgs="--marionette-port" --firefox.geckodriverArgs=1027 [string]
--firefox.appendToUserAgent Append to the user agent. [string]
--firefox.nightly Use Firefox Nightly. Works on OS X. For Linux you need to set the binary path. [boolean]
--firefox.beta Use Firefox Beta. Works on OS X. For Linux you need to set the binary path. [boolean]
--firefox.developer Use Firefox Developer. Works on OS X. For Linux you need to set the binary path. [boolean]
@ -47,8 +66,12 @@ firefox
|
|||
--firefox.includeResponseBodies Include response bodies in HAR [choices: "none", "all", "html"] [default: "none"]
|
||||
--firefox.appconstants Include Firefox AppConstants information in the results [boolean] [default: false]
|
||||
--firefox.acceptInsecureCerts Accept insecure certs [boolean]
|
||||
--firefox.bidihar Use the new bidi HAR generator [boolean] [default: false]
|
||||
--firefox.windowRecorder Use the internal compositor-based Firefox window recorder to emit PNG files for each frame that is a meaningful change. The PNG output will further be merged into a variable frame rate video for analysis. Use this instead of ffmpeg to record a video (you still need the --video flag). [boolean] [default: false]
|
||||
--firefox.memoryReport Measure firefox resident memory after each iteration. [boolean] [default: false]
|
||||
--firefox.memoryReportParams.minizeFirst Force a collection before dumping and measuring the memory report. [boolean] [default: false]
|
||||
--firefox.geckoProfiler Collect a profile using the internal gecko profiler [boolean] [default: false]
|
||||
--firefox.geckoProfilerRecordingType Expose the start/stop commands for the gecko profiler [string] [choices: "pageload", "custom"] [default: "pageload"]
|
||||
--firefox.geckoProfilerParams.features Enabled features during gecko profiling [string] [default: "js,stackwalk,leaf"]
|
||||
--firefox.geckoProfilerParams.threads Threads to profile. [string] [default: "GeckoMain,Compositor,Renderer"]
|
||||
--firefox.geckoProfilerParams.interval Sampling interval in ms. Defaults to 1 on desktop, and 4 on android. [number]
|
||||
|
|
@ -58,6 +81,7 @@ firefox
|
|||
--firefox.collectMozLog Collect the MOZ HTTP log (by default). See --firefox.setMozLog if you need to specify the logs you wish to gather. [boolean]
|
||||
--firefox.setMozLog Use in conjunction with firefox.collectMozLog to set MOZ_LOG to something specific. Without this, the HTTP logs will be collected by default [default: "timestamp,nsHttp:5,cache2:5,nsSocketTransport:5,nsHostResolver:5"]
|
||||
--firefox.disableBrowsertimeExtension Disable installing the browsertime extension. [boolean]
|
||||
--firefox.noDefaultPrefs Prevents browsertime from setting its default preferences. [boolean] [default: false]
|
||||
--firefox.disableSafeBrowsing Disable safebrowsing. [boolean] [default: true]
|
||||
--firefox.disableTrackingProtection Disable Tracking Protection. [boolean] [default: true]
|
||||
--firefox.android.package Run Firefox or a GeckoView-consuming App on your Android device. Set to org.mozilla.geckoview_example for default Firefox version. You need to have adb installed to make this work.
|
||||
|
|
@ -75,10 +99,12 @@ video
|
|||
--videoParams.addTimer Add timer and metrics to the video. [boolean] [default: true]
|
||||
--videoParams.debug Turn on debug to record a video with all pre/post and scripts/URLS you test in one iteration. Visual Metrics will then automatically be disabled. [boolean] [default: false]
|
||||
--videoParams.keepOriginalVideo Keep the original video. Use it when you have a Visual Metrics bug and want to create an issue at GitHub [boolean] [default: false]
|
||||
--videoParams.thumbsize The maximum size of the thumbnail in the filmstrip. Default is 400 pixels in either direction. If videoParams.filmstripFullSize is used that setting overrides this. [default: 400]
|
||||
--videoParams.filmstripFullSize Keep original sized screenshots. Will make the run take longer time [boolean] [default: false]
|
||||
--videoParams.filmstripQuality The quality of the filmstrip screenshots. 0-100. [default: 75]
|
||||
--videoParams.createFilmstrip Create filmstrip screenshots. [boolean] [default: true]
|
||||
--videoParams.nice Use nice when running FFMPEG during the run. A value from -20 to 19 https://linux.die.net/man/1/nice [default: 0]
|
||||
--videoParams.taskset Start FFMPEG with taskset -c <CPUS> to pin FFMPEG to specific CPU(s). Specify a numerical list of processors. The list may contain multiple items, separated by comma, and ranges. For example, "0,5,7,9-11".
|
||||
--videoParams.convert Convert the original video to a viewable format (for most video players). Turn that off to make a faster run. [boolean] [default: true]
|
||||
--videoParams.threads Number of threads to use for video recording. Default is determined by ffmpeg. [default: 0]
|
||||
|
||||
|
|
@ -104,6 +130,21 @@ Screenshot
|
|||
--screenshotParams.jpg.quality Quality of the JPEG screenshot. 1-100 [default: 80]
|
||||
--screenshotParams.maxSize The max size of the screenshot (width and height). [default: 2000]
|
||||
|
||||
PageLoad
|
||||
--pageCompleteCheck Supply JavaScript (inline or a JavaScript file) that decides when the browser has finished loading the page and metrics collection can start. The JavaScript snippet is queried repeatedly to see if the page has completed loading (indicated by the script returning true). Use it to fetch timings that happen after loadEventEnd. By default the test ends 2 seconds after loadEventEnd. Also check out --pageCompleteCheckInactivity and --pageCompleteCheckPollTimeout
|
||||
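For example, a minimal inline --pageCompleteCheck snippet could wait until 5 seconds after loadEventEnd. The sketch below also shows how the browser-side evaluation works; the `evaluate` helper and mock window are illustrative stand-ins, not part of Browsertime:

```javascript
// The page is considered "done" when more than 5 seconds have passed since
// loadEventEnd. Browsertime evaluates this snippet repeatedly in the browser
// and ends the test when it returns true.
const pageCompleteCheck = `return (function() {
  const t = window.performance.timing;
  return t.loadEventEnd > 0 && Date.now() - t.loadEventEnd > 5000;
})();`;

// Illustrative stand-in for the browser evaluating the snippet.
function evaluate(snippet, win) {
  return new Function('window', 'Date', snippet)(win, win.Date);
}

const mockWindow = {
  performance: { timing: { loadEventEnd: 1000 } },
  Date: { now: () => 7000 } // 6 seconds after loadEventEnd
};
console.log(evaluate(pageCompleteCheck, mockWindow)); // true
```

On the command line the snippet would be passed as a quoted string, e.g. `--pageCompleteCheck 'return (function() { ... })();'`.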
--pageCompleteWaitTime How long you want to wait for your pageCompleteCheck to finish, after it has been signaled to close. Extra parameter passed on to your pageCompleteCheck. [default: 8000]
|
||||
--pageCompleteCheckInactivity Alternative way to choose when to end your test. This will wait for 2 seconds of inactivity that happens after loadEventEnd. [boolean] [default: false]
|
||||
--pageCompleteCheckNetworkIdle Alternative way to choose when to end your test that works in Chrome and Firefox. Uses CDP or WebDriver BiDi to look at network traffic instead of running JavaScript in the browser to know when to end the test. By default this will wait for 5 seconds of inactivity in the network log (no requests/responses in 5 seconds). Use --timeouts.networkIdle to change the 5 seconds. The test will end after 2 minutes if there is still activity on the network. You can change that timeout using --timeouts.pageCompleteCheck [boolean] [default: false]
|
||||
--pageCompleteCheckPollTimeout The time in ms to wait for running the page complete check the next time. [number] [default: 1500]
|
||||
--pageCompleteCheckStartWait The time in ms to wait for running the page complete check for the first time. Use this when you have a pageLoadStrategy set to none [number] [default: 5000]
|
||||
--pageLoadStrategy Set the strategy for waiting on document readiness after a navigation event. Once the strategy is ready, your pageCompleteCheck will start running. [string] [choices: "eager", "none", "normal"] [default: "none"]
|
||||
--timeToSettle Extra time added for the browser to settle before starting to test a URL. This delay happens after the browser was opened and before the navigation to the URL [number] [default: 0]
|
||||
--webdriverPageload Use webdriver.get to initialize the page load instead of window.location. [boolean] [default: false]
|
||||
--cacheClearRaw Use internal browser functionality to clear browser cache between runs instead of only using Selenium. [boolean] [default: false]
|
||||
--flushDNS Flush DNS between runs, works on Mac OS and Linux. Your user needs sudo rights to be able to flush the DNS. [boolean] [default: false]
|
||||
--spa Convenient parameter to use if you test a SPA application: will automatically wait for X seconds after last network activity and use hash in file names. Read more: https://www.sitespeed.io/documentation/sitespeed.io/spa/ [boolean] [default: false]
|
||||
--browserRestartTries If the browser fails to start, you can retry to start it this amount of times. [number] [default: 3]
|
||||
|
||||
proxy
|
||||
--proxy.pac Proxy auto-configuration (URL) [string]
|
||||
--proxy.ftp Ftp proxy (host:port) [string]
|
||||
|
|
@ -118,75 +159,64 @@ connectivity
|
|||
--connectivity.rtt, --connectivity.latency This option requires --connectivity.profile be set to "custom".
|
||||
--connectivity.variance This option requires --connectivity.engine be set to "throttle". It will add a variance to the rtt between each run. --connectivity.variance 2 means it will run with a random variance of max 2% between runs.
|
||||
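--connectivity.variance can be read as applying a bounded random factor to the configured rtt before each run. A sketch under that reading (the exact formula used by Browsertime/Throttle may differ):

```javascript
// Add a random variance of at most `variancePercent` percent to the
// configured round-trip time, as --connectivity.variance describes.
function rttWithVariance(rtt, variancePercent) {
  const factor = 1 + (Math.random() * 2 - 1) * (variancePercent / 100);
  return Math.round(rtt * factor);
}

// --connectivity.rtt 100 --connectivity.variance 2 => a value in [98, 102]
console.log(rttWithVariance(100, 2));
```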
--connectivity.alias Give your connectivity profile a custom name
|
||||
--connectivity.engine The engine for connectivity. Throttle works on Mac and tc based Linux. Use external if you set the connectivity outside of Browsertime. The best way to do this is described in https://github.com/sitespeedio/browsertime#connectivity. [choices: "external", "throttle", "tsproxy"] [default: "external"]
|
||||
--connectivity.engine The engine for connectivity. Throttle works on Mac and tc based Linux. For mobile you can use Humble if you have a Humble setup. Use external if you set the connectivity outside of Browsertime. The best way to do this is described in https://github.com/sitespeedio/browsertime#connectivity. [choices: "external", "throttle", "humble"] [default: "external"]
|
||||
--connectivity.throttle.localhost Add latency/delay on localhost. Perfect for testing with WebPageReplay [boolean] [default: false]
|
||||
--connectivity.humble.url The URL of your Humble instance. For example http://raspberrypi:3000 [string]
|
||||
|
||||
debug
|
||||
--debug Run Browsertime in debug mode. [boolean] [default: false]
|
||||
|
||||
Options:
|
||||
--cpu Easy way to enable both chrome.timeline for Chrome and geckoProfile for Firefox [boolean]
|
||||
--androidPower Enables Android power testing. Charging must be disabled for this (you have to disable charging yourself; how depends on the phone model). [boolean]
|
||||
--video Record a video and store it. Set it to false to remove the video that is created by turning on visualMetrics. To fully turn off video recording, set both video and visualMetrics to false. Requires FFmpeg to be installed. [boolean]
|
||||
--visualMetrics Collect Visual Metrics like First Visual Change, SpeedIndex, Perceptual Speed Index and Last Visual Change. Requires FFMpeg and Python dependencies [boolean]
|
||||
--visualElements, --visuaElements Collect Visual Metrics from elements. Works only with --visualMetrics turned on. By default you will get visual metrics from the largest image within the viewport and the largest h1. You can also configure it to pick up your own defined elements with --scriptInput.visualElements [boolean]
|
||||
--visualMetricsPerceptual Collect Perceptual Speed Index when you run --visualMetrics. [boolean]
|
||||
--visualMetricsContentful Collect Contentful Speed Index when you run --visualMetrics. [boolean]
|
||||
--scriptInput.visualElements Include specific elements in visual elements. Give the element a name and select it with document.body.querySelector. Use like this: --scriptInput.visualElements name:domSelector see https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Selectors. Add multiple instances to measure multiple elements. Visual Metrics will use these elements and calculate when they are visible and fully rendered.
|
||||
--scriptInput.longTask, --minLongTaskLength Set the minimum length of a task to be categorised as a CPU Long Task. It can never be smaller than 50. The value is in ms and only works in Chromium browsers at the moment. [number] [default: 50]
|
||||
-b, --browser Specify browser. Safari only works on OS X/iOS. Edge only works on operating systems that support Edge. [choices: "chrome", "firefox", "edge", "safari"] [default: "chrome"]
|
||||
--android Short key to use Android. Defaults to use com.android.chrome unless --browser is specified. [boolean] [default: false]
|
||||
--androidRooted If your phone is rooted you can use this to set it up following Mozillas best practice for stable metrics. [boolean] [default: false]
|
||||
--androidBatteryTemperatureLimit Does the battery temperature need to be below a specific limit before we start the test?
|
||||
--androidBatteryTemperatureWaitTimeInSeconds How long to wait (in seconds) before the next try if the battery temperature limit is not met [default: 120]
|
||||
--androidBatteryTemperatureReboot If your phone does not reach the minimum temperature after the wait time, reboot the phone. [boolean] [default: false]
|
||||
--androidPretestPowerPress Press the power button on the phone before a test starts. [boolean] [default: false]
|
||||
--androidVerifyNetwork Before a test starts, verify that the device has an Internet connection by pinging 8.8.8.8 (or a configurable domain with --androidPingAddress) [boolean] [default: false]
|
||||
--processStartTime Capture browser process start time (in milliseconds). Android only for now. [boolean] [default: false]
|
||||
--pageCompleteCheck Supply JavaScript (inline or a JavaScript file) that decides when the browser has finished loading the page and metrics collection can start. The JavaScript snippet is queried repeatedly to see if the page has completed loading (indicated by the script returning true). Use it to fetch timings that happen after loadEventEnd. By default the test ends 2 seconds after loadEventEnd. Also check out --pageCompleteCheckInactivity and --pageCompleteCheckPollTimeout
|
||||
--pageCompleteWaitTime How long you want to wait for your pageCompleteCheck to finish, after it has been signaled to close. Extra parameter passed on to your pageCompleteCheck. [default: 8000]
|
||||
--pageCompleteCheckInactivity Alternative way to choose when to end your test. This will wait for 2 seconds of inactivity that happens after loadEventEnd. [boolean] [default: false]
|
||||
--pageCompleteCheckPollTimeout The time in ms to wait for running the page complete check the next time. [number] [default: 1500]
|
||||
--pageCompleteCheckStartWait The time in ms to wait for running the page complete check for the first time. Use this when you have a pageLoadStrategy set to none [number] [default: 5000]
|
||||
--pageLoadStrategy Set the strategy for waiting on document readiness after a navigation event. Once the strategy is ready, your pageCompleteCheck will start running. [string] [choices: "eager", "none", "normal"] [default: "none"]
|
||||
-n, --iterations Number of times to test the url (restarting the browser between each test) [number] [default: 3]
|
||||
--prettyPrint Enable to print json/har with spaces and indentation. Larger files, but easier on the eye. [boolean] [default: false]
|
||||
--delay Delay between runs, in milliseconds [number] [default: 0]
|
||||
--timeToSettle Extra time added for the browser to settle before starting to test a URL. This delay happens after the browser was opened and before the navigation to the URL [number] [default: 0]
|
||||
--webdriverPageload Use webdriver.get to initialize the page load instead of window.location. [boolean] [default: false]
|
||||
-r, --requestheader Request header that will be added to the request. Add multiple instances to add multiple request headers. Works for Firefox and Chrome. Use the following format key:value
|
||||
--cookie Cookie that will be added to the request. Add multiple instances to add multiple request cookies. Works for Firefox and Chrome. Use the following format cookieName=cookieValue
|
||||
--injectJs Inject JavaScript into the current page at document_start. Works for Firefox and Chrome. More info: https://developer.mozilla.org/docs/Mozilla/Add-ons/WebExtensions/API/contentScripts
|
||||
--block Domain to block. Add multiple instances to add multiple domains that will be blocked. If you use Chrome you can also use --blockDomainsExcept (that is more performant). Works for Firefox and Chrome.
|
||||
--percentiles The percentile values within the data browsertime will calculate and report. [array] [default: [0,10,90,99,100]]
|
||||
--decimals The decimal points browsertime statistics round to. [number] [default: 0]
|
||||
--iqr Use IQR (Inter Quartile Range) filtering, which filters data based on the spread of the data. See https://en.wikipedia.org/wiki/Interquartile_range. In some cases, IQR filtering may not filter out anything. This can happen if the acceptable range is wider than the bounds of your dataset. [boolean] [default: false]
|
||||
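IQR filtering as described above can be sketched like this (using linear-interpolation quartiles; Browsertime's exact quartile method may differ):

```javascript
// Drop values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]. If that range is wider
// than the data, nothing is filtered. Returns the surviving values sorted.
function iqrFilter(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const quantile = p => {
    const pos = (sorted.length - 1) * p;
    const base = Math.floor(pos);
    const rest = pos - base;
    return sorted[base] + rest * ((sorted[base + 1] ?? sorted[base]) - sorted[base]);
  };
  const q1 = quantile(0.25);
  const q3 = quantile(0.75);
  const iqr = q3 - q1;
  return sorted.filter(v => v >= q1 - 1.5 * iqr && v <= q3 + 1.5 * iqr);
}

console.log(iqrFilter([100, 102, 101, 99, 500])); // [ 99, 100, 101, 102 ]
```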
--cacheClearRaw Use internal browser functionality to clear browser cache between runs instead of only using Selenium. [boolean] [default: false]
|
||||
--basicAuth Use it if your server is behind Basic Auth. Format: username@password (Only Chrome and Firefox at the moment).
|
||||
--preScript, --setUp Selenium script(s) to run before you test your URL/script. They will run outside of the analyse phase. Note that --preScript can be passed multiple times.
|
||||
--postScript, --tearDown Selenium script(s) to run after you test your URL. They will run outside of the analyse phase. Note that --postScript can be passed multiple times.
|
||||
--script Add custom Javascript to run after the page has finished loading to collect metrics. If a single js file is specified, it will be included in the category named "custom" in the output json. Pass a folder to include all .js scripts in the folder, and have the folder name be the category. Note that --script can be passed multiple times.
|
||||
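A minimal custom metric script of the kind --script accepts might look like the sketch below. The metric names are illustrative; in the browser, the script body would be an IIFE run against the real window:

```javascript
// Collect two simple metrics from the Resource Timing API. Saved as e.g.
// myMetrics.js and passed with --script myMetrics.js, the returned object
// ends up under the "custom" category in the output JSON.
function collectMetrics(win) {
  const resources = win.performance.getEntriesByType('resource');
  return {
    resourceCount: resources.length,
    totalTransferSize: resources.reduce((sum, r) => sum + (r.transferSize || 0), 0)
  };
}

// Illustrative mock so the sketch is runnable outside a browser.
const mockWindow = {
  performance: {
    getEntriesByType: () => [{ transferSize: 1000 }, { transferSize: 500 }]
  }
};
console.log(collectMetrics(mockWindow)); // { resourceCount: 2, totalTransferSize: 1500 }
```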
--userAgent Override user agent
|
||||
-q, --silent Only output info in the logs, not to the console. Enter twice to suppress summary line. [count]
|
||||
-o, --output Specify file name for Browsertime data (ex: 'browsertime'). Unless specified, file will be named browsertime.json
|
||||
--har Specify file name for .har file (ex: 'browsertime'). Unless specified, file will be named browsertime.har
|
||||
--skipHar Pass --skipHar to not collect a HAR file. [boolean]
|
||||
--gzipHar Pass --gzipHar to gzip the HAR file [boolean]
|
||||
--config Path to JSON config file. You can also use a .browsertime.json file that will automatically be found by Browsertime using find-up.
|
||||
--viewPort Size of browser window WIDTHxHEIGHT or "maximize". Note that "maximize" is ignored for xvfb.
|
||||
--resultDir Set result directory for the files produced by Browsertime
|
||||
--useSameDir Store all files in the same structure and do not use the path structure released in 4.0. Use this only if you are testing ONE URL.
|
||||
--xvfb Start xvfb before the browser is started [boolean] [default: false]
|
||||
--xvfbParams.display The display used for xvfb [default: 99]
|
||||
--tcpdump Collect a tcpdump for each tested URL. [boolean] [default: false]
|
||||
--tcpdumpPacketBuffered Use together with --tcpdump to save each packet directly to the file, instead of buffering. [boolean] [default: false]
|
||||
--urlAlias Use an alias for the URL. You need to pass the same number of aliases as URLs. The alias is used as the name of the URL and for the file path. Pass multiple --urlAlias for multiple aliases/URLs. You can also add an alias directly in your script. [string]
|
||||
--preURL A URL that the browser will access before the URL that you want to analyze. Use it to fill the browser cache.
|
||||
--preURLDelay Delay between preURL and the URL you want to test (in milliseconds) [default: 1500]
|
||||
--userTimingWhitelist All userTimings are captured by default; this option takes a regex that whitelists which userTimings to capture in the results.
|
||||
--headless Run the browser in headless mode. Works for Firefox and Chrome. [boolean] [default: false]
|
||||
--gnirehtet Start gnirehtet and reverse tether the traffic from your Android phone. [boolean] [default: false]
|
||||
--extension Path to a WebExtension to be installed in the browser. Note that --extension can be passed multiple times.
|
||||
--spa Convenient parameter to use if you test a SPA application: will automatically wait for X seconds after last network activity and use hash in file names. Read more: https://www.sitespeed.io/documentation/sitespeed.io/spa/ [boolean] [default: false]
|
||||
--browserRestartTries If the browser fails to start, you can retry to start it this amount of times. [number] [default: 3]
|
||||
--preWarmServer Make pre-test requests to the URL(s) that you want to test, without measuring them. Do that to make sure your web server is ready to serve. The pre-test requests are made with another browser instance that is closed after pre-testing is done. [boolean] [default: false]
|
||||
--preWarmServerWaitTime The wait time before you start the real testing after your pre-cache request. [number] [default: 5000]
|
||||
-h, --help Show help [boolean]
|
||||
-V, --version Show version number [boolean]
|
||||
--cpu Easy way to enable both chrome.timeline for Chrome and geckoProfile for Firefox [boolean]
|
||||
--enableProfileRun Make one extra run that collects the profiling trace log (no other metrics is collected). For Chrome it will collect the timeline trace, for Firefox it will get the Geckoprofiler trace. This means you do not need to get the trace for all runs and can skip the overhead it produces. [boolean]
|
||||
--video Record a video and store it. Set it to false to remove the video that is created by turning on visualMetrics. To fully turn off video recording, set both video and visualMetrics to false. Requires FFmpeg to be installed. [boolean]
|
||||
--visualMetrics Collect Visual Metrics like First Visual Change, SpeedIndex, Perceptual Speed Index and Last Visual Change. Requires FFMpeg and Python dependencies [boolean]
|
||||
--visualElements, --visuaElements Collect Visual Metrics from elements. Works only with --visualMetrics turned on. By default you will get visual metrics from the largest image within the viewport and the largest h1. You can also configure it to pick up your own defined elements with --scriptInput.visualElements [boolean]
|
||||
--visualMetricsPerceptual Collect Perceptual Speed Index when you run --visualMetrics. [boolean]
|
||||
--visualMetricsContentful Collect Contentful Speed Index when you run --visualMetrics. [boolean]
|
||||
--visualMetricsPortable Use the portable visual-metrics processing script (no ImageMagick dependencies). [boolean] [default: true]
|
||||
--scriptInput.visualElements Include specific elements in visual elements. Give the element a name and select it with document.body.querySelector. Use like this: --scriptInput.visualElements name:domSelector see https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Selectors. Add multiple instances to measure multiple elements. Visual Metrics will use these elements and calculate when they are visible and fully rendered.
|
||||
--scriptInput.longTask, --minLongTaskLength Set the minimum length of a task to be categorised as a CPU Long Task. It can never be smaller than 50. The value is in ms and only works in Chromium browsers at the moment. [number] [default: 50]
|
||||
-b, --browser Specify browser. Safari only works on OS X/iOS. Edge only works on operating systems that support Edge. [choices: "chrome", "firefox", "edge", "safari"] [default: "chrome"]
|
||||
--android Short key to use Android. Defaults to use com.android.chrome unless --browser is specified. [boolean] [default: false]
|
||||
--processStartTime Capture browser process start time (in milliseconds). Android only for now. [boolean] [default: false]
|
||||
-n, --iterations Number of times to test the url (restarting the browser between each test) [number] [default: 3]
|
||||
--prettyPrint Enable to print json/har with spaces and indentation. Larger files, but easier on the eye. [boolean] [default: false]
|
||||
--delay Delay between runs, in milliseconds [number] [default: 0]
|
||||
-r, --requestheader Request header that will be added to the request. Add multiple instances to add multiple request headers. Works for Firefox and Chrome. Use the following format key:value
|
||||
--cookie Cookie that will be added to the request. Add multiple instances to add multiple request cookies. Works for Firefox and Chrome. Use the following format cookieName=cookieValue
|
||||
--injectJs Inject JavaScript into the current page at document_start. Works for Firefox and Chrome. More info: https://developer.mozilla.org/docs/Mozilla/Add-ons/WebExtensions/API/contentScripts
|
||||
--block Domain, URL, or URL pattern to block. If you use Chrome you can also use --blockDomainsExcept (which is more performant). Works in Chrome/Edge. For Firefox you can only block domains.
|
||||
--percentiles The percentile values within the data browsertime will calculate and report. This argument uses Yargs arrays; to set them correctly it is recommended to use a configuration file instead. [array] [default: [0,10,90,99,100]]
|
||||
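Since Yargs array options are easy to get wrong on the command line, a config file is the recommended way to set them. A sketch of what a .browsertime.json could contain (option names mirror the CLI flags; the values are examples only):

```json
{
  "iterations": 5,
  "browser": "firefox",
  "percentiles": [0, 50, 90, 99, 100],
  "decimals": 2
}
```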
--decimals The decimal points browsertime statistics round to. [number] [default: 0]
|
||||
--iqr Use IQR (Inter Quartile Range) filtering, which filters data based on the spread of the data. See https://en.wikipedia.org/wiki/Interquartile_range. In some cases, IQR filtering may not filter out anything. This can happen if the acceptable range is wider than the bounds of your dataset. [boolean] [default: false]
|
||||
--basicAuth Use it if your server is behind Basic Auth. Format: username@password (Only Chrome and Firefox at the moment).
|
||||
--preScript, --setUp Selenium script(s) to run before you test your URL/script. They will run outside of the analyse phase. Note that --preScript can be passed multiple times.
|
||||
--postScript, --tearDown Selenium script(s) to run after you test your URL. They will run outside of the analyse phase. Note that --postScript can be passed multiple times.
|
||||
--script Add custom Javascript to run after the page has finished loading to collect metrics. If a single js file is specified, it will be included in the category named "custom" in the output json. Pass a folder to include all .js scripts in the folder, and have the folder name be the category. Note that --script can be passed multiple times.
|
||||
--userAgent Override user agent
|
||||
--appendToUserAgent Append a String to the user agent. Works in Chrome/Edge and Firefox.
|
||||
-q, --silent Only output info in the logs, not to the console. Enter twice to suppress summary line. [count]
|
||||
-o, --output Specify file name for Browsertime data (ex: 'browsertime'). Unless specified, file will be named browsertime.json
|
||||
--har Specify file name for .har file (ex: 'browsertime'). Unless specified, file will be named browsertime.har
|
||||
--skipHar Pass --skipHar to not collect a HAR file. [boolean]
|
||||
--gzipHar Pass --gzipHar to gzip the HAR file [boolean]
|
||||
--config Path to JSON config file. You can also use a .browsertime.json file that will automatically be found by Browsertime using find-up.
|
||||
--viewPort Size of browser window WIDTHxHEIGHT or "maximize". Note that "maximize" is ignored for xvfb.
|
||||
--resultDir Set result directory for the files produced by Browsertime
|
||||
--useSameDir Store all files in the same structure and do not use the path structure released in 4.0. Use this only if you are testing ONE URL.
|
||||
--xvfb Start xvfb before the browser is started [boolean] [default: false]
|
||||
--xvfbParams.display The display used for xvfb [default: 99]
|
||||
--tcpdump Collect a tcpdump for each tested URL. [boolean] [default: false]
|
||||
--tcpdumpPacketBuffered Use together with --tcpdump to save each packet directly to the file, instead of buffering. [boolean] [default: false]
|
||||
--urlAlias Use an alias for the URL. You need to pass the same number of aliases as URLs. The alias is used as the name of the URL and for the file path. Pass multiple --urlAlias for multiple aliases/URLs. You can also add an alias directly in your script. [string]
|
||||
--preURL, --warmLoad A URL that the browser will access before the URL that you want to analyze. Use it to fill the browser cache.
|
||||
--preURLDelay, --warmLoadDealy Delay between preURL and the URL you want to test (in milliseconds) [default: 1500]
|
||||
--userTimingAllowList All userTimings are captured by default; this option takes a regex that selects which userTimings to capture in the results.
|
||||
--headless Run the browser in headless mode. Works for Firefox and Chrome. [boolean] [default: false]
|
||||
--extension Path to a WebExtension to be installed in the browser. Note that --extension can be passed multiple times.
|
||||
--cjs Load scripting files that ends with .js as common js. Default (false) loads files as esmodules. [boolean] [default: false]
|
||||
--preWarmServer Make pre-test requests to the URL(s) that you want to test, without measuring them. Do that to make sure your web server is ready to serve. The pre-test requests are made with another browser instance that is closed after pre-testing is done. [boolean] [default: false]
|
||||
--preWarmServerWaitTime The wait time before you start the real testing after your pre-cache request. [number] [default: 5000]
|
||||
-h, --help Show help [boolean]
|
||||
-V, --version Show version number [boolean]
|
||||
|
|
|
|||
|
|
@ -23,7 +23,7 @@ twitterdescription:
|
|||
Use our Docker image (with Chrome, Firefox, XVFB and the dependencies needed to record a video):
|
||||
|
||||
~~~bash
|
||||
docker run --rm -v "$(pwd)":/browsertime-results sitespeedio/browsertime:{% include version/browsertime.txt %} --video --visualMetrics https://www.sitespeed.io/
|
||||
docker run --rm -v "$(pwd)":/browsertime sitespeedio/browsertime:{% include version/browsertime.txt %} --video --visualMetrics https://www.sitespeed.io/
|
||||
~~~
|
||||
|
||||
Or using NodeJS:
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
layout: default
|
||||
title: Documentation Browsertime 12
|
||||
title: Documentation Browsertime 20
|
||||
description: Read about all you can do with Browsertime.
|
||||
keywords: tools, documentation, web performance
|
||||
nav: documentation
|
||||
|
|
@ -9,7 +9,7 @@ image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
|
|||
twitterdescription: Documentation for Browsertime.
|
||||
---
|
||||
|
||||
# Documentation v12
|
||||
# Documentation v20
|
||||
|
||||
<img src="{{site.baseurl}}/img/logos/browsertime.png" class="pull-right img-big" alt="Browsertime logo" width="200" height="175">
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,40 @@
|
|||
---
|
||||
layout: default
|
||||
title: Humble - Raspberry Pi WiFi network link conditioner
|
||||
description: Simulate slow network connections on your WiFi network.
|
||||
keywords: throttle, documentation, web performance
|
||||
author: Peter Hedenskog
|
||||
nav: documentation
|
||||
image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
|
||||
twitterdescription: Simulate slow network connections on your WiFi network.
|
||||
---
|
||||
|
||||
# Humble
|
||||
{:.no_toc}
|
||||
|
||||
* Let's place the TOC here
|
||||
{:toc}
|
||||
|
||||
## Introduction
|
||||
|
||||
We are super happy to introduce *Humble*, the Raspberry Pi WiFi network link conditioner! It creates a new WiFi network that you use for your device. You then access a web page where you set the network speed (3G/4G etc.) for that WiFi network.
|
||||
|
||||
Humble uses [Throttle](https://github.com/sitespeedio/throttle) and [Throttle frontend](https://github.com/sitespeedio/throttle-frontend) and a configured Raspberry Pi 4. And it's all Open Source and you can use it for free.
|
||||
|
||||
## What do you need?
|
||||
To setup your own instance of Humble you need:
|
||||
1. A Raspberry Pi 4 with a wired connection to your router.
|
||||
2. A SD card (at least 8 GB)
|
||||
3. A computer with Raspberry Pi Imager (that you can download from [https://www.raspberrypi.com/software/](https://www.raspberrypi.com/software/)).
|
||||
|
||||
Yes that is all!
|
||||
|
||||
Your setup will look like this:
|
||||

|
||||
{: .img-thumbnail}
|
||||
|
||||
And you switch connection speed on the WiFi using a web page on the Raspberry Pi:
|
||||

|
||||
{: .img-thumbnail}
|
||||
|
||||
Read all about how to use Humble at GitHub: [https://github.com/sitespeedio/humble](https://github.com/sitespeedio/humble)
|
||||
|
|
@@ -1,7 +1,7 @@
---
layout: default
title: Documentation for all sitespeed.io tools.
-description: Here's the documentation of how to use all the sitespeed.io tools. Use latest LTS release 12.x of NodeJS or Docker containers to get them up and running.
+description: Here's the documentation of how to use all the sitespeed.io tools. Use the latest LTS release of NodeJS or Docker containers to get them up and running.
keywords: tools, documentation, web performance, version, nodejs.
nav: documentation
image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
@@ -9,12 +9,13 @@ twitterdescription: Documentation for the sitespeed.io.
---
# Documentation

-Use Docker or the latest LTS release (12.x) of NodeJS to run the sitespeed.io tools.
+Use Docker or the latest [LTS release of NodeJS](https://nodejs.org/) to run the sitespeed.io tools.

-* [sitespeed.io]({{site.baseurl}}/documentation/sitespeed.io/) - continuously monitor your web sites web performance (including the Coach, Browsertime, PageXray and the rest).
-* [Coach]({{site.baseurl}}/documentation/coach/) - get help from the Coach how you can make your web page faster.
-* [Browsertime]({{site.baseurl}}/documentation/browsertime/) - collect metrics using JavaScript/video/HAR from Chrome/Firefox.
-* [Compare]({{site.baseurl}}/documentation/compare/) - compare two HAR files with each other and find regressions.
-* [PageXray]({{site.baseurl}}/documentation/pagexray/) - convert HAR files to a more usable format.
-* [Throttle]({{site.baseurl}}/documentation/throttle/) - simulate slow network connections on Linux and Mac OS X.
-* [Chrome-HAR]({{site.baseurl}}/documentation/chrome-har/) - create Chrome HAR files based on events from the Chrome Debugging Protocol.
+* [Browsertime]({{site.baseurl}}/documentation/browsertime/) - collect metrics using JavaScript/video/HAR from Chrome/Firefox.
+* [Chrome-HAR]({{site.baseurl}}/documentation/chrome-har/) - create Chrome HAR files based on events from the Chrome Debugging Protocol.
+* [Coach]({{site.baseurl}}/documentation/coach/) - get help from the Coach on how to make your web page faster.
+* [Compare]({{site.baseurl}}/documentation/compare/) - compare two HAR files with each other and find regressions.
+* [Humble]({{site.baseurl}}/documentation/humble/) - Raspberry Pi WiFi network link conditioner.
+* [PageXray]({{site.baseurl}}/documentation/pagexray/) - convert HAR files to a more usable format.
+* [sitespeed.io]({{site.baseurl}}/documentation/sitespeed.io/) - continuously monitor your web site's web performance (including the Coach, Browsertime, PageXray and the rest).
+* [Throttle]({{site.baseurl}}/documentation/throttle/) - simulate slow network connections on Linux and Mac OS X.
@@ -36,7 +36,7 @@ Here we test three URLs, and if the change is larger than 3% on all three URLs,

In the left part of the image you see a horizontal red line; that is when an alert is fired (sending an email/posting to Slack, PagerDuty etc). The green line is when the numbers are back to normal. In the right graph you can see the change in numbers.


{:loading="lazy"}
{: .img-thumbnail}

To the left we have changes in percentage. These are the numbers where we add alerts. In this case we first create a query and take the moving median one day back (this is the number we will use to compare against) and then we take the moving median of the latest 5 hours. Depending on how steady your metrics are, you can do this differently. If you run in a stable environment with a proxy you don't need to take the median of X hours; instead you can use the exact run.
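The alert logic above can be sketched as plain code. This is a hypothetical illustration of the math (Grafana's query functions do all of this for you); the function names are ours, not a sitespeed.io API:

```javascript
// Illustrative sketch of the alert math: compare the median of a
// baseline window against the median of a recent window, and fire
// only when every URL changed more than the threshold (the AND query).
// These helpers are hypothetical, not a sitespeed.io API.
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

function relativeChange(baselineValues, recentValues) {
  const baseline = median(baselineValues);
  return (median(recentValues) - baseline) / baseline;
}

// AND query: all URLs must change more than the threshold (3% here).
function shouldAlert(changesPerUrl, threshold = 0.03) {
  return changesPerUrl.every(change => Math.abs(change) > threshold);
}
```

Switching `every` to `some` gives the OR behaviour, where any single URL changing is enough to fire the alert.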

@@ -45,19 +45,19 @@ If you have a really unstable environment you can instead have a longer time spa

The queries for the three URLs look like this:


{:loading="lazy"}
{: .img-thumbnail}

And change the axes unit to show percent: 0.0-1.0.


{:loading="lazy"}
{: .img-thumbnail-center}

#### The alert
After that you need to create the alert. Take the median, choose a timespan and the percentage at which you want to alert. In our example we use AND queries (all URLs must change) but if you are interested in specific URLs changing, you can also use OR alert queries.


{:loading="lazy"}
{: .img-thumbnail-center}

You can see that we run the alerts once an hour. How often depends on how often you do releases or your content changes. You want to make sure that you catch the alerts within at least a couple of hours.
@@ -66,18 +66,18 @@ You see that we run the alerts once an hour. It depends on how often you do rele
### Create metrics queries
The other way is to create alerts that fire when a threshold is met. In this example we want to alert if the First Visual Change increased by 20 ms for three URLs. The graph looks like this:


{:loading="lazy"}
{: .img-thumbnail-center}

Setting up the graph is more straightforward than using percentages. You get the metric you want, diff it against the same metric X amount of time back, and draw the difference.


{:loading="lazy"}
{: .img-thumbnail}

#### The alert
Then you set up the alert. In this example we run the alert query once every hour and it needs to fire twice within 2 hours to actually send an alert. If we then make sure we run our tests at least every hour, we need two runs with values higher than the limit to fire the alert.


{:loading="lazy"}
{: .img-thumbnail-center}
@@ -87,12 +87,12 @@ The history graph is pretty straight forward. You list the metrics you want and

We take the moving median but you can try out what works best for you.


{:loading="lazy"}
{: .img-thumbnail}

And then we make sure we show the last 7 days.

{:loading="lazy"}
{: .img-thumbnail}

### More examples
@@ -100,12 +100,12 @@ And then we make sure we show the last 7 days.
#### Alert on response size
You can also create alerts that fire when a response type's size increases. Here we graph the JavaScript and CSS size.


{:loading="lazy"}
{: .img-thumbnail-center}

And the queries look like this:


{:loading="lazy"}
{: .img-thumbnail}

This is handy if you are not in full control of all the code that is pushed.
@@ -114,12 +114,12 @@ This is handy if you are not in full control of all the code that is pushed.

We know it shouldn't happen but sometimes your page references a 404 or a 50x. Let us alert on that!


{:loading="lazy"}
{: .img-thumbnail-center}

And the query looks like this (modify the excludes so that it matches what you need):


{:loading="lazy"}
{: .img-thumbnail}

#### Alert on console.error
@@ -127,37 +127,37 @@ If you use Chrome in your testing you can also collect console log data. And the

Set up your query something like this:


{:loading="lazy"}
{: .img-thumbnail}

And then your actual alert. Make sure to set *If no data or all values are null* to *No data* or *Ok* so you don't fire alerts if you don't get any errors :)


{:loading="lazy"}
{: .img-thumbnail-center}

#### Alert on too low privacy

One of the most important metrics you can get from the Coach is the privacy metric that helps you see how well you take care of your users and if you share their private information with other companies/web sites.


{:loading="lazy"}
{: .img-thumbnail-center}

To get the metric, you query the Coach.


{:loading="lazy"}
{: .img-thumbnail}

And then when you set up the alert, make sure you alert on values *below* your current value.


{:loading="lazy"}
{: .img-thumbnail}

## Summary

You can do the same with all the metrics you want. On mobile Wikipedia metrics are more stable and the First Visual Change looks like this:


{:loading="lazy"}
{: .img-thumbnail}

-If you have any questions about the alerts, feel free to [create an issue at GitHub](https://github.com/sitespeedio/sitespeed.io/issues/new?title=Alerts) or hit us on [Slack](https://sitespeedio.herokuapp.com).
+If you have any questions about the alerts, feel free to [create an issue at GitHub](https://github.com/sitespeedio/sitespeed.io/issues/new?title=Alerts) or hit us on [Slack](https://join.slack.com/t/sitespeedio/shared_invite/zt-296jzr7qs-d6DId2KpEnMPJSQ8_R~WFw).
@@ -33,7 +33,9 @@ That will run [axe-core](https://github.com/dequelabs/axe-core) and generate a n

## Configure Axe
-You can [configure Axe](https://github.com/dequelabs/axe-core/blob/develop/doc/API.md#api-name-axeconfigure) which rules/checks that will be used. In the *axe* namespace we pass on all parameters to the configuration object of Axe. `--axe.checks` will result in a configuration object like:
+You can [configure Axe](https://github.com/dequelabs/axe-core/blob/develop/doc/API.md#api-name-axeconfigure) to control which rules/checks will be used.
+
+You need to read the [official Axe documentation](https://github.com/dequelabs/axe-core/blob/develop/doc/API.md#api-name-axeconfigure) to get a feeling for what you can configure with Axe. In the *axe* namespace we pass on all parameters to the configuration object of Axe. `--axe.checks` will result in a configuration object like:

```json
checks: {
@@ -41,9 +43,23 @@
}
```

-If you wanna avoid having over complicated cli-params you should use the [configuration as JSON feature](/documentation/sitespeed.io/configuration/#configuration-as-json).
-That way you can configure all things you can configure in the [Axe configuration](https://github.com/dequelabs/axe-core/blob/develop/doc/API.md#api-name-axeconfigure).
+However, you probably just want to configure [run options](https://github.com/dequelabs/axe-core/blob/develop/doc/API.md#api-name-axerun); you can do that by adding a *run* prefix. Say for example you only want to test *wcag2aa* compliance; you can do that with the *runOnly* configuration in Axe, using a configuration like:
+
+```json
+{
+  "axe": {
+    "run": {
+      "runOnly": ["wcag2aa"]
+    }
+  }
+}
+```
+
+If you want to avoid overly complicated CLI parameters you should use the [configuration as JSON feature](/documentation/sitespeed.io/configuration/#configuration-as-json).

## How it works behind the scenes
The Axe tests are run as a [postScript](/documentation/sitespeed.io/prepostscript/).
@@ -16,7 +16,7 @@ twitterdescription:
* Lets place the TOC here
{:toc}

-Here we keep questions that are frequently asked at [Slack](https://sitespeedio.herokuapp.com/) or at [GitHub](https://github.com/sitespeedio/sitespeed.io/issues/new).
+Here we keep questions that are frequently asked at [Slack](https://join.slack.com/t/sitespeedio/shared_invite/zt-296jzr7qs-d6DId2KpEnMPJSQ8_R~WFw) or at [GitHub](https://github.com/sitespeedio/sitespeed.io/issues/new).

## Running tests
Read this before you start to collect metrics.
@@ -57,7 +57,7 @@ http://www.yoursite.com/my/really/important/page/ Important_Page
http://www.yoursite.com/where/we/are/ We_are
~~~

-And then you give feed the file to sitespeed.io:
+And then you feed the file to sitespeed.io:

~~~bash
docker run --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} urls.txt
@@ -213,3 +213,22 @@ And a couple of generic things that will make your metrics differ:
* **Connectivity matters** - You need to set the connectivity.
* **CPU matters** - Running the same tests with the same tool on different machines will give different results.
* **Your page matters** - It could happen that your page has different sweet spots on connectivity (that make the page render faster), so even a small change can make the page much slower (we have that scenario on Wikipedia).

## Difference in metrics between sitespeed.io and https://pagespeed.web.dev

When analyzing web performance data, it's important to understand the source and context of the metrics. The data from the Chrome User Experience Report represents metrics collected by Chrome from users who *consented* to share their browsing data. This report reflects the 75th percentile of user experiences, meaning that for the given metric, 75% of the sampled users experienced that performance level or better. For instance, in the example below, 75% of users had a Largest Contentful Paint (LCP) faster than 1.4 seconds, across various devices and network conditions.


{: .img-thumbnail}

Is this data useful? Absolutely, especially if you don't have your own real user monitoring (RUM) metrics. However, it's important to note that this data is limited to Chrome users who agreed to data collection, potentially skewing the metrics if your audience uses a broader range of browsers like Safari, Edge, or Firefox.

To optimize your sitespeed.io tests, use these insights to mirror the experiences of the 75th percentile of your user base. For example, you can adjust network throttling in sitespeed.io to match the Time to First Byte (TTFB) reported in the Chrome data. Then, compare First Contentful Paint (FCP) and LCP metrics. If they don't align, consider adjusting CPU throttling, or better yet, test on actual mobile devices. More information on CPU benchmarking for testing, such as with Wikipedia, can be found [here](https://www.sitespeed.io/documentation/sitespeed.io/cpu-benchmark/).

sitespeed.io even offers a [Chrome User Experience Report plugin](https://www.sitespeed.io/documentation/sitespeed.io/crux/) that lets you directly pull this data from Google for comparison with your sitespeed.io results.

In summary, consider this approach:

1. If you have your own RUM metrics, use them to calibrate your sitespeed.io tests.
2. If not, leverage the Chrome User Experience data, keeping in mind its potential limitations, to guide your testing and optimization efforts.
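To make the 75th percentile concrete, here is a small nearest-rank sketch. Note that CrUX itself aggregates histogram data rather than raw samples, so treat this purely as an illustration of what p75 means; the sample values are made up:

```javascript
// Nearest-rank percentile over raw samples: the value such that p% of
// the samples are at or below it. Sample values below are invented.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Hypothetical LCP samples in milliseconds from eight page views.
const lcpSamples = [900, 1100, 1200, 1400, 2600, 3100, 1000, 1300];
const p75 = percentile(lcpSamples, 75); // 75% of views were at or below this
```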

@@ -94,7 +94,7 @@ docker run --shm-size 2g --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io
~~~

## Chrome
-The latest version of Chrome should work out of the box. Latest version of stable [ChromeDriver](http://chromedriver.chromium.org) is bundled in sitespeed.io and needs to match your Chrome version.
+The latest version of Chrome should work out of the box. The latest stable version of [ChromeDriver](http://chromedriver.chromium.org) is bundled in sitespeed.io.

### Chrome setup
When we start Chrome it is set up with [these](https://github.com/sitespeedio/browsertime/blob/main/lib/chrome/webdriver/chromeOptions.js) command line switches.
@@ -134,13 +134,28 @@ If you use Chrome you can collect everything that is logged to the console. You
### Collect the net log
Collect Chrome's net log with ```--chrome.collectNetLog```. This is useful if you want to debug exactly what happens between Chrome and your web page. You will get one log file per run.

### Render blocking information
If you use Chrome/Chromium you can get render blocking information (which requests block rendering). To get that from sitespeed.io you need to get the Chrome timeline (and we get that by default). But if you want to make sure it is configured, you can turn it on with the flag ```--chrome.timeline``` or ```--cpu```.

You can see the blocking information in the waterfall. Requests that block have different coloring.

{:loading="lazy"}
{: .img-thumbnail}

You can also click on the request and see the exact blocking info from Chrome.

{:loading="lazy"}
{: .img-thumbnail}

You can also see a summary on the PageXray tab and see what kind of blocking information Chrome provides.

{:loading="lazy"}
{: .img-thumbnail}

### Choosing Chrome version
You can choose which version of Chrome you want to run by using ```--chrome.binaryPath``` and the full path to the Chrome binary.

Our Docker container only contains one version of Chrome; [let us know](https://github.com/sitespeedio/sitespeed.io/issues/new) if you need help adding more versions.

### Use a newer version of ChromeDriver
-ChromeDriver is the driver that handles the communication with Chrome. At the moment the ChromeDriver version needs to match the Chrome version. By default sitespeed.io and Browsertime comes with the ChromeDriver version that matches the Chrome version in the Docker container. If you wanna run tests on Chrome Beta/Canary you probably need to download a later version of ChromeDriver.
+ChromeDriver is the driver that handles the communication with Chrome. By default sitespeed.io and Browsertime come with the ChromeDriver version that matches the Chrome version in the Docker container. If you want to run tests with another ChromeDriver version, you need to download that version of ChromeDriver.

You download ChromeDriver from [http://chromedriver.chromium.org](http://chromedriver.chromium.org) and then use ```--chrome.chromedriverPath``` to set the path to the new version of ChromeDriver.
@@ -198,18 +213,23 @@ sitespeed.io --chrome.binaryPath "/Applications/Brave Browser.app/Contents/MacOS
~~~

## Choose when to end your test
-By default the browser will collect data until [window.performance.timing.loadEventEnd happens + approx 5 seconds more](https://github.com/sitespeedio/browsertime/blob/d68261e554470f7b9df28797502f5edac3ace2e3/lib/core/seleniumRunner.js#L15). That is perfectly fine for most sites, but if you do Ajax loading and you mark them with user timings, you probably want to include them in your test. Do that by changing the script that will end the test (```--browsertime.pageCompleteCheck```). When the scripts returns true the browser will close or if the timeout time is reached.
+By default sitespeed.io will use JavaScript to decide when to end the test. The script runs inside the browser and stops the test two seconds after *window.performance.timing.loadEventEnd* has happened. You can also define your own JavaScript that decides when to end the test, or use the `--pageCompleteCheckNetworkIdle` switch that stops the test after 5 seconds of silence on the network.

-In this example we wait 10 seconds until the loadEventEnd happens, but you can also choose to trigger it at a specific event.
+Here is an example of how you can create your own script. In the example we wait until 10 seconds after loadEventEnd, but you can also choose to trigger it at a specific event.

~~~bash
docker run --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} https://www.sitespeed.io --browsertime.pageCompleteCheck 'return (function() {try { return (Date.now() - window.performance.timing.loadEventEnd) > 10000;} catch(e) {} return true;})()'
~~~
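A page complete check is just JavaScript evaluated in the browser until it returns true. As a sketch, here is a check that instead waits for a user timing mark; the mark name `ajax-done` is an assumption for the example, and taking the window as a parameter is only done here to keep the sketch testable (in a real run you inline the body as the CLI string and use the global `window`):

```javascript
// Sketch of a page complete check: end the test once a user timing
// mark named 'ajax-done' (hypothetical) exists. If the Performance API
// throws, return true so the test ends instead of hanging until
// --maxLoadTime is reached.
function pageCompleteCheck(win) {
  try {
    return win.performance.getEntriesByName('ajax-done', 'mark').length > 0;
  } catch (e) {
    return true;
  }
}
```

On the command line that would become something like `--browsertime.pageCompleteCheck 'return (function() {try { return window.performance.getEntriesByName("ajax-done", "mark").length > 0;} catch(e) {} return true;})()'`.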

If loadEventEnd never happens for the page, the test will wait for `--maxLoadTime` until the test stops. By default that time is two minutes (yes, that is long).

You can also configure how long your current check will wait until completing with ```--pageCompleteWaitTime```. By default the pageCompleteCheck waits for 5000 ms after the onload event. If you want to increase that to 10 seconds, use ```--pageCompleteWaitTime 10000```. This is also useful if you test with *pageCompleteCheckInactivity* and it takes a long time for the server to respond: you can use *pageCompleteWaitTime* to wait longer than the default value.

You can also choose to end the test after 5 seconds of inactivity that happens after loadEventEnd. Do that by adding ```--browsertime.pageCompleteCheckInactivity``` to your run. The test will then wait for loadEventEnd to happen and no requests in the Resource Timing API for the last 5 seconds. Be aware though that the script will empty the Resource Timing API data for every check, so if you have your own script collecting data using the Resource Timing API it will fail.

{: .img-thumbnail}

You can also choose to end the test after 5 seconds of inactivity on the network. Do that by adding ```--pageCompleteCheckNetworkIdle``` to your run. The test will then wait for no traffic in the network log for 5 seconds straight and then end the test.

There is also another alternative: use ```--spa``` to automatically wait for 5 seconds of inactivity in the Resource Timing API (independently of whether the load event has fired or not). If you need to wait longer, use ```--pageCompleteWaitTime```.
@@ -236,11 +256,11 @@ docker run --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io:{% include ve
~~~

You will get a custom script section in the Browsertime tab.

{:loading="lazy"}
{: .img-thumbnail}

And in the summary and detailed summary section.

{:loading="lazy"}
{: .img-thumbnail}

Bonus: all custom script values will be sent to Graphite, no extra configuration needed!
@@ -285,6 +305,16 @@ You can also choose versions for Edge and Firefox with `EDGEDRIVER_VERSION` and

If you don't want to install the drivers you can skip them with `CHROMEDRIVER_SKIP_DOWNLOAD=true`, `GECKODRIVER_SKIP_DOWNLOAD=true` and `EDGEDRIVER_SKIP_DOWNLOAD=true`.

## Navigation and how we run the test

By default a navigation to a new page happens when Selenium (WebDriver) runs a JavaScript that sets `window.location` to the new URL. You can also choose to use WebDriver navigation (*driver.get*) by adding `--browsertime.webdriverPageload true` to your test.

By default the page load strategy is set to "none", meaning sitespeed.io gets control directly after WebDriver starts the navigation. You can choose page load strategy with `--browsertime.pageLoadStrategy`.

Then the JavaScript configured by `--browsertime.pageCompleteCheck` is run to determine when the page has finished loading. By default that script waits for the onload event to happen. The script runs for the first time after X seconds, configured with `--browsertime.pageCompleteCheckStartWait`. The default is to wait 5 seconds before the first check.

During those seconds the browser needs to navigate (on a slow computer that can take time) and we also want to make sure we do not run the pageCompleteCheck too often, because that can interfere with metrics. After the first check has run, you can choose how often it runs with `--browsertime.pageCompleteCheckPollTimeout`. The default is 1.5 seconds. When the page complete check tells us that the test is finished, we stop the video and start collecting metrics for that page.
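The schedule described above — wait `pageCompleteCheckStartWait` milliseconds, then re-run the check every `pageCompleteCheckPollTimeout` milliseconds until it passes or the maximum time is reached — can be sketched like this (an illustration of the timing parameters only, not Browsertime's actual code):

```javascript
// Illustration of the polling schedule: wait startWait ms before the
// first check, then poll every pollTimeout ms until the check returns
// true or maxTime ms have elapsed. The defaults mirror the documented
// values; the function itself is hypothetical, not Browsertime's code.
async function waitForPageComplete(
  check,
  { startWait = 5000, pollTimeout = 1500, maxTime = 120000 } = {}
) {
  const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));
  await sleep(startWait);
  let elapsed = startWait;
  while (!(await check())) {
    if (elapsed >= maxTime) return false; // give up, like --maxLoadTime
    await sleep(pollTimeout);
    elapsed += pollTimeout;
  }
  return true;
}
```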

## How can I disable HTTP/2 (I only want to test HTTP/1.x)?
In Chrome, you just add the switch <code>--browsertime.chrome.args disable-http2</code>.
@@ -37,10 +37,11 @@ The best way to make sure we can fix your issue, is to make sure we can reproduc
To help us reproduce your problem there are a couple of things we need:

* Show us exactly how you run your tests (all parameters, all configuration). Mask out any passwords, but please do not leave out things from the configuration!
* If you run [scripting to measure a user journey](https://www.sitespeed.io/documentation/sitespeed.io/scripting/) please please please include the script so we can run it the same way you run it! That will make it possible for us to reproduce your issue and help us a lot!
* Include the URL that causes the problem. If the URL isn't public, please try to reproduce the problem on another URL that we can test. If the URL is super secret, you can share it with us in an email (write it in the issue and you can get the address). But we prefer public URLs so others can also reproduce the problem.
* Include the log output from your run. Please do not take a screenshot of the log; instead share the log as text, either in the issue or in a [gist](https://gist.github.com/).
* Give us the exact version of sitespeed.io you are using (so we know we use the same version when we try to reproduce it).
-* Tell us what OS you are using and if you are using Docker (you should!) give us the base OS where you run your container.
+* Tell us what OS you are using and if you are using Docker give us the base OS where you run your container.
* If you don't use Docker: include the browser version you are using.
* If you have problems with headers/cookies/auth you can use [https://httpbin.org](https://httpbin.org) to reproduce your issue.
@@ -52,29 +53,28 @@ If you make your issue reproducible, the issue is the cream of the crop and will

* Search current [GitHub issues](https://github.com/sitespeedio/sitespeed.io/issues). Has this bug been reported before? Does it lack info? Please add your own comment to that issue if it is open. If you aren't sure that your bug is the same as the other bug, please create another issue. Do not hijack issues. Do not comment on closed issues; please create a new issue instead and add a reference to the old issue.
-* Do you think this is somehow related to Docker (generic Docker issues etc)? Then please [search](https://duckduckgo.com/) for that problem or head over to [forums.docker.com](https://forums.docker.com/) and have a look there first.
-* Is there a problem with the video? Then make sure to enable the full original video so you can share that with us, do that by adding <code>--browsertime.videoParams.keepOriginalVideo</code> to your run (or if you use Browsertime: <code>--videoParams.keepOriginalVideo</code>).
+* Is there a problem with the video or the metrics from the video? Then make sure to enable the full original video so you can share that with us. Do that by adding <code>--videoParams.keepOriginalVideo</code> to your run. Look in the *video* folder for that URL and you will see a video named *1-original.mp4*. Please share that video with us, then we can more easily reproduce/understand the problem.
+* Do you think this is somehow related to Docker (generic Docker issues etc)? Then please [search](https://duckduckgo.com/) for that problem or head over to [forums.docker.com](https://forums.docker.com/) and have a look there first.
* Is your problem related to being behind a proxy? Then we kindly recommend that you run your tests without a proxy, on a network where you don't need one.

## How we prioritise bugs

-When we groom issues we will add a tag with the prioritization. We have three prio tags: **prio:1**, **prio:3** and **prio:5**.
-If an issue is a bug that breaks functionality for many users or is a feature request that will help many users and is something we can implement, we give it **prio:1**. If the issue is a bug that we plan to fix, it will have **prio:3**. If your bug/issue gets **prio:5** we may fix it sometime in the future. Scripting issues related to how you use scripting on your site always get **prio:5**, but we will try to help you the best we can.
+If an issue is a bug that breaks functionality for many users and you provide a *reproducible* test case/show us exactly how you run, we will try to fix that bug.

If you do not agree with our prioritization you can:
* Explain the issue better and make sure we can reproduce your issue
* Do the PR yourself. We can help you test and verify it.
* Support us at [Open Collective](https://opencollective.com/sitespeedio). We cannot promise we will fix your issue but it will increase the chance of getting it fixed.

## How to make sure we try to fix the bug as soon as possible

Here are the dos and don'ts if you want your bug fixed:

Please do:
* [Provide a reproducible test case](#explain-how-to-reproduce-your-issue).
-* If you don't get a response in a couple of days, write a message in the [general channel in Slack](https://sitespeedio.herokuapp.com/).
+* If you don't get a response in a couple of days, write a message in the [general channel in Slack](https://join.slack.com/t/sitespeedio/shared_invite/zt-296jzr7qs-d6DId2KpEnMPJSQ8_R~WFw).

Please don't:
* Contact us on direct messages on Slack about the bug.
@ -0,0 +1,120 @@
|
|||
---
|
||||
layout: default
|
||||
title: Use Mann Whitney U or Wilcox statistical methods to know if you have a regression.
|
||||
description: Finding performance regressions is hard. Using Mann Whitney U/Wilcox can help you.
|
||||
keywords: Mann Whitney U, performance, regression,
|
||||
nav: documentation
|
||||
category: sitespeed.io
|
||||
image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
|
||||
twitterdescription: Use Mann Whitney U or Wilcox statistical methods to know if you have a regression.
|
||||
---
|
||||
[Documentation]({{site.baseurl}}/documentation/sitespeed.io/) / Compare
|
||||
|
||||
# Compare - Statistical Methods for Regression Analysis
|
||||
{:.no_toc}
|
||||
|
||||
* Let's place the TOC here
|
||||
{:toc}
|
||||
|
||||
Sitespeed.io utilizes Mann Whitney U and Wilcoxon tests for detecting performance regressions.

## Why Mann Whitney U and Wilcoxon for Web Performance?

* **Non-Parametric Nature**: Both tests are non-parametric, making them ideal for web performance data, which often doesn't follow a normal distribution. This means they can reliably analyze data with outliers or skewed distributions, common in web performance metrics.
* **Sensitive to Subtle Changes**: The Mann Whitney U test, used for comparing two independent samples, and the Wilcoxon test, suitable for paired data, are sensitive to even minor shifts in performance. This sensitivity is critical for early detection of regressions that might not significantly impact average values but could affect user experience.
* **Robust Against Variability**: Web performance metrics can be highly variable due to factors like network conditions, user behavior, and server load. These tests effectively handle this variability, providing a more accurate reflection of the true performance impact of changes.
* **Clarity in Comparative Analysis**: Unlike simple average-based comparisons, these tests give a clearer picture of whether the observed performance differences are statistically significant. This clarity is essential for making informed decisions about optimizations and rollbacks.
* **Actionable Insights**: By identifying statistically significant performance regressions, these tests provide actionable insights. They help in pinpointing specific changes that need attention, enabling targeted optimizations.

Utilizing these tests through the compare plugin allows for a sophisticated approach to web performance analysis. For instance, after deploying a new feature or update, you can compare the new performance data against a baseline using these tests. If the tests indicate a significant performance drop, it's a strong signal that the recent changes have negatively impacted the site's speed.

Looking at medians can make it hard to see small changes. This is an example where we look at First Visual Change for the Barack Obama page. You can see that the metric goes up at one point but then sometimes comes back down.

{:loading="lazy"}
{: .img-thumbnail}

Looking for a significant change helps; this is the graph for the same metric when the change occurred.

{:loading="lazy"}
{: .img-thumbnail}

## Prerequisites
To get the compare plugin to work, you need Python installed and the [scipy](https://scipy.org) library. The easiest way to install it is with pip: `python -m pip install scipy`. If you use our Docker containers this is already installed.

## Save the baseline (save the world)
By default tests run against a baseline test and look for regressions in the new test. That means that for your test to work you first need to collect a baseline. You do that by adding `--compare.saveBaseline` to your test. You also need to give your test a unique id by adding `--compare.id myId`. Adding those to your test will store the baseline on disk. By default that file is stored in your current directory. You can change that by adding `--compare.baselinePath` to set the path to the file. That is useful in Docker if you want the file to be stored outside the container.

To save a baseline using NodeJS:

~~~bash
sitespeed.io https://www.sitespeed.io -n 21 --compare.saveBaseline --compare.id start_page
~~~

Using Docker there's a new volume that you should use to mount where you want to save the baseline. `-v "$(pwd):/baseline"` will map your current directory to where the baseline files are stored. If you want to store them somewhere else, change what you map inside the container: `-v "/somewhere/else:/baseline"`.

~~~bash
docker run -v "$(pwd):/baseline" sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} https://www.sitespeed.io/ -n 21 --compare.saveBaseline --compare.id start_page --compare.baselinePath /baseline
~~~

The baseline file is a JSON file that contains all the raw data from Browsertime.

## Run your test

For your test to work, assign the same identification (id) to both your current test and the baseline test. This matching id is crucial for the test to correctly locate and compare with the baseline. Additionally, ensure that the number of iterations in your test matches that of the baseline. Remember, using a sufficiently large number of iterations is essential as it leads to more accurate and reliable results.

~~~bash
sitespeed.io https://www.sitespeed.io -n 21 --compare.id start_page
~~~

In Docker:

~~~bash
docker run -v "$(pwd):/baseline" sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} https://www.sitespeed.io/ -n 21 --compare.id start_page --compare.baselinePath /baseline
~~~

You can also save a baseline for each and every test, so you always compare your last run with the run before that. That will automatically happen if you use `--compare.saveBaseline`.

~~~bash
docker run -v "$(pwd):/baseline" sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} https://www.sitespeed.io/ -n 21 --compare.id start_page --compare.baselinePath /baseline --compare.saveBaseline
~~~

## Results

When you run your test, it will create a new tab in the HTML results. This tab will include **a results table** that shows various result data and **comparison graphs** that, for each metric, compare the baseline (previous data) with the latest run.

The result table looks something like this:

{:loading="lazy"}
{: .img-thumbnail}

The columns:

1. **Metric Name**: This column lists the names of the performance metrics that were tested. These could be timings for different events (like Time to First Byte, load event completion, first contentful paint, etc.), or CPU-related metrics (like total duration of CPU tasks, duration of the last long task, etc.).
2. **Score (mannwhitneyu/wilcox)**: This column shows the Mann-Whitney U/Wilcoxon scores for the comparisons between baseline and current test runs for each metric. A lower score typically indicates more significant differences between the two groups being compared.
3. **Baseline Mean**: The average (mean) value for the baseline test run for each metric.
4. **Current Mean**: The average (mean) value for the current test run for each metric.
5. **Baseline Median**: The median value for the baseline test run for each metric. The median is the middle value when all the results are ordered from lowest to highest.
6. **Current Median**: The median value for the current test run for each metric.
7. **Baseline Std Dev**: Standard deviation for the baseline test run for each metric. This measures the amount of variation from the average.
8. **Current Std Dev**: Standard deviation for the current test run for each metric.
9. **Significant Change?**: This column indicates whether the change between the baseline and current test runs is statistically significant for each metric. If the change is statistically significant then we use [Cliff's Delta](https://en.wikipedia.org/wiki/Effect_size#Effect_size_for_ordinal_data) to decide if the change is small, medium or large.
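
The Cliff's Delta effect size used in the last column can be sketched in a few lines of Python. This is an illustration only, not sitespeed.io's actual implementation; the magnitude thresholds are the ones commonly cited in the statistics literature:

```python
def cliffs_delta(baseline, current):
    """Effect size in [-1, 1]: how often values in `current` exceed `baseline`."""
    greater = sum(1 for y in current for x in baseline if y > x)
    less = sum(1 for y in current for x in baseline if y < x)
    return (greater - less) / (len(baseline) * len(current))


def magnitude(d):
    # Commonly used thresholds: |d| < 0.147 negligible, < 0.33 small,
    # < 0.474 medium, otherwise large.
    d = abs(d)
    if d < 0.147:
        return 'negligible'
    if d < 0.33:
        return 'small'
    if d < 0.474:
        return 'medium'
    return 'large'


# Hypothetical timings in ms: every current value is slower than the baseline.
print(cliffs_delta([100, 110, 105], [150, 160, 155]))  # 1.0
print(magnitude(1.0))  # large
```

A delta of 1.0 means every pair of observations moved in the same direction, the strongest possible effect.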

And the compare graphs will look like this for every metric:

{:loading="lazy"}
{: .img-thumbnail}

### Understanding Significant Changes

In the results table, you'll see a list of all metrics with their corresponding scores.

If a score is below 0.05, it indicates a statistically significant difference between the baseline and the current data. By default we test if the current data is greater than the baseline. You can change that using `--compare.alternative`: you can test if it's smaller, or if there is any difference between the two ('two-sided').

If the score is below 0.05, it means that the changes observed in the metric are likely not due to chance.
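
To build intuition for what drives the score, here is a minimal pure-Python sketch of the Mann-Whitney U statistic that the test is built on. This is an illustration only (sitespeed.io delegates the real computation, including the p-value, to scipy), and the timing values below are hypothetical:

```python
def mann_whitney_u(a, b):
    # U counts, over all pairs (x, y), how often a value x from `a` exceeds
    # a value y from `b`; ties count as half. A U near 0 or near
    # len(a) * len(b) means the two samples are well separated, which is
    # what produces a small p-value.
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u


baseline = [1012, 998, 1030, 1005]  # ms, hypothetical timings
current = [1150, 1170, 1140, 1165]  # ms, hypothetical timings

print(mann_whitney_u(current, baseline))  # 16.0: every current run is slower
```

Because every current run is slower than every baseline run, U hits its maximum (4 × 4 = 16 pairs), a clear separation that the real test would flag as significant.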

### What does 'No Test Conducted' Mean?

If you see a result marked as no test conducted, it means the analysis couldn't be done. This usually happens when the data samples are too similar or don't show enough variation to conduct a meaningful analysis.

## Alert on data in Graphite
The actual score is automatically sent to Graphite for each metric, which makes it possible to create alert rules in Grafana to alert on regressions. Documentation on how to do that will come soon.

@@ -1,27 +1,32 @@

sitespeed.js [options] <url>/<file>

Browser
-b, --browsertime.browser, --browser Choose which Browser to use when you test. Safari only works on Mac OS X and iOS 13 (or later). [choices: "chrome", "firefox", "safari", "edge"] [default: "chrome"]
-n, --browsertime.iterations How many times you want to test each page [default: 3]
--browsertime.spa, --spa Convenient parameter to use if you test a SPA application: will automatically wait for X seconds after last network activity and use hash in file names. Read https://www.sitespeed.io/documentation/sitespeed.io/spa/ [boolean] [default: false]
--browsertime.gnirehtet, --gnirehtet Start gnirehtet and reverse tethering the traffic from your Android phone. [boolean] [default: false]
--browsertime.debug, --debug Run Browsertime in debug mode. Use commands.breakpoint(name) to set breakpoints in your script. Debug mode works for Firefox/Chrome/Edge on desktop. [boolean] [default: false]
--browsertime.limitedRunData Send only limited metrics from one run to the datasource. [boolean] [default: true]
-c, --browsertime.connectivity.profile The connectivity profile. To actually set the connectivity you can choose between Docker networks or Throttle, read https://www.sitespeed.io/documentation/sitespeed.io/connectivity/ [string] [choices: "4g", "3g", "3gfast", "3gslow", "3gem", "2g", "cable", "native", "custom"] [default: "native"]
--browsertime.connectivity.alias Give your connectivity profile a custom name [string]
--browsertime.connectivity.down, --downstreamKbps, --browsertime.connectivity.downstreamKbps This option requires --connectivity be set to "custom".
--browsertime.connectivity.up, --upstreamKbps, --browsertime.connectivity.upstreamKbps This option requires --connectivity be set to "custom".
--browsertime.connectivity.rtt, --latency, --browsertime.connectivity.latency This option requires --connectivity be set to "custom".
--browsertime.connectivity.engine, --connectivity.engine The engine for connectivity. Throttle works on Mac and tc based Linux. For mobile you can use Humble if you have a Humble setup. Use external if you set the connectivity outside of Browsertime. More documentation at https://www.sitespeed.io/documentation/sitespeed.io/connectivity/. [string] [choices: "external", "throttle", "humble"] [default: "external"]
--browsertime.connectivity.humble.url, --connectivity.humble.url The path to your Humble instance. For example http://raspberrypi:3000 [string]
--browsertime.timeouts.pageCompleteCheck, --maxLoadTime The max load time to wait for a page to finish loading (in milliseconds). [number] [default: 120000]
--browsertime.pageCompleteCheck, --pageCompleteCheck Supply a JavaScript that decides when the browser is finished loading the page and can start to collect metrics. The JavaScript snippet is repeatedly queried to see if the page has completed loading (indicated by the script returning true). Checkout https://www.sitespeed.io/documentation/sitespeed.io/browsers/#choose-when-to-end-your-test
--browsertime.pageCompleteWaitTime, --pageCompleteWaitTime How long you want to wait for your pageCompleteCheck to finish, after it is signaled to close. Extra parameter passed on to your pageCompleteCheck. [default: 5000]
--browsertime.pageCompleteCheckInactivity, --pageCompleteCheckInactivity Alternative way to choose when to end your test. This will wait for 2 seconds of inactivity that happens after loadEventEnd. [boolean] [default: false]
--browsertime.pageCompleteCheckPollTimeout, --pageCompleteCheckPollTimeout The time in ms to wait before running the page complete check the next time. [number] [default: 1500]
--browsertime.pageCompleteCheckStartWait, --pageCompleteCheckStartWait The time in ms to wait before running the page complete check for the first time. Use this when you have a pageLoadStrategy set to none [number] [default: 500]
--browsertime.pageCompleteCheckNetworkIdle, --pageCompleteCheckNetworkIdle Use the network log instead of running JavaScript to decide when to end the test. This will wait for 5 seconds of no network activity before it ends the test. This can be used with Chrome/Edge and Firefox. [boolean] [default: false]
--browsertime.pageLoadStrategy, --pageLoadStrategy Set the strategy for waiting for document readiness after a navigation event. After the strategy is ready, your pageCompleteCheck will start running. This only works for Firefox and Chrome; please check which value each browser implements. [string] [choices: "eager", "none", "normal"] [default: "none"]
--browsertime.script, --script Add custom JavaScript that collects metrics and runs after the page has finished loading. Note that --script can be passed multiple times if you want to collect multiple metrics. The metrics will automatically be pushed to the summary/detailed summary and each individual page + sent to Graphite/InfluxDB.
--browsertime.injectJs, --injectJs Inject JavaScript into the current page at document_start. More info: https://developer.mozilla.org/docs/Mozilla/Add-ons/WebExtensions/API/contentScripts
--browsertime.selenium.url Configure the path to the Selenium server when fetching timings using browsers. If not configured the supplied NodeJS/Selenium version is used.
--browsertime.viewPort, --viewPort The browser view port size WidthxHeight like 400x300 [default: "1366x708"]
--browsertime.userAgent, --userAgent The full User Agent string, defaults to the User Agent used by the browsertime.browser option.
--browsertime.appendToUserAgent, --appendToUserAgent Append a String to the user agent. Works in Chrome/Edge and Firefox.
--browsertime.preURL, --preURL A URL that will be accessed first by the browser before the URL that you want to analyse. Use it to fill the cache.
--browsertime.preScript, --preScript Selenium script(s) to run before you test your URL. They will run outside of the analyse phase. Note that --preScript can be passed multiple times.
--browsertime.postScript, --postScript Selenium script(s) to run after you test your URL. They will run outside of the analyse phase. Note that --postScript can be passed multiple times.

@@ -42,14 +47,26 @@ Browser
--axe.enable Run axe tests. Axe will run after all other metrics are collected and will add some extra time to each test. [boolean]
-r, --browsertime.requestheader, --requestheader Request header that will be added to the request. Add multiple instances to add multiple request headers. Use the following format key:value. Only works in Chrome and Firefox.
--browsertime.cookie, --cookie Cookie that will be added to the request. Add multiple instances to add multiple cookies. Use the following format cookieName=cookieValue. Only works in Chrome and Firefox.
--browsertime.block, --block Domain or URL or URL pattern to block. If you use Chrome you can also use --blockDomainsExcept (that is more performant). Works in Chrome/Edge. For Firefox you can only block domains.
--browsertime.basicAuth, --basicAuth Use it if your server is behind Basic Auth. Format: username@password. Only works in Chrome and Firefox.
--browsertime.flushDNS, --flushDNS Flush the DNS between runs (works on Mac OS and Linux). The user needs sudo rights to flush the DNS.
--browsertime.headless, --headless Run the browser in headless mode. This is the browser internal headless mode, meaning you cannot collect Visual Metrics or in Chrome run any WebExtension (this means you cannot add cookies, requestheaders or use basic auth for headless Chrome). Only works in Chrome and Firefox. [boolean] [default: false]

Android
--browsertime.android.gnirehtet, --gnirehtet, --browsertime.gnirehtet Start gnirehtet and reverse tethering the traffic from your Android phone. [boolean] [default: false]
--browsertime.android.rooted, --androidRooted, --browsertime.androidRooted If your phone is rooted you can use this to set it up following Mozilla's best practice for stable metrics. [boolean] [default: false]
--browsertime.android.batteryTemperatureLimit, --androidBatteryTemperatureLimit, --browsertime.androidBatteryTemperatureLimit Does the battery temperature need to be below a specific limit before we start the test?
--browsertime.android.batteryTemperatureWaitTimeInSeconds, --androidBatteryTemperatureWaitTimeInSeconds, --browsertime.androidBatteryTemperatureWaitTimeInSeconds How long to wait (in seconds) if the androidBatteryTemperatureLimit is not met before the next try [default: 120]
--browsertime.android.verifyNetwork, --androidVerifyNetwork, --browsertime.androidVerifyNetwork Before a test starts, verify that the device has an Internet connection by pinging 8.8.8.8 (or a configurable domain with --androidPingAddress) [boolean] [default: false]

video
--browsertime.videoParams.keepOriginalVideo, --videoParams.keepOriginalVideo Keep the original video. Use it when you have a Visual Metrics bug and want to create an issue at GitHub. Supply the original video in the issue and we can reproduce your issue. [boolean] [default: false]

Filmstrip
--browsertime.videoParams.filmstripFullSize, --videoParams.filmstripFullSize Keep original sized screenshots in the filmstrip. Will make the run take longer [boolean] [default: false]
--browsertime.videoParams.filmstripQuality, --videoParams.filmstripQuality The quality of the filmstrip screenshots. 0-100. [default: 75]
--browsertime.videoParams.createFilmstrip, --videoParams.createFilmstrip Create filmstrip screenshots. [boolean] [default: true]
--browsertime.videoParams.thumbsize, --videoParams.thumbsize The maximum size of the thumbnail in the filmstrip. Default is 400 pixels in either direction. If browsertime.videoParams.filmstripFullSize is used that setting overrides this. [default: 400]
--filmstrip.showAll Show all screenshots in the filmstrip, independent of whether they have changed or not. [boolean] [default: false]

Firefox

@@ -60,6 +77,8 @@ Firefox
--browsertime.firefox.binaryPath, --firefox.binaryPath Path to custom Firefox binary (e.g. Firefox Nightly). On OS X, the path should be to the binary inside the app bundle, e.g. /Applications/Firefox.app/Contents/MacOS/firefox-bin
--browsertime.firefox.preference, --firefox.preference Extra command line arguments to pass Firefox preferences in the format key:value. To add multiple preferences, repeat --firefox.preference once per argument.
--browsertime.firefox.acceptInsecureCerts, --firefox.acceptInsecureCerts Accept insecure certs [boolean]
--browsertime.firefox.memoryReport, --firefox.memoryReport Measure Firefox resident memory after each iteration. [boolean] [default: false]
--browsertime.firefox.memoryReportParams.minizeFirst, --firefox.memoryReportParams.minizeFirst Force a collection before dumping and measuring the memory report. [boolean] [default: false]
--browsertime.firefox.geckoProfiler, --firefox.geckoProfiler Collect a profile using the internal gecko profiler [boolean] [default: false]
--browsertime.firefox.geckoProfilerParams.features, --firefox.geckoProfilerParams.features Enabled features during gecko profiling [string] [default: "js,stackwalk,leaf"]
--browsertime.firefox.geckoProfilerParams.threads, --firefox.geckoProfilerParams.threads Threads to profile. [string] [default: "GeckoMain,Compositor,Renderer"]

@@ -71,18 +90,19 @@ Firefox
--browsertime.firefox.disableTrackingProtection, --firefox.disableTrackingProtection Disable Tracking Protection. [boolean] [default: true]
--browsertime.firefox.android.package, --firefox.android.package Run Firefox or a GeckoView-consuming App on your Android device. Set to org.mozilla.geckoview_example for default Firefox version. You need to have adb installed to make this work.
--browsertime.firefox.android.activity, --firefox.android.activity Name of the Activity hosting the GeckoView.
--browsertime.firefox.android.deviceSerial, --firefox.android.deviceSerial Choose which device to use. If you do not set it, the first device will be used. [string]
--browsertime.firefox.android.intentArgument, --firefox.android.intentArgument Configure how the Android intent is launched. Passed through to `adb shell am start ...`; follow the format at https://developer.android.com/studio/command-line/adb#IntentSpec. To add multiple arguments, repeat --firefox.android.intentArgument once per argument.
--browsertime.firefox.profileTemplate, --firefox.profileTemplate Profile template directory that will be cloned and used as the base of each profile each instance of Firefox is launched against. Use this to pre-populate databases with certificates, tracking protection lists, etc.
--browsertime.firefox.collectMozLog, --firefox.collectMozLog Collect the MOZ HTTP log [boolean]

Chrome
--browsertime.chrome.args, --chrome.args Extra command line arguments to pass to the Chrome process. If you use the command line, leave out the starting -- (--no-sandbox will be no-sandbox). If you use a configuration JSON file you should keep the starting --. To add multiple arguments to Chrome, repeat --browsertime.chrome.args once per argument. See https://peter.sh/experiments/chromium-command-line-switches/
--browsertime.chrome.timeline, --chrome.timeline Collect the timeline data. Drag and drop the JSON in your Chrome devtools timeline panel or check out the CPU metrics. [boolean] [default: false]
--browsertime.chrome.appendToUserAgent, --chrome.appendToUserAgent Append to the user agent. [string]
--browsertime.chrome.android.package, --chrome.android.package Run Chrome on your Android device. Set to com.android.chrome for default Chrome version. You need to have adb installed to run on Android.
--browsertime.chrome.android.activity, --chrome.android.activity Name of the Activity hosting the WebView.
--browsertime.chrome.android.process, --chrome.android.process Process name of the Activity hosting the WebView. If not given, the process name is assumed to be the same as chrome.android.package.
--browsertime.chrome.android.deviceSerial, --chrome.android.deviceSerial Choose which device to use. If you do not set it, the first found device will be used. [string]
--browsertime.chrome.collectNetLog, --chrome.collectNetLog Collect network log from Chrome and save to disk. [boolean]
--browsertime.chrome.traceCategories, --chrome.traceCategories Set the trace categories. [string]
--browsertime.chrome.traceCategory, --chrome.traceCategory Add a trace category to the default ones. Use --chrome.traceCategory multiple times if you want to add multiple categories. Example: --chrome.traceCategory disabled-by-default-v8.cpu_profiler [string]

@@ -116,15 +136,26 @@ proxy
--browsertime.proxy.https, --proxy.https Https proxy (host:port) [string]

Crawler
-d, --crawler.depth How deep to crawl (1=only one page, 2=include links from first page, etc.)
-m, --crawler.maxPages The max number of pages to test. Default is no limit.
--crawler.exclude Exclude URLs matching the provided regular expression (ex: "/some/path/", "://some\.domain/"). Can be provided multiple times.
--crawler.include Discard URLs not matching the provided regular expression (ex: "/some/path/", "://some\.domain/"). Can be provided multiple times.
--crawler.ignoreRobotsTxt Ignore robots.txt rules of the crawled domain. [boolean] [default: false]

scp
--scp.host The host.
--scp.destinationPath The destination path on the remote server where the files will be copied.
--scp.port The port for ssh when scp the result to another server. [default: 22]
--scp.username The username. Use username/password or username/privateKey/pem.
--scp.password The password if you do not use a pem file.
--scp.privateKey Path to the pem file.
--scp.passphrase The passphrase for the pem file.
--scp.removeLocalResult Remove the files locally when the files have been copied to the other server. [default: true]

Grafana
--grafana.host The Grafana host used when sending annotations.
--grafana.port The Grafana port used when sending annotations to Grafana. [default: 80]
--grafana.auth The Grafana auth/bearer value used when sending annotations to Grafana. If you do not set Bearer/Auth, Bearer is automatically set. See http://docs.grafana.org/http_api/auth/#authentication-api
--grafana.annotationTitle Add a title to the annotation sent for a run.
--grafana.annotationMessage Add an extra message that will be attached to the annotation sent for a run. The message is attached after the default message and can contain HTML.
--grafana.annotationTag Add an extra tag to the annotation sent for a run. Repeat the --grafana.annotationTag option for multiple tags. Make sure they do not collide with the other tags.

@@ -136,19 +167,35 @@ Graphite
--graphite.auth The Graphite user and password used for authentication. Format: user:password
--graphite.httpPort The Graphite port used to access the user interface and send annotation events [default: 8080]
--graphite.webHost The graphite-web host. If not specified graphite.host will be used.
--graphite.proxyPath Extra path to graphite-web when behind a proxy, used when sending annotations. [default: ""]
--graphite.namespace The namespace key added to all captured metrics. [default: "sitespeed_io.default"]
--graphite.includeQueryParams Whether to include query parameters from the URL in the Graphite keys or not [boolean] [default: false]
--graphite.arrayTags Send the tags as an array or a string. In Graphite 1.0 the tags are an array; before that, a string [boolean] [default: true]
--graphite.annotationTitle Add a title to the annotation sent for a run.
--graphite.annotationMessage Add an extra message that will be attached to the annotation sent for a run. The message is attached after the default message and can contain HTML.
--graphite.annotationScreenshot Include screenshot (from Browsertime/WebPageTest) in the annotation. You need to specify a --resultBaseURL for this to work. [boolean] [default: false]
--graphite.sendAnnotation Send annotations when a run is finished. You need to specify a --resultBaseURL for this to work. However if you for example use a Prometheus exporter, you may want to make sure annotations are not sent; in that case set it to false. [boolean] [default: true]
--graphite.annotationRetentionMinutes The retention in minutes, to make the annotation match the retention in Graphite. [number]
--graphite.statsd Use the StatsD interface [boolean] [default: false]
--graphite.annotationTag Add an extra tag to the annotation sent for a run. Repeat the --graphite.annotationTag option for multiple tags. Make sure they do not collide with the other tags.
--graphite.skipSummary Skip sending summary messages data to Graphite (summaries over a domain). [boolean] [default: false]
--graphite.perIteration Send each iteration of metrics to Graphite. By default we only send page summaries (the summaries of all runs) but you can also send all the runs. Make sure to setup statsd or Graphite correctly to handle it. [boolean] [default: false]
--graphite.addSlugToKey Add the slug (name of the test) as an extra key in the namespace. [boolean] [default: true]
--graphite.bulkSize Break up the number of metrics to send with each request. [number]
--graphite.messages Define which messages to send to Graphite. By default we do not send data per run, but you can change that by adding run as one of the options [default: ["pageSummary","summary"]]

InfluxDB
--influxdb.protocol The protocol used to connect to the InfluxDB host. [default: "http"]
--influxdb.host The InfluxDB host used to store captured metrics.
--influxdb.port The InfluxDB port used to store captured metrics. [default: 8086]
--influxdb.username The InfluxDB username for your InfluxDB instance (only for InfluxDB v1)
--influxdb.password The InfluxDB password for your InfluxDB instance (only for InfluxDB v1).
--influxdb.organisation The InfluxDB organisation for your InfluxDB instance (only for InfluxDB v2)
--influxdb.token The InfluxDB token for your InfluxDB instance (only for InfluxDB v2)
--influxdb.version The InfluxDB version of your InfluxDB instance. [default: 1]
--influxdb.database The database name used to store captured metrics. [default: "sitespeed"]
--influxdb.tags A comma separated list of tags and values added to each metric [default: "category=default"]
--influxdb.includeQueryParams Whether to include query parameters from the URL in the InfluxDB keys or not [boolean] [default: false]
--influxdb.groupSeparator Choose which character will separate a group/domain. Default is underscore; set it to a dot if you want to keep the original domain name. [default: "_"]
--influxdb.annotationScreenshot Include screenshot (from Browsertime) in the annotation. You need to specify a --resultBaseURL for this to work. [boolean] [default: false]
|
||||
|
||||
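For InfluxDB v2 the organisation and token options replace the v1 username/password. A hedged sketch of the matching config file; host, organisation and token are placeholders:

~~~json
{
  "influxdb": {
    "host": "my.influxdb.host",
    "port": 8086,
    "version": 2,
    "organisation": "my-org",
    "token": "MY_TOKEN",
    "database": "sitespeed"
  }
}
~~~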
Plugins
--plugins.list List all configured plugins in the log. [boolean]
@@ -160,6 +207,7 @@ Budget
--budget.suppressExitCode By default sitespeed.io returns a failure exit code if the budget fails. Set this to true and sitespeed.io will return exit code 0 independent of the budget.
--budget.config The JSON budget config as a string.
--budget.output The output format of the budget. [choices: "junit", "tap", "json"]
--budget.friendlyName Add a friendly name to the test case. At the moment this is only used in junit.
--budget.removeWorkingResult, --budget.removePassingResult Remove the result of URLs that pass the budget. You can use this if you have many URLs and only care about the ones that fail your budget. All videos/HTML for the passing URLs will be removed if you turn this on. [boolean]
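--budget.config takes the budget itself as JSON. A hedged sketch of a budget file: the shape (a top-level budget key holding per-category limits) follows the sitespeed.io budget documentation, and the metric names and limits here are only illustrative:

~~~json
{
  "budget": {
    "timings": {
      "firstContentfulPaint": 2000
    },
    "requests": {
      "total": 100
    }
  }
}
~~~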
Screenshot
@@ -169,30 +217,27 @@ Screenshot
--browsertime.screenshotParams.jpg.quality, --screenshot.jpg.quality Quality of the JPEG screenshot. 1-100. [default: 80]
--browsertime.screenshotParams.maxSize, --screenshot.maxSize The max size of the screenshot (width and height). [default: 2000]
Metrics
--metrics.list List all possible metrics in the data folder (metrics.txt). [boolean] [default: false]
--metrics.filterList List all configured filters for metrics in the data folder (configuredMetrics.txt). [boolean] [default: false]
--metrics.filter Add/change/remove filters for metrics. If you want to send all metrics, use: *+ . If you want to remove all current metrics and send only the coach score: *- coach.summary.score.* [array]
Slack
--slack.hookUrl WebHook URL for the Slack team (check https://<your team>.slack.com/apps/manage/custom-integrations).
--slack.userName User name to use when posting status to Slack. [default: "Sitespeed.io"]
--slack.channel The Slack channel without the # (if something else than the default channel for your hook).
--slack.type Send summary for a tested URL, metrics from all URLs (summary), only on errors from your tests, or all to Slack. [choices: "summary", "url", "error", "all"] [default: "all"]
--slack.limitWarning The limit to get a warning in Slack using the limitMetric. [default: 90]
--slack.limitError The limit to get an error in Slack using the limitMetric. [default: 80]
--slack.limitMetric The metric that will be used to set warning/error. You can choose only one at the moment. [choices: "coachScore", "speedIndex", "firstVisualChange", "firstPaint", "visualComplete85", "lastVisualChange", "fullyLoaded"] [default: "coachScore"]
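The Slack options are usually easiest to keep in a config file. An illustrative sketch; the hook URL and channel are placeholders:

~~~json
{
  "slack": {
    "hookUrl": "https://hooks.slack.com/services/T000/B000/XXXX",
    "channel": "perf-alerts",
    "type": "error",
    "limitMetric": "coachScore",
    "limitWarning": 90,
    "limitError": 80
  }
}
~~~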
s3
@@ -216,6 +261,12 @@ GoogleCloudStorage
--gcs.path Override the default folder path in the bucket where the results are uploaded. By default it's "DOMAIN_OR_FILENAME_OR_SLUG/TIMESTAMP", or the name of the folder if --outputFolder is specified.
--gcs.removeLocalResult Remove all the local result files after they have been uploaded to Google Cloud Storage. [boolean] [default: false]
CrUx
--crux.key You need to use a key to get data from CrUx. Get the key from https://developers.google.com/web/tools/chrome-user-experience-report/api/guides/getting-started#APIKey
--crux.enable Enable the CrUx plugin. This is on by default, but you also need the CrUx key. If you want to disable the plugin while keeping the CrUx key in your configuration, set this to false. [default: true]
--crux.formFactor A form factor is the type of device on which a user visits a website. [string] [choices: "ALL", "DESKTOP", "PHONE", "TABLET"] [default: "ALL"]
--crux.collect Choose what data to collect. URL is data for a specific URL, ORIGIN for the domain, and ALL for both of them. [string] [choices: "ALL", "URL", "ORIGIN"] [default: "ALL"]
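A sketch of the matching CrUx configuration; the key is a placeholder you obtain from the Google API console as described above:

~~~json
{
  "crux": {
    "key": "YOUR_CRUX_API_KEY",
    "formFactor": "PHONE",
    "collect": "URL"
  }
}
~~~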
HTML
--html.showAllWaterfallSummary Set to true to show all waterfalls on the page summary HTML report. [boolean] [default: false]
--html.fetchHARFiles Set to true to load HAR files using fetch instead of including them in the HTML. Turn this on if you serve your pages using a server. [boolean] [default: false]
@@ -225,7 +276,7 @@ HTML
--html.assetsBaseURL The base URL to the server serving the assets of HTML results. In the format of https://result.sitespeed.io. This can be used to reduce size in large setups. If set, disables writing of assets to the output folder.
--html.compareURL, --html.compareUrl Will add a link on the waterfall page, helping you to compare the HAR. The full path to your compare installation. In the format of https://compare.sitespeed.io/
--html.pageSummaryMetrics Select from a list of metrics to be displayed for given URL(s). Pass on multiple --html.pageSummaryMetrics to add more than one column. This is best used as an array in your config.json file. [default: ["transferSize.total","requests.total","thirdParty.requests","transferSize.javascript","transferSize.css","transferSize.image","score.performance"]]
--html.summaryBoxes Select required summary information to be displayed on the result index page. [default: ["score.score","score.accessibility","score.bestpractice","score.privacy","score.performance","timings.firstPaint","timings.firstContentfulPaint","timings.fullyLoaded","timings.pageLoadTime","timings.largestContentfulPaint","timings.FirstVisualChange","timings.LastVisualChange","timings.SpeedIndex","timings.PerceptualSpeedIndex","timings.VisualReadiness","timings.VisualComplete","timings.backEndTime","googleWebVitals.cumulativeLayoutShift","requests.total","requests.javascript","requests.css","requests.image","transferSize.total","transferSize.html","transferSize.javascript","contentSize.javascript","transferSize.css","transferSize.image","thirdParty.transferSize","thirdParty.requests","webpagetest.SpeedIndex","webpagetest.lastVisualChange","webpagetest.render","webpagetest.visualComplete","webpagetest.visualComplete95","webpagetest.TTFB","webpagetest.fullyLoaded","axe.critical","axe.serious","axe.minor","axe.moderate","cpu.longTasksTotalDuration","cpu.longTasks","cpu.totalBlockingTime","cpu.maxPotentialFid","sustainable.totalCO2","sustainable.co2PerPageView","sustainable.co2FirstParty","sustainable.co2ThirdParty"]]
--html.summaryBoxesThresholds Configure the thresholds for red/yellow/green for the summary boxes.
Text
@@ -234,54 +285,66 @@ Text

Sustainable
--sustainable.enable Test if the web page is sustainable. [boolean]
--sustainable.model Model used to measure digital carbon emissions. [choices: "1byte", "swd"] [default: "1byte"]
--sustainable.pageViews Number of page views used when calculating CO2.
--sustainable.disableHosting Disable the hosting check. By default we do a check against a local database of domains with green hosting, provided by the Green Web Foundation. [boolean] [default: false]
--sustainable.useGreenWebHostingAPI Instead of using the local copy of the hosting database, you can use the latest version through the Green Web Foundation API. This means sitespeed.io will make an HTTP GET request to fetch the hosting info. [boolean] [default: false]
API
--api.key The API key to use.
--api.action The type of API call you want to do: you can add a test and wait for the result, just add a test, or get the result. To get the result, make sure you add the id using --api.id [choices: "add", "addAndGetResult", "get"] [default: "addAndGetResult"]
--api.hostname The hostname of the API server.
--api.location The location of the worker that runs the test.
--api.silent Set to true if you do not want to log anything from the communication. [boolean] [default: false]
--api.port The port for the API.
--api.id The id of the test. Use it when you want to get the test result. [string]
--api.label Add a label to your test. [string]
--api.priority The priority of the test. Highest priority is 1.
--api.json Output the result as JSON.
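The same API options expressed as a config file. An illustrative sketch; the hostname, key and label are placeholders:

~~~json
{
  "api": {
    "hostname": "my.api.server",
    "key": "MY_API_KEY",
    "action": "addAndGetResult",
    "label": "release-42"
  }
}
~~~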
Matrix
--matrix.host The Matrix host.
--matrix.accessToken The Matrix access token.
--matrix.room The default Matrix room. It is always used. You can override the room per message type using --matrix.rooms
--matrix.messages Choose what type of message to send to Matrix. There are two types of messages: error messages and budget messages. Errors are errors that happen during the tests (failures like starting a test) and budget messages are tests failing against your budget. [choices: "error", "budget"] [default: ["error","budget"]]
--matrix.rooms Send messages to different rooms. Current message types are [error,budget]. If you want to send error messages to a specific room use --matrix.rooms.error ROOM
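Per-type rooms set with --matrix.rooms map to nested keys in a config file. A hedged sketch; host, token and room ids are placeholders:

~~~json
{
  "matrix": {
    "host": "matrix.example.org",
    "accessToken": "MY_ACCESS_TOKEN",
    "room": "!defaultRoom:example.org",
    "rooms": {
      "error": "!errorRoom:example.org"
    }
  }
}
~~~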
compare
--compare.id The id of the test. Will be used to find the baseline test, that is using the id as a part of the name. If you do not add an id, an id will be generated using the URL and that will only work if you baseline against the exact same URL. [string]
--compare.baselinePath Specifies the path to the baseline data file. This file is used as a reference for comparison against the current test data. [string]
--compare.saveBaseline Determines whether to save the current test data as the new baseline. Set to true to save the current data as baseline for future comparisons. [boolean] [default: false]
--compare.testType Selects the statistical test type to be used for comparison. Options are mannwhitneyu for the Mann-Whitney U test and wilcoxon for the Wilcoxon signed-rank test. [choices: "mannwhitneyu", "wilcoxon"] [default: "mannwhitneyu"]
--compare.alternative Specifies the alternative hypothesis to be tested. The default, greater, means current data is greater than the baseline; two-sided means we look for differences both ways; and less means current is less than the baseline. [choices: "less", "greater", "two-sided"] [default: "greater"]
--compare.wilcoxon.correction Enables or disables the continuity correction in the Wilcoxon signed-rank test. Set to true to enable the correction. [boolean] [default: false]
--compare.wilcoxon.zeroMethod Specifies the method for handling zero differences in the Wilcoxon test. wilcox discards all zero-difference pairs, pratt includes all, and zsplit splits them evenly among positive and negative ranks. [choices: "wilcox", "pratt", "zsplit"] [default: "zsplit"]
--compare.mannwhitneyu.useContinuity Determines whether to use continuity correction in the Mann-Whitney U test. Set to true to apply the correction. [boolean] [default: false]
--compare.mannwhitneyu.method [choices: "auto", "exact", "asymptotic"] [default: "auto"]
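A sketch of a compare setup in a config file, using the defaults spelled out above; the id is a placeholder:

~~~json
{
  "compare": {
    "id": "checkout-page",
    "saveBaseline": true,
    "testType": "mannwhitneyu",
    "alternative": "greater"
  }
}
~~~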
Options:
-V, --version Show version number [boolean]
--debugMessages Debug mode logs all internal messages in the message queue to the log. [boolean] [default: false]
-v, --verbose, --debug Verbose mode prints progress messages to the console. Enter up to three times (-vvv) to increase the level of detail. [count]
--browsertime.xvfb, --xvfb Start xvfb before the browser is started. [boolean] [default: false]
--browsertime.xvfbParams.display, --xvfbParams.display The display used for xvfb. [default: 99]
--browsertime.visualMetricsPortable Use the portable visual-metrics processing script (no ImageMagick dependencies). [boolean] [default: true]
--browsertime.enableProfileRun, --enableProfileRun Make one extra run that collects the profiling trace log (no other metrics are collected). For Chrome it will collect the timeline trace, for Firefox it will get the Geckoprofiler trace. This means you do not need to get the trace for all runs and can skip the overhead it produces. [boolean]
--browsertime.cjs, --cjs Load scripting files that end with .js as CommonJS. Default (false) loads files as ES modules. [boolean] [default: false]
--browsertime.tcpdump, --tcpdump Collect a tcpdump for each tested URL. The user that runs sitespeed.io should have sudo rights for tcpdump to work. [boolean] [default: false]
--browsertime.android, --android Short key to use Android. Will automatically use com.android.chrome for Chrome and stable Firefox. If you want to use another Chrome version, use --chrome.android.package [boolean] [default: false]
--browsertime.androidRooted, --androidRooted If your phone is rooted you can use this to set it up following Mozilla's best practice for stable metrics. [boolean] [default: false]
--browsertime.androidBatteryTemperatureLimit, --androidBatteryTemperatureLimit Does the battery temperature need to be below a specific limit before we start the test?
--browsertime.androidBatteryTemperatureWaitTimeInSeconds, --androidBatteryTemperatureWaitTimeInSeconds How long to wait (in seconds) if the androidBatteryTemperatureLimit is not met before the next try. [default: 120]
--browsertime.androidVerifyNetwork, --androidVerifyNetwork Before a test starts, verify that the device has an Internet connection by pinging 8.8.8.8 (or a configurable domain with --androidPingAddress). [boolean] [default: false]
--browsertime.iqr Use IQR, or Inter Quartile Range, filtering, which filters data based on the spread of the data. See https://en.wikipedia.org/wiki/Interquartile_range. In some cases, IQR filtering may not filter out anything. This can happen if the acceptable range is wider than the bounds of your dataset. [boolean] [default: false]
--browsertime.preWarmServer, --preWarmServer Make pre-test requests (that are not measured) to the URL(s) that you want to test, to make sure your web server is ready to serve. The pre-test requests are made with another browser instance that is closed after pre-testing is done. [boolean] [default: false]
--browsertime.preWarmServerWaitTime The wait time before you start the real testing after your pre-cache request. [number] [default: 5000]
--plugins.disable [array]
--plugins.load [array]
--mobile Access pages as a fake mobile device. Sets UA and width/height. For Chrome it will use the preset Moto G4 device. [boolean] [default: false]
--resultBaseURL, --resultBaseUrl The base URL to the server serving the HTML result. In the format of https://result.sitespeed.io
--gzipHAR Compress the HAR files with GZIP. [boolean] [default: false]
--outputFolder The folder where the result will be stored. If you do not set it, the result will be stored in "DOMAIN_OR_FILENAME_OR_SLUG/TIMESTAMP" [string]
--copyLatestFilesToBase Copy the latest screenshots to the root folder (so you can include them in Grafana). Does not work together with --outputFolder. [boolean] [default: false]
--firstParty A regex run against each request to categorize it as a first- vs third-party URL. (ex: ".*sitespeed.*"). If you do not set a regular expression, parts of the domain from the tested URL will be used: ".*domain.*"
--urlAlias Use an alias for the URL (if you feed URLs from a file you can instead have the alias in the file). You need to pass on the same amount of aliases as URLs. The alias is used as the name of the URL on the HTML report and in Graphite/InfluxDB. Pass on multiple --urlAlias for multiple alias/URLs. This will override aliases in a file. [string]
--groupAlias Use an alias for the group/domain. You need to pass on the same amount of aliases as URLs. The alias is used as the name of the group in Graphite/InfluxDB. Pass on multiple --groupAlias for multiple alias/groups. This does not work for scripting at the moment. [string]
--utc Use Coordinated Universal Time for timestamps. [boolean] [default: false]
--logToFile Store the log for your run in a file in logs/sitespeed.io.log. [boolean] [default: false]
--useHash If your site uses # for URLs and # gives you unique URLs you need to turn on useHash. By default it is turned off, meaning URLs with hash and without hash are treated as the same URL. [boolean] [default: false]
--multi Test multiple URLs within the same browser session (same cache etc). Only works with Browsertime. Use this if you want to test multiple pages (use journey) or want to test multiple pages with scripts. You can mix URLs and scripts (the order will matter): login.js https://www.sitespeed.io/ logout.js - More details: https://www.sitespeed.io/documentation/sitespeed.io/scripting/ [boolean] [default: false]
--name Give your test a name.
-o, --open, --view Open your test result in your default browser (Mac OS or Linux with xdg-open).
--slug Give your test a slug. The slug is used when you send the metrics to your data storage to identify the test and the folder of the tests. The max length of the slug is 200 characters and it can only contain a-z A-Z 0-9 and -_ characters.
--config Path to JSON config file
-h, --help Show help [boolean]
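Several of the general options above combine naturally in one config file, with each full option name becoming a top-level key. An illustrative sketch:

~~~json
{
  "slug": "my-test",
  "outputFolder": "results/my-test",
  "utc": true,
  "logToFile": true
}
~~~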
Read the docs at https://www.sitespeed.io/documentation/sitespeed.io/

@@ -84,6 +84,14 @@ If you want to test multiple URLs in a sequence (where the browser cache is not
docker run --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} --multi https://www.sitespeed.io https://www.sitespeed.io/documentation/
~~~

You can also add a group alias to the plain text file that replaces the domain part of the URL in the time series database. To do this, add a non-spaced string after each URL alias (this only works if you already have an alias for the URL):

~~~
http://www.yoursite.com/ Start_page Group1
http://www.yoursite.com/my/really/important/page/ Important_Page Group1
http://www.test.com/where/we/are/ We_are Group2
~~~

If you want to do more complicated things like logging in a user, adding items to a cart etc., check out [scripting](../scripting/).
@@ -117,7 +125,7 @@ You should throttle the connection when you are fetching metrics. We have a [spe

You can set the viewport & user agent, so you can fake testing a site as a mobile device.

The simplest way is to just add <code>--mobile</code> as a parameter. If you use Chrome it will use the preset Moto G4 device.

~~~bash
docker run --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} https://www.sitespeed.io --mobile
@@ -167,7 +175,7 @@ docker run --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io:{% include ve

You can keep all your configuration in a JSON file and then pass it on to sitespeed.io, and override with CLI parameters. We use [yargs](https://github.com/yargs/yargs) for the CLI and configuration.

The CLI parameters can easily be converted to JSON, using the full name of the CLI parameter. A simple example is when you configure which browser to use. The shorthand name is `-b` but if you check the help (`--help`) you can see that the full name is `browsertime.browser`. That means that `-b` and `--browsertime.browser` are the same. And in your JSON configuration that looks like this:

~~~json
{
@@ -231,6 +239,25 @@ You can also extend another JSON config file. The path needs to be absolute.
}
~~~

If you have a parameter that you want to repeat, for example setting multiple request headers, the field needs to be a JSON array. A single value is a plain string:

~~~json
{
  "browsertime": {
    "requestheader": "key:value"
  }
}
~~~

Multiple values go in an array:

~~~json
{
  "browsertime": {
    "requestheader": ["key:value", "key2:value2"]
  }
}
~~~

You can check out [our example configuration](https://github.com/sitespeedio/dashboard.sitespeed.io/tree/main/config) for [dashboard.sitespeed.io](https://dashboard.sitespeed.io).
@@ -1,3 +1,9 @@
googleWebVitals.timeToFirstByte
googleWebVitals.firstContentfulPaint
googleWebVitals.largestContentfulPaint
googleWebVitals.interactionToNextPaint
googleWebVitals.totalBlockingTime
googleWebVitals.cumulativeLayoutShift
timings.firstPaint
timings.firstContentfulPaint
timings.largestContentfulPaint

@@ -19,7 +25,10 @@ cpu.totalBlockingTime
cpu.maxPotentialFid
cpu.longTasks
cpu.longTasksTotalDuration
browser.cpuBenchmark
pageinfo.cumulativeLayoutShift
pageinfo.domElements
pageinfo.documentHeight
requests.total
requests.html
requests.javascript
@@ -89,11 +89,11 @@ Or use a configuration json:

# Configure the thresholds for red/yellow/green summary boxes

You can override the default configuration that defines the colors of the summary boxes. The default code is set [here](https://github.com/sitespeedio/sitespeed.io/blob/main/lib/plugins/html/setup/summaryBoxesDefaultLimits.js) and is a good starting point for what you can set.

Define your JSON file with the limits and feed it to sitespeed.io with `--html.summaryBoxesThresholds`.

Say that you are testing on a slow 3G connection and the default settings for first paint are unrealistic (1000 ms for green and over 2000 gives you red). Create a JSON file and name it summaryLimits.json:

~~~json
{
@@ -63,7 +63,7 @@ docker run --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io:{% include ve

## Configure/filter metrics
You can add/change/remove filters with **\-\-metrics.filter**. We use yargs to pass on parameters, and complicated parameters like metrics.filter work best if you use a configuration JSON file.

### Add a metric
If you want to add metrics, start by looking at the generated metrics file, so you can see what you would send.
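Since complicated parameters like metrics.filter work best in a configuration JSON file, a minimal sketch could look like the following. The metric names are only illustrative examples taken from the lists above, not a recommended set, and `myConfig.json` is a hypothetical file name:

~~~json
{
  "metrics": {
    "filter": ["timings.firstContentfulPaint", "googleWebVitals.*"]
  }
}
~~~

Feed the file to sitespeed.io with `--config myConfig.json`.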
@@ -1,7 +1,7 @@
---
layout: default
title: Set the connectivity type before you start your tests.
description: You can throttle the connection to make the connectivity slower to make it easier to catch regressions. The best way to do that is to setup a network bridge in Docker or use our connectivity engine Throttle.
keywords: connectivity, throttle, emulate, users
nav: documentation
category: sitespeed.io
@@ -19,8 +19,59 @@ twitterdescription:
## Change/set connectivity
You can and should throttle the connection to make the connectivity slower, making it easier to catch regressions. You can run your tests with different connectivity profiles; if you don't throttle, regressions/improvements that you see may be caused by your server's flaky internet connection.

The best way to do that is to use our connectivity engine [Throttle](https://github.com/sitespeedio/throttle), set up a network bridge in Docker, or use [Humble](https://github.com/sitespeedio/humble), the Raspberry Pi WiFi network link conditioner, if you test with mobile phones.

### Throttle
[Throttle](https://github.com/sitespeedio/throttle) uses *tc* on Linux and *pfctl* on Mac to change the connectivity. Throttle needs sudo rights for the user running sitespeed.io to work.

To use Throttle, set the connectivity engine with <code>--connectivity.engine throttle</code>.

~~~bash
browsertime --connectivity.engine throttle -c cable https://www.sitespeed.io/
~~~

or for sitespeed.io:

~~~bash
sitespeed.io --browsertime.connectivity.engine throttle -c cable https://www.sitespeed.io/
~~~

You can also use Throttle inside of Docker, but then the host needs to run the same OS as the container. In practice you can only use it on Linux. Make sure to run *sudo modprobe ifb numifbs=1* first and give the container the right privileges with *--cap-add=NET_ADMIN*.

First use modprobe:

~~~bash
sudo modprobe ifb numifbs=1
~~~

And then make sure you use the right privileges:
~~~bash
docker run --cap-add=NET_ADMIN --rm sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} -c 3g --browsertime.connectivity.engine=throttle https://www.sitespeed.io/
~~~

If you run Docker on OS X, you need to run Throttle outside of Docker. Install it and run like this:

~~~
# First install
$ npm install @sitespeed.io/throttle -g

# Then set the connectivity, run and stop
$ throttle cable
$ docker run --shm-size=1g --rm sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} https://www.sitespeed.io/
$ throttle stop
~~~

### Humble (for mobile phone testing)
To get Humble up and running you need a Raspberry Pi 4. The Pi will share a throttled WiFi network that you can use from your phone. Browsertime/sitespeed.io will connect to the Raspberry Pi and configure the connectivity on the WiFi before your test starts.

1. Follow [the instructions to setup the Raspberry Pi WiFi network](https://github.com/sitespeedio/humble#install-using-the-pre-made-image).
2. Make sure your phone uses the new WiFi (named `humble` by default).
3. Run the tests!

To make sure sitespeed.io sets the connectivity you need to set the engine to `humble` and set the URL to the Raspberry Pi:

~~~shell
sitespeed.io --browsertime.connectivity.engine=humble --browsertime.connectivity.humble.url=http://raspberrypi.local:3001 --android --browsertime.connectivity.profile 3g https://www.sitespeed.io
~~~

### Docker networks
Here's a full example of setting up Docker network bridges on a server that has tc installed:

@@ -72,52 +123,3 @@ docker network rm 3gslow
docker network rm cable
~~~

### TSProxy
[TSProxy](https://github.com/WPO-Foundation/tsproxy) is a traffic-shaping SOCKS5 proxy built by [Patrick Meenan](https://twitter.com/patmeenan). You need Python 2.7 for it to work. When you run it, Browsertime/sitespeed.io configures Firefox and Chrome to automatically use the proxy.

If you use Kubernetes you cannot use Docker networks or tc, but you can use TSProxy. However, there have been [many issues](https://github.com/WPO-Foundation/tsproxy/issues?q=is%3Aissue+is%3Aclosed) with TSProxy through the years, so if you can avoid using it, please do.

~~~bash
sitespeed.io --browsertime.connectivity.engine tsproxy -c cable https://www.sitespeed.io/
~~~
@@ -40,7 +40,6 @@ Setup a simple budget that checks the URLs you test against number of requests,
      "requests": 0
    },
    "score": {
      "accessibility": 100,
      "bestpractice": 100,
      "privacy": 100,
      "performance": 100
@@ -89,7 +88,7 @@ docker run -v ${WORKSPACE}:/sitespeed.io sitespeedio/sitespeed.io --outputFolder

The HTML result pages run JavaScript, so you need to change the [Jenkins Content Security Policy](https://wiki.jenkins-ci.org/display/JENKINS/Configuring+Content+Security+Policy) for them to work with the plugin.

When you start Jenkins make sure to set the environment variable <code>-Dhudson.model.DirectoryBrowserSupport.CSP="default-src 'self' 'unsafe-inline' 'unsafe-eval'; img-src 'self' 'unsafe-inline' data:;"</code>.

* If you want to break your build, you should generate a JUnit XML and use the built-in post task *Publish JUnit test result report*. Make sure to make the budget file available inside the Docker container. In this example we have it inside the Jenkins workspace.
@@ -146,7 +145,7 @@ workflows:
You will notice that the last run reads the performance budget file that exists in the git repo that was checked out. This will only work if you mount the checked-out repo as a volume for sitespeed.io. This makes it really efficient and convenient to let sitespeed.io pick up configuration files and to output results to a location where you can post-process them with other scripts.

## Gitlab CI
Gitlab has prepared an easy way to test using sitespeed.io: [https://docs.gitlab.com/ee/user/project/merge_requests/browser_performance_testing.html](https://docs.gitlab.com/ee/user/project/merge_requests/browser_performance_testing.html).

## Grunt plugin
Checkout the [grunt plugin](https://github.com/sitespeedio/grunt-sitespeedio).
@@ -182,6 +182,8 @@ And then start: `nohup ./loop.sh &`

To verify that everything works you should tail the log: `tail -f /tmp/sitespeed.io`

### Run on Mac

If you run on Mac you should use `screen` instead of *nohup*. First open a new screen instance: `screen`. Then start your tests: `./loop.sh`. Then detach your screen: `ctrl+A` and then press `D`. To resume the screen use `screen -x`.

### Stop your tests
@@ -0,0 +1,43 @@
---
layout: default
title: CPU Benchmark
description: Simple CPU benchmark!
keywords: cpu, documentation, web performance, sitespeed.io
nav: documentation
category: sitespeed.io
image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
twitterdescription: CPU benchmark
---

[Documentation]({{site.baseurl}}/documentation/sitespeed.io/) / CPU Benchmark

# CPU Benchmark
{:.no_toc}

* Lets place the TOC here
{:toc}

## How it works

We use a CPU benchmark inspired by Wikipedia's CPU benchmark included in the [Autonomous Systems performance report](https://performance.wikimedia.org/asreport/). It's super simple: it's a loop that we run in the browser after the page has finished loading and our other tests have finished. It produces a metric in milliseconds of how long it takes to run.
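As an illustration only (this is not sitespeed.io's actual benchmark code), such a benchmark boils down to timing a fixed chunk of busy work:

~~~javascript
// Hypothetical sketch of a CPU benchmark loop.
// Time a fixed amount of busy work and report the duration in milliseconds:
// a slower CPU (or a higher throttling rate) gives a larger number.
function cpuBenchmark(iterations) {
  const start = performance.now();
  let sum = 0;
  for (let i = 0; i < iterations; i++) {
    sum += Math.sqrt(i);
  }
  // Return the sum too, so the loop cannot be optimized away.
  return { durationMs: performance.now() - start, sum };
}

console.log(cpuBenchmark(1e6).durationMs);
~~~

Because the workload is fixed, the reported duration only varies with how fast the CPU can execute it, which is what makes it useful for comparing devices or tracking a test server over time.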

## How can you use it?
There are a couple of different use cases:
* Compare the benchmark on a real phone versus emulated mobile phone tests. Run the test on a phone and run it on your computer, and compare the values to get a feeling for how much faster (or slower) the tests are on the computer.
* Keep track of the benchmark over time on the server that runs your tests. If the benchmark is unstable, your other metrics will also be unstable.
* If you collect performance metrics from real users you can also collect the CPU metric from your users (make sure to do it off the main thread, as Wikipedia does). That way you can compare your synthetic tests with your RUM data.

Here's an example of what the variation looks like running on a Moto G4 running the tests in Chrome. We did eleven runs, and in the graph you see the min, median and max value.

{: .img-thumbnail}

## Test page
We also have a special test page you can use to see the benchmark in your own browser/computer/device without running sitespeed.io. Access the page
[https://www.sitespeed.io/cpu.html](https://www.sitespeed.io/cpu.html) and look at the benchmark metric.

You can also use the page to calibrate your CPU throttling rate when you use Chrome. Access the page, look at the result, and fine-tune your throttling rate.

~~~shell
sitespeed.io --chrome.CPUThrottlingRate 5 -b chrome https://www.sitespeed.io/cpu.html
~~~
@@ -24,7 +24,7 @@ sitespeed.io has a CrUx plugin that can collect data from the [Chrome User Exper
sitespeed.io --crux.key $CRUX_API_KEY https://www.sitespeed.io
~~~

If you send the data to Graphite you want to push the data to its own namespace ```--graphite.namespace sitespeedio.crux``` and you probably want to separate the data from your sitespeed.io data, so you can disable Browsertime and do one run just to get the CrUx data. CrUx data doesn't change that often, so you can just run it once per day.

The plugin collects data for the specific URL that you test AND the origin (domain).
@@ -55,34 +55,27 @@ The big picture looks something like this:
Here's an example of how to use sitespeed.io directly from NodeJS. This will generate the result to disk, but you will not get it as a JSON object (only the budget result). We may change that in the future. If you need the JSON you can either read it from disk or use the Browsertime plugin directly.

~~~javascript
import { run as runSitespeedio } from 'sitespeed.io';
const urls = ['https://www.sitespeed.io/'];

async function run() {
  try {
    const result = await runSitespeedio({
      urls,
      browsertime: {
        iterations: 1,
        connectivity: {
          profile: 'native',
          downstreamKbps: undefined,
          upstreamKbps: undefined,
          latency: undefined,
          engine: 'external'
        },
        browser: 'chrome'
      }
    });
    console.log(result);
  } catch (error) {
    console.error(error);
  }
}

await run();
~~~

### Use Browsertime from NodeJS
@@ -90,23 +83,23 @@
In this example you run Browsertime directly from NodeJS, using the default JavaScripts to collect metrics.

~~~javascript
import { BrowsertimeEngine, browserScripts } from 'browsertime';

// The setup is the same configuration as you use in the CLI
const browsertimeSetupOptions = { iterations: 1, browser: 'chrome' };
const engine = new BrowsertimeEngine(browsertimeSetupOptions);
// You can choose what JavaScript to run, in this example we use the default categories
// and the default JavaScript
const scriptCategories = await browserScripts.allScriptCategories();
let scriptsByCategory = await browserScripts.getScriptsForCategories(
  scriptCategories
);

async function run() {
  try {
    await engine.start();
    // Get the result
    const result = await engine.run('https://www.sitespeed.io/', scriptsByCategory);
    console.log(result);
  } catch (e) {
    console.error(e);
@@ -115,7 +108,7 @@ async function run() {
  }
}

await run();
~~~

## Developing sitespeed.io
@@ -125,7 +118,7 @@ On your local machine you need:

- [Install NodeJS](https://nodejs.org/en/download/) latest LTS version.
- You need Git; fork [sitespeed.io](https://github.com/sitespeedio/sitespeed.io) and clone the forked repository.
- Install Chrome/Firefox/Edge
- Go to the cloned directory and run <code>npm install</code>
- You are ready to go! To run locally: <code>bin/sitespeed.js https://www.sitespeed.io -n 1</code>
- You can change the log level by adding the verbose flag. Verbose mode prints progress messages to the console. Enter it up to three times (-vvv) to increase the level of detail: <code>bin/sitespeed.js https://www.sitespeed.io -n 1 -v</code>
@@ -174,9 +167,7 @@ If you are new to pug you can use [https://html2jade.org](https://html2jade.org)

We love pull requests, and before you make a big change or add functionality, please open an issue proposing the change to other contributors so you get feedback on the idea before you take the time to write precious code!

When you make your pull request, you can follow the guide from GitHub on [how to make a pull request from a fork](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork).

#### Before you send the pull request
@@ -186,6 +177,13 @@ Before you send the PR make sure you:
* Make sure your code doesn't break any tests: <code>npm test</code>
* Update the documentation [https://github.com/sitespeedio/sitespeed.io/tree/main/docs](https://github.com/sitespeedio/sitespeed.io/tree/main/docs) in another pull request. When we merge the PR the documentation will automatically be updated, so we do that when we push the next release.

### Debug metrics
Sometimes you want to verify that the metrics are correct. How do you do that?
#### Visual metrics
The best way to verify that visual metrics are correct is to look at the filmstrip view and check that the metrics correlate with the filmstrip. Through the years browsers have changed the URL bar or added small infoboxes at the bottom of the browser window that affect visual metrics. You can easily see if those are picked up by looking at the filmstrip.

If you don't have the filmstrip you can compare first visual change from visual metrics with first contentful paint; they usually match pretty well.

### Do a sitespeed.io release
When you become a member of the sitespeed.io team you can push releases. You do that by running the release bash script in the root: <code>./release.sh</code>
@@ -195,7 +193,7 @@ To be able to release a new version you need to have access to our Docker account

To do a release you need to first install np (a better *npm publish*): <code>npm install --global np</code>

Before you do a release, remember to let your latest code change run a couple of hours on our test server before you push the release (the latest code is automatically deployed on the test server). You will find errors from the test server in the [#alert channel on Slack](https://join.slack.com/t/sitespeedio/shared_invite/zt-296jzr7qs-d6DId2KpEnMPJSQ8_R~WFw).

Do the release:
@@ -19,12 +19,15 @@ twitterdescription: Use Docker to run sitespeed.io.

## Containers

Docker makes it easier to run sitespeed.io because you don't need to install every dependency needed for recording and analysing the browser screen. It's also easy to update your container to a new sitespeed.io version by changing the Docker tag. The drawback of using Docker is that it adds some overhead, and the container is Linux only (the browsers are Linux versions).

We publish containers for AMD and ARM. The AMD containers contain the latest Chrome/Firefox/Edge. The ARM containers are behind and use the latest Chrome/Firefox that was published for ARM.

We have four ready-made containers:
* One slim container that contains only Firefox. You run Firefox headless. Use the container `sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %}-slim`. The container does not have FFmpeg and ImageMagick so you cannot get any Visual Metrics using this container.
* One with [Chrome, Firefox and Edge](https://hub.docker.com/r/sitespeedio/sitespeed.io/). It also contains FFmpeg and ImageMagick, so we can record a video and get metrics like Speed Index using [VisualMetrics](https://github.com/WPO-Foundation/visualmetrics). This is the default container; use it with `sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %}`. If you use the *arm64* version of the container, that container will have Firefox and Chromium installed.
* One container that is based on the default container and includes the [Google Page Speed Insights](https://github.com/sitespeedio/plugin-gpsi) and [Lighthouse plugin](https://github.com/sitespeedio/plugin-lighthouse). Use it with `sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %}-plus1`.
* Another container that is based on the default container and includes the [WebPageTest plugin](https://github.com/sitespeedio/plugin-webpagetest). Use it with `sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %}-webpagetest`.

### Structure
@@ -40,7 +43,7 @@ The [slim container](https://github.com/sitespeedio/sitespeed.io/blob/main/Docke
We lock down the browsers to specific versions for maximum compatibility and stability with sitespeed.io's current feature set, upgrading once we verify browser compatibility.
{: .note .note-info}

## Running using Docker

The simplest way to run using Chrome:
@@ -63,6 +66,21 @@ docker run --shm-size 2g --rm -v "$(pwd):/sitespeed.io" sitespeedio/sitespeed.io
Using `-v "$(pwd):/sitespeed.io"` will map the current directory inside Docker and output the result directory there.
{: .note .note-info}

## Running on Mac M1 ARM
We have an ARM container that will be used by default, but it uses an older version of Chromium and a newer version of Firefox. The problem is that the Chrome team does not build Chrome/Chromium for ARM Linux, so we rely on *ppa:saiarcot895/chromium-beta* and use the latest version from there.

It's probably better to run the AMD containers. If you have a newer version of Docker Desktop installed, you can *"Use Rosetta for x86/amd64 emulation"* to run the AMD containers. Go to settings and turn it on (see the screenshot).

{: .img-thumbnail}

Then run by specifying the platform *--platform linux/amd64*:

```bash
docker run --rm -v "$(pwd):/sitespeed.io" --platform linux/amd64 sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} https://www.sitespeed.io/
```

## More about volumes

If you want to feed sitespeed.io with a file with URLs, or if you want to store the HTML result, you should set up a volume. Sitespeed.io will do all the work inside the container in a directory located at _/sitespeed.io_. To set up your current working directory, add _-v "$(pwd):/sitespeed.io"_ to your parameter list. Using "$(pwd)" will default to the current directory. In order to specify a static location, simply define an absolute path: _-v /Users/sitespeedio/html:/sitespeed.io_
@@ -194,4 +212,9 @@ Enter any password. This will start your VNC server which you can use by any VNC
- Download a VNC client like RealVNC
- Enter the VNC server: `0.0.0.0:5900`
- When prompted for a password, enter the password you entered while creating the VNC server.
- You should be able to view the contents of `Xvfb`.

## Security
In our build process we [run the Trivy vulnerability scanner](https://github.com/sitespeedio/sitespeed.io/blob/main/.github/workflows/docker-scan.yml) on the Docker image, and we break builds on *CRITICAL* issues. The reason is that if we broke builds on *HIGH* issues we would probably never be able to release any containers. We update the OS in Docker continuously, but it can happen that it sometimes has HIGH issues.

If you need a container that does not have lower-severity security issues, you can build your own container and manage it yourself.
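If you go down that route, a minimal sketch could look like the following. This is an assumption-laden example, not an officially supported setup: it assumes you base your image on the official one, that the build user can run apt-get, and you should adjust the tag to the version you actually use:

~~~Dockerfile
# Hypothetical example: rebuild on top of the official image
# and pull in the latest OS security fixes yourself.
FROM sitespeedio/sitespeed.io:latest
RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*
~~~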