Documentation changes for 11.0 (#2756)

Peter Hedenskog 2019-11-07 09:45:39 +01:00 committed by GitHub
parent a9642711c0
commit 6b2f38d6ba
84 changed files with 842 additions and 320 deletions


@@ -10,7 +10,7 @@ Please make sure you run the [latest version](https://www.npmjs.com/package/site
If you find a defect, please file a bug report. Include the following:
- Explain the bug/defect and what you were doing.
- OS & versions
- Always add the URL of the page you were analyzing (if it is secret, drop me an email peter**at**soulgalore.com and send me the address).
- Always add the URL of the page you were analysing (if it is secret, drop me an email peter**at**soulgalore.com and send me the address).
- Add a screenshot and clearly point out where the defect is (if applicable)
- Include the content of the sitespeed.io.log file in a [gist](https://gist.github.com/) and attach it to the issue.

.spelling (new file, 49 lines added)

@@ -0,0 +1,49 @@
sitespeed.io
sitespeed.io.
sitespeed_io
Grafana
Browsertime
browsertime
graphite.db
grafana.db
toc
img-thumbnail
no_toc
statsd
WebPageTest
cli
WebPageReplay
localhost
mahimahi
xvfb
img-thumbnail-center
InfluxDB
GitHub
SpeedIndex
VisualMetrics
FirstVisualChange
VisualComplete
NodeJS
npm
crontab
lossless
https
png
jpg
api
plugin
plugins
TSProxy
JUnit
Imagemagick
FFMpeg
WebDriver
GeckoDriver
ChromeDriver
Leaderboard
Homebrew
SafariDriver
DevTools
sudo
PageXray
leaderboard


@@ -10,7 +10,7 @@
### Added
* Make it possible to configure which data to show in the columns as in [#2001](https://github.com/sitespeedio/sitespeed.io/issues/2001), fixed in PR [#2711](https://github.com/sitespeedio/sitespeed.io/pull/2711). Thank you [thapasya-m](https://github.com/thapasya-m) for the PR!
* Chrome/Chromedriver 78 and Firefox 70.
* Chrome/ChromeDriver 78 and Firefox 70.
* Use AXE in budget [#2718](https://github.com/sitespeedio/sitespeed.io/pull/2718).
* Upgraded to Axe Core 3.4.0 [#2723](https://github.com/sitespeedio/sitespeed.io/pull/2723).
* Added contentSize to budget [#2721](https://github.com/sitespeedio/sitespeed.io/pull/2721).
@@ -101,7 +101,7 @@ to run Axe! [#2676](https://github.com/sitespeedio/sitespeed.io/pull/2676).
## 10.0.1 - 2019-09-12
### Fixed
* Updated Browsertime with stable Chromedriver (instead of beta), do not show First Paint for Safari, and fixing getting long task data if you first navigate and then measure a URL. See the [Browsertime changelog](https://github.com/sitespeedio/browsertime/blob/master/CHANGELOG.md#601---2019-09-12) for all the info.
* Updated Browsertime with stable ChromeDriver (instead of beta), do not show First Paint for Safari, and fixing getting long task data if you first navigate and then measure a URL. See the [Browsertime changelog](https://github.com/sitespeedio/browsertime/blob/master/CHANGELOG.md#601---2019-09-12) for all the info.
## 10.0.0 - 2019-09-11
### Added
@@ -151,7 +151,7 @@ to run Axe! [#2676](https://github.com/sitespeedio/sitespeed.io/pull/2676).
## 9.8.0 - 2019-08-01
### Added
* We updated the Docker container to use Chrome 76 and switched to Chromedriver 76. We had some issues with Chrome 76 (or Chromedriver) that increased number of times we got errors converting the Chrome trace log because of missing navigationStart events (see [#902](https://github.com/sitespeedio/browsertime/issues/902)) on our test servers. But that seems fixed with [#904](https://github.com/sitespeedio/browsertime/pull/904).
* We updated the Docker container to use Chrome 76 and switched to ChromeDriver 76. We had some issues with Chrome 76 (or ChromeDriver) that increased number of times we got errors converting the Chrome trace log because of missing navigationStart events (see [#902](https://github.com/sitespeedio/browsertime/issues/902)) on our test servers. But that seems fixed with [#904](https://github.com/sitespeedio/browsertime/pull/904).
## 9.7.0 - 2019-07-29
@@ -228,8 +228,8 @@ In this release we moved functionality for Chrome from our [browser extension](h
## Added
* Upgraded to Chrome 75 and Firefox 67.0.1 in the Docker container.
* Upgraded to use Chromedriver 75.
* Upgraded the Coach that also uses latest Chrome and Chromedriver.
* Upgraded to use ChromeDriver 75.
* Upgraded the Coach that also uses latest Chrome and ChromeDriver.
* New Browsertime:
* Added metric LastMeaningfulPaint that will be there when you collect `--visualElements` [848](https://github.com/sitespeedio/browsertime/pull/848).
* You can get screenshots in your Chrome trace log using `--chrome.enableTraceScreenshots` [#851](https://github.com/sitespeedio/browsertime/pull/851)
@@ -325,7 +325,7 @@ Using CPU metrics on Android phones was broken since 9.0.0, fixed in [#844](http
## 8.15.0 - 2019-04-23
### Added
* Use Chrome 74 stable in the Docker container and ChromeDriver 74 (you need to upgrade to Chrome 74).
* Upgraded Coach to match latest Browsertime version with Chrome and upgraded Browsertime to fix a mismatched locked file in npm for Chromedriver.
* Upgraded Coach to match latest Browsertime version with Chrome and upgraded Browsertime to fix a mismatched locked file in npm for ChromeDriver.
### Fixed
* We displayed error on the summary page even though we didn't have an error.
@@ -344,7 +344,7 @@ Using CPU metrics on Android phones was broken since 9.0.0, fixed in [#844](http
* You can add meta data to your script with `commands.meta.setTitle(title)` and `commands.meta.setDescription(desc)`
* Upgrading to [Browsertime 4.8.0](https://github.com/sitespeedio/browsertime/blob/master/CHANGELOG.md#480---2019-04-23) fixes it so errors thrown from your script hold a usable error message instead of the wrapped Chromedriver error.
* Upgrading to [Browsertime 4.8.0](https://github.com/sitespeedio/browsertime/blob/master/CHANGELOG.md#480---2019-04-23) fixes it so errors thrown from your script hold a usable error message instead of the wrapped ChromeDriver error.
### Fixed
* If a page failed, pug through an error [#2428](https://github.com/sitespeedio/sitespeed.io/pull/2428)
@@ -466,7 +466,7 @@ Using CPU metrics on Android phones was broken since 9.0.0, fixed in [#844](http
### Fixed
- In some cases alias wasn't picked up for URLs sent to Graphite/InfluxDB as reported in [#2341](https://github.com/sitespeedio/sitespeed.io/issues/2341) and fixed in [#2373](https://github.com/sitespeedio/sitespeed.io/pull/2373). Thank you [James Leatherman](https://github.com/leathej1) for taking the time to find a reproducable test case!
- In some cases alias wasn't picked up for URLs sent to Graphite/InfluxDB as reported in [#2341](https://github.com/sitespeedio/sitespeed.io/issues/2341) and fixed in [#2373](https://github.com/sitespeedio/sitespeed.io/pull/2373). Thank you [James Leatherman](https://github.com/leathej1) for taking the time to find a reproducible test case!
- Moved to internal UTC support in dayjs [#2370](https://github.com/sitespeedio/sitespeed.io/pull/2370).
## 8.7.3 - 2019-03-07
@@ -661,7 +661,7 @@ Using CPU metrics on Android phones was broken since 9.0.0, fixed in [#844](http
- New tab showing the filmstrip (if you record a video and keep the screenshots). We had the screenshots forever but never did anything with them. Inspired by [Stefan Burnicki](https://github.com/sburnicki)'s work on https://github.com/iteratec/wpt-filmstrip [#2274](https://github.com/sitespeedio/sitespeed.io/pull/2274).
- Show Server Timings in the metric section (if the page uses Server Timing) [#2277](https://github.com/sitespeedio/sitespeed.io/pull/2277).
- Upgraded the Docker container to use Chrome 72 and Firefox 65.
- Upgraded to [Browsertime 4.1](https://github.com/sitespeedio/browsertime/blob/master/CHANGELOG.md#410---2019-01-31) with latest Chromedriver and Geckodriver. There's also a new command `js.runAndWait('')` that makes it possible to run your own JavaScript, click a link and wait on page navigation.
- Upgraded to [Browsertime 4.1](https://github.com/sitespeedio/browsertime/blob/master/CHANGELOG.md#410---2019-01-31) with latest ChromeDriver and Geckodriver. There's also a new command `js.runAndWait('')` that makes it possible to run your own JavaScript, click a link and wait on page navigation.
### Fixed
@@ -761,7 +761,7 @@ Read the blog post: [https://www.sitespeed.io/sitespeed.io-8.0-and-browsertime.4
### Fixed
- New Browsertime 3.10.0 with latest Chromedriver and a fix for the bug when you set a cookie and the same time use --cacheClearRaw.
- New Browsertime 3.10.0 with latest ChromeDriver and a fix for the bug when you set a cookie and the same time use --cacheClearRaw.
- Upgraded to Perf Cascade 2.5.5
### Added
@@ -814,7 +814,7 @@ Read the blog post: [https://www.sitespeed.io/sitespeed.io-8.0-and-browsertime.4
- We also added a new feature: If you run your own custom script you can now feed it with different input by using `--browsertime.scriptInput.*`. Say you have a script named myScript, you can pass data to it with `--browsertime.scriptInput.myScript 'super-secret-string'`. More about this in the documentation in the coming weeks.
- Upgraded to Chromedriver 2.42.0
- Upgraded to ChromeDriver 2.42.0
- You can include screenshots in annotations sent to Graphite/InfluxDB [#2144](https://github.com/sitespeedio/sitespeed.io/pull/2144). This makes it easy that from within Grafana see screenshots from every run.
@@ -955,16 +955,16 @@ and Coach 2.0.4.
### Added
- Upgraded to Chrome 67, see [#2069](https://github.com/sitespeedio/sitespeed.io/issues/2069) about possible performance regressions. At least for Wikipedia some URLs are slower on 67 than 66. And since 67 is now rolled out to a lot of people, you probably want to test with that version. See https://bugs.chromium.org/p/chromium/issues/detail?id=849108
- Upgraded to Browsertime 3.1.2 with Chromedriver 2.40
- Upgraded to Browsertime 3.1.2 with ChromeDriver 2.40
- Upgraded to Firefox 61 beta13
- Upgraded ADB to work together with Chromedriver > 2.38, making driving Chrome on Android from Ubuntu Docker container work again.
- Upgraded ADB to work together with ChromeDriver > 2.38, making driving Chrome on Android from Ubuntu Docker container work again.
## 7.0.3 - 2018-06-02
### Fixed
- Upgraded to PerfCascade 2.5.2 that fixes an Edge tab bug.
- Upgraded to Browsertime 3.1.0 with new Chromedriver (2.39).
- Upgraded to Browsertime 3.1.0 with new ChromeDriver (2.39).
- Upgraded to Browsertime 3.1.1 with a fix for HTTP2 pushes in Chrome [#2068](https://github.com/sitespeedio/sitespeed.io/issues/2068).
## 7.0.2 - 2018-06-01
@@ -1140,7 +1140,7 @@ As a sitespeed.io user there shouldn't be any breaking changes upgrading from 6.
### Fixed
- Upgraded to Browsertime 2.1.4 with [new bug fixes](https://github.com/sitespeedio/browsertime/blob/master/CHANGELOG.md) and newer Chromedriver.
- Upgraded to Browsertime 2.1.4 with [new bug fixes](https://github.com/sitespeedio/browsertime/blob/master/CHANGELOG.md) and newer ChromeDriver.
- Fixed the start script so that you on Ubuntu can run WebPageReplay in the Docker container for your Android phone.
@@ -1168,7 +1168,7 @@ As a sitespeed.io user there shouldn't be any breaking changes upgrading from 6.
### Added
- Use Chromedriver 2.34
- Use ChromeDriver 2.34
- Configure the page complete time when you use WebPageReplay. Add -e WAIT 5000 to wait 5000 ms.
### Fixed
@@ -1303,7 +1303,7 @@ the url would be treated as a plugin name, and the command would fail.
### Fixed
- Upgraded to Browsertime 1.9.4 with latest Chromedriver that fixes launching Chrome > 61
- Upgraded to Browsertime 1.9.4 with latest ChromeDriver that fixes launching Chrome > 61
- Fixed custom metrics problem with WebPageTest [#1737](https://github.com/sitespeedio/sitespeed.io/issues/1737)
## 5.6.3 2017-10-03
@@ -1403,7 +1403,7 @@ the url would be treated as a plugin name, and the command would fail.
- You can now get a list of largest and slowest third party assets [#1613](https://github.com/sitespeedio/sitespeed.io/issues/1613).
- Upgraded to latest Browsertime:
- Upgraded to Geckodriver 0.17.0 seems to fix [#321](https://github.com/sitespeedio/browsertime/issues/321).
- Upgraded Chromedriver 2.30 with a very special hack to fix [#347](https://github.com/sitespeedio/browsertime/pull/347).
- Upgraded ChromeDriver 2.30 with a very special hack to fix [#347](https://github.com/sitespeedio/browsertime/pull/347).
- Pickup metrics from the Paint Timing API [#344](https://github.com/sitespeedio/browsertime/pull/344), will work in Chrome 60.
- Updated the Docker container to Firefox 54 and Chrome 60 (beta) to fix the background color problem. [Chrome bug 727046](https://bugs.chromium.org/p/chromium/issues/detail?id=727046).
- If you run Chrome 60+ you will now see the metrics from the Paint Timing API in the Browsertime tab.
@@ -1430,7 +1430,7 @@ the url would be treated as a plugin name, and the command would fail.
### Fixed
- The link in the HTML to the Chrome trace log is not working.
- Upgraded to Browsertime 1.2.7 that downgrades Chromedriver to 2.28 to make collecting trace logs work again.
- Upgraded to Browsertime 1.2.7 that downgrades ChromeDriver to 2.28 to make collecting trace logs work again.
## 5.2.0 2017-05-24
@@ -1521,7 +1521,7 @@ There's one change in 5.0 that changes the default behavior: TSProxy isn't defau
### Fixed
- New Chromedriver 2.28.0 that fixes "Cannot get automation extension from unknown error: page could not be found ..."
- New ChromeDriver 2.28.0 that fixes "Cannot get automation extension from unknown error: page could not be found ..."
- The help for budget had wrong example parameter. Use --budget.configPath for path to the config.
## 4.6.0 2017-03-10
@@ -1828,9 +1828,9 @@ There's one change in 5.0 that changes the default behavior: TSProxy isn't defau
Version 4.0 is a ground up rewrite for Node.js 6.9.1 and newer. It builds on all our experience since shipping 3.0 in December 2014, the first version to use Node.js.
- We support HTTP/2! In 3.X we used PhantomJS and a modified version of YSlow to analyze best practice rules. We also had BrowserMobProxy in front of our browsers that made it impossible to collect metrics using H2. We now use the coach and Firefox/Chrome without a proxy. That makes it easier for us to adapt to browser changes and changes in best practices.
- We support HTTP/2! In 3.X we used PhantomJS and a modified version of YSlow to analyse best practice rules. We also had BrowserMobProxy in front of our browsers that made it impossible to collect metrics using H2. We now use the coach and Firefox/Chrome without a proxy. That makes it easier for us to adapt to browser changes and changes in best practices.
- We got the feature that people asked about the most: Measure a page as a logged in user. Use --browsertime.preScript to run a selenium task before the page is analyzed. Documentation is coming soon.
- We got the feature that people asked about the most: Measure a page as a logged in user. Use --browsertime.preScript to run a selenium task before the page is analysed. Documentation is coming soon.
- New HAR files rock! In the old version we used BrowserMobProxy as a proxy in front of the browser to collect the HAR. In the new version we collect the HAR directly from the browser. For Firefox we use the HAR export trigger and in Chrome we generate it from the performance log.
@@ -2050,9 +2050,9 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
- Everything! Rewrite from scratch in progress. This is an alpha release, try it test it but do not upgrade in production yet (https://github.com/sitespeedio/sitespeed.io/issues/945).
- We support HTTP/2! In 3.X we used PhantomJS and a modified version of YSlow to analyze best practice rules. We also had BrowserMobProxy in front of our browsers that made it impossible to collect metrics using H2. We now use [the coach](https://github.com/sitespeedio/coach) and Firefox/Chrome without a proxy. That makes it easier for us to adapt to browser changes and changes in best practices.
- We support HTTP/2! In 3.X we used PhantomJS and a modified version of YSlow to analyse best practice rules. We also had BrowserMobProxy in front of our browsers that made it impossible to collect metrics using H2. We now use [the coach](https://github.com/sitespeedio/coach) and Firefox/Chrome without a proxy. That makes it easier for us to adapt to browser changes and changes in best practices.
- We now support the feature that people asked about the most: Measure a page as a logged in user. Use --browsertime.preTask to run a selenium task before the page is analyzed. Documentation is coming soon.
- We now support the feature that people asked about the most: Measure a page as a logged in user. Use --browsertime.preTask to run a selenium task before the page is analysed. Documentation is coming soon.
- New HAR files rock! In the old version we used BrowserMobProxy as a proxy in front of the browser to collect the HAR. In the new version we collect the HAR directly from the browser. For Firefox we use the [HAR export trigger](https://github.com/firebug/har-export-trigger) and in Chrome we generate it from the performance log.
@@ -2269,7 +2269,7 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
## version 3.2.8 - 2015-04-13
- Use --postURL to POST the result of an analyze to a URL
- Use --postURL to POST the result of an analyse to a URL
- Use --processJson to rerun all the post tasks on a result, use it to reconfigure what data to show in the HTML output.
- Bug fix: extra check when generating Graphite keys. #642
@@ -2363,11 +2363,11 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
- Enable verbose logging in Browsertime whenever Sitespeed.io runs in verbose mode (--verbose/-v).
- Check that location for WPT always contains location and browser
- Bumped BrowserTime, new version making sure it will not hang when Selenium/Chromedriver has problems.
- Bumped BrowserTime, new version making sure it will not hang when Selenium/ChromeDriver has problems.
## version 3.1.4 - 2015-02-16
- Log the time the analyze of the URL(s) took #578
- Log the time the analyse of the URL(s) took #578
## version 3.1.3 - 2015-02-13
@@ -2486,7 +2486,7 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
## version 2.5.4 - 2014-01-28
- Bug fix: If phantomJS fails, the whole analyze fails (introduced in 2.5.x) #359
- Bug fix: If phantomJS fails, the whole analyse fails (introduced in 2.5.x) #359
- The crawler now handles gzipped content #263
## version 2.5.3 - 2014-01-25
@@ -2545,7 +2545,7 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
## version 2.2.2 - 2013-11-14
- Bug fix: User marks named with spaces broke the summary.xml
- Bug fix: Sites with extremely far away last modification time on an asset, could break an analyze
- Bug fix: Sites with extremely far away last modification time on an asset, could break an analyse
- Upgraded Browser Time version to 0.4, getting back custom user measurements.
## version 2.2.1 - 2013-11-12
@@ -2561,7 +2561,7 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
- New BrowserTime version (0.3) including backEndTime & frontEndTime
- Changed default summary page to show backend & frontend time (removed redirectionTime & domInteractiveTime)
- Increased timeout for the crawler for really slow pages
- Bug fix: The fix for removing invalid XML characters created by GA, sometimes broke the analyze, now fixed (#304)
- Bug fix: The fix for removing invalid XML characters created by GA, sometimes broke the analyse, now fixed (#304)
## version 2.1.1 - 2013-11-05
@@ -2578,7 +2578,7 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
- Output the input parameters to the error.log so it is easy to reproduce the error
- Centralized the error logging
- Added an easy way of include sitespeed.io in Travis-CI
- Made it possible to analyze a site with non signed certificates
- Made it possible to analyse a site with non signed certificates
- Prepared for HTTP 2.0 rules & renamed the current rulesets, new names: sitespeed.io-desktop & sitespeed.io-mobile
- Also copy the result.xml file to the output dir for sitespeed.io-junit.xml (to be able to create graphs per URL)
- Bug fix: The crawler sometimes picked up URL:s linking to other content types than HTML
@@ -2605,7 +2605,7 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
- Simplified user agent by choosing between iphone, ipad or nexus and a real agent & viewport is set.
- Output as CSV: Choose which column to output and always output ip, start url & date.
- Fix for Windows users that have spaces in their path to Java.
- Bug fix: URL:s that returns error (4XX-5XX and that sitespeed can't analyze) is now included in the JUnit xml.
- Bug fix: URL:s that returns error (4XX-5XX and that sitespeed can't analyse) is now included in the JUnit xml.
- Bug fix: The JUnit script can now output files to a relative path.
- Bug fix: User Agent is now correctly set.
@@ -2655,7 +2655,7 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
## version 1.7
- Added check that Java exists before the analyze
- Added check that Java exists before the analyse
- Feed sitespeed with either a url to crawl or a plain text file with a list of URL:s (NOTE: the -f argument is now used for the file, the -c is the new for follow a specific path when crawling)
- Create a junit xml file from the test, new script & new xsl file
- Added new max size of a document, using stats from http archive
@@ -2694,7 +2694,7 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
## version 1.5
- Added support for configuring the crawler (see the dependencies/crawler.properties file).
- Added support for analyze behind proxy (thanks https://github.com/rhulse and https://github.com/samteeeee for reporting and testing it)
- Added support for analyse behind proxy (thanks https://github.com/rhulse and https://github.com/samteeeee for reporting and testing it)
- Added html page that shows url:s that returned errors from the crawl
- Added percentage on summary page
- Added support for setting user agent
@@ -2753,7 +2753,7 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
- New crawler instead of wget that didn't work on some sites with spider options (amazon etc)
- Fix for css in head rule, now only dns lookups are punished, not the number of css
- Crawl by follow a specific path, meaning you can analyze parts of sites
- Crawl by follow a specific path, meaning you can analyse parts of sites
## version 1.0.1
@@ -2764,7 +2764,7 @@ And many many more changed. Read about the release https://www.sitespeed.io/site
## version 1.0 - 2012-10-10
- Show full urls in pages & page to easier understand which url that is analyzed
- Show full urls in pages & page to easier understand which url that is analysed
- Show extra data in modals to make it clearer
- Popover & better texts on summary page
- Cleanup & bug fixes in the bash script, it sometimes failed on some sites when yslow outputted content after the xml


@@ -1,6 +1,6 @@
# sitespeed.io Github action
# sitespeed.io GitHub action
If you are using [Github Actions](https://github.com/features/actions) beta it's super easy to run sitespeed.io. Remember though that actions are in beta and can change. They are running on small instances at the moment so you shouldn't rely on timing metrics.
If you are using [GitHub Actions](https://github.com/features/actions) beta it's super easy to run sitespeed.io. Remember though that actions are in beta and can change. They are running on small instances at the moment so you shouldn't rely on timing metrics.
Actions works good with a [performance budget](https://www.sitespeed.io/documentation/sitespeed.io/performance-budget/). You should set your budget in a file in the repo that you are testing. In this example we call the file *budget.json* and put it in the *.github* folder in the repo.
@@ -28,7 +28,7 @@ Setup a simple budget that check the URLs you test against number of requests, t
}
```
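For reference, a complete budget file in that format could look like the sketch below. Only the `requests` and `transferSize` checks mentioned in the text are used here, and the thresholds are illustrative; check the performance budget documentation for the full list of metrics.

```json
{
  "budget": {
    "requests": {
      "total": 100
    },
    "transferSize": {
      "total": 400000
    }
  }
}
```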
Then you can set up your action either via the Github GUI or using configuration. Make sure to point your action to the right Docker file: ```docker://sitespeedio/sitespeed.io:8.0.6-action```.
Then you can set up your action either via the GitHub GUI or using configuration. Make sure to point your action to the right Docker file: ```docker://sitespeedio/sitespeed.io:8.0.6-action```.
A simple setup looks something like this:
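As a sketch, a minimal workflow in the HCL format that the Actions beta used could look like the following (the workflow name, test URL and budget path are placeholders, not taken from this page):

```
workflow "Run sitespeed.io" {
  on = "push"
  resolves = ["sitespeed.io"]
}

action "sitespeed.io" {
  uses = "docker://sitespeedio/sitespeed.io:8.0.6-action"
  args = "https://www.example.org --budget.configPath .github/budget.json"
}
```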


@@ -4,7 +4,8 @@
<div class="grid small">
<div class="col-1-5 hide-on-mobile">
<p>
<img src="{{site.baseurl}}/img/black-logo-120.png" width="60" height="64" class="img-footer" alt="sitespeed.io logo in black">
<img src="{{site.baseurl}}/img/black-logo-120.png" width="60" height="64" class="img-footer"
alt="sitespeed.io logo in black">
</p>
<p>sitespeed.io</p>
@@ -28,7 +29,7 @@
<ul>
<li><a href="https://twitter.com/SiteSpeedio">Twitter</a></li>
<li><a href="https://www.facebook.com/sitespeed.io">Facebook</a></li>
<li><a href="https://github.com/sitespeedio">Github</a></li>
<li><a href="https://github.com/sitespeedio">GitHub</a></li>
</div>
<div class="col-1-5">
<h3>sitespeed.io</h3>
@@ -38,6 +39,7 @@
<li><a href="https://dashboard.sitespeed.io/">The dashboard</a></li>
<li><a href="{{site.baseurl}}/logo/">Logos</a></li>
<li><a href="{{site.baseurl}}/privacy-policy/">Privacy Policy</a></li>
<li><a href="{{site.baseurl}}/sponsor/">Sponsor</a></li>
</ul>
</div>
<div class="col-1-5">
@@ -52,8 +54,9 @@
</div>
<div class="col-1-1">
<p class="flogo"><a href="https://dashboard.sitespeed.io/">dashboard.sitespeed.io</a> is sponsored by <a href="https://www.digitalocean.com/"><img
src="{{site.baseurl}}/img/digital-ocean.png" class="digitalocean" alt="Digital Ocean Logo"></a></p>
<p class="flogo"><a href="https://dashboard.sitespeed.io/">dashboard.sitespeed.io</a> is sponsored by <a
href="https://www.digitalocean.com/"><img src="{{site.baseurl}}/img/digital-ocean.png"
class="digitalocean" alt="Digital Ocean Logo"></a></p>
<p class="copy">
&copy; Sitespeed.io
{{ site.time | date: '%Y' }}, last updated {{ site.time | date: "%H:%M %d %B %Y" }}


@@ -1,10 +1,10 @@
## Get the latest versions
* * *
* [sitespeed.io](/documentation/sitespeed.io/) {% include version/sitespeed.io.txt %} [[Docker](https://hub.docker.com/r/sitespeedio/sitespeed.io/)/[npm](https://www.npmjs.com/package/sitespeed.io)/[changelog](https://github.com/sitespeedio/sitespeed.io/blob/master/CHANGELOG.md)/[rss](https://github.com/sitespeedio/sitespeed.io/releases.atom)]
* [Browsertime](/documentation/browsertime/) {% include version/browsertime.txt %} [[Docker](https://hub.docker.com/r/sitespeedio/browsertime/)/[npm](https://www.npmjs.com/package/browsertime)/[changelog](https://github.com/sitespeedio/browsertime/blob/master/CHANGELOG.md)/[rss](https://github.com/sitespeedio/browsertime/releases.atom)]
* [Coach](/documentation/coach/) {% include version/coach.txt %} [[Docker](https://hub.docker.com/r/sitespeedio/coach/)/[npm](https://www.npmjs.com/package/webcoach)/[changelog](https://github.com/sitespeedio/coach/blob/master/CHANGELOG.md)/[rss](https://github.com/sitespeedio/coach/releases.atom)]
* [PageXray](/documentation/pagexray/) {% include version/pagexray.txt %} [[npm](https://www.npmjs.com/package/pagexray)/[changelog](https://github.com/sitespeedio/pagexray/blob/master/CHANGELOG.md)/[rss](https://github.com/sitespeedio/pagexray/releases.atom)]
* [Compare](https://compare.sitespeed.io/) {% include version/compare.txt %} [[npm](https://www.npmjs.com/package/@sitespeed.io/compare)/[changelog](https://github.com/sitespeedio/compare/blob/master/CHANGELOG.md)/[rss](https://github.com/sitespeedio/compare/releases.atom)]
* [Throttle](/documentation/throttle/) {% include version/throttle.txt %} [[npm](https://www.npmjs.com/package/@sitespeed.io/throttle)/[changelog](https://github.com/sitespeedio/throttle/blob/master/CHANGELOG.md)/[rss](https://github.com/sitespeedio/throttle/releases.atom)]
* [Chrome-HAR](/documentation/chrome-har/) {% include version/chrome-har.txt %} [[npm](https://www.npmjs.com/package/chrome-har)/[changelog](https://github.com/sitespeedio/chrome-har/blob/master/CHANGELOG.md)/[rss](https://github.com/sitespeedio/chrome-har/releases.atom)]
* [sitespeed.io](/documentation/sitespeed.io/) {% include version/sitespeed.io.txt %} [[Docker](https://hub.docker.com/r/sitespeedio/sitespeed.io/)/[npm](https://www.npmjs.com/package/sitespeed.io)/[changelog](https://github.com/sitespeedio/sitespeed.io/blob/master/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/sitespeed.io/releases.atom)]
* [Browsertime](/documentation/browsertime/) {% include version/browsertime.txt %} [[Docker](https://hub.docker.com/r/sitespeedio/browsertime/)/[npm](https://www.npmjs.com/package/browsertime)/[changelog](https://github.com/sitespeedio/browsertime/blob/master/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/browsertime/releases.atom)]
* [Coach](/documentation/coach/) {% include version/coach.txt %} [[Docker](https://hub.docker.com/r/sitespeedio/coach/)/[npm](https://www.npmjs.com/package/webcoach)/[changelog](https://github.com/sitespeedio/coach/blob/master/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/coach/releases.atom)]
* [PageXray](/documentation/pagexray/) {% include version/pagexray.txt %} [[npm](https://www.npmjs.com/package/pagexray)/[changelog](https://github.com/sitespeedio/pagexray/blob/master/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/pagexray/releases.atom)]
* [Compare](https://compare.sitespeed.io/) {% include version/compare.txt %} [[npm](https://www.npmjs.com/package/@sitespeed.io/compare)/[changelog](https://github.com/sitespeedio/compare/blob/master/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/compare/releases.atom)]
* [Throttle](/documentation/throttle/) {% include version/throttle.txt %} [[npm](https://www.npmjs.com/package/@sitespeed.io/throttle)/[changelog](https://github.com/sitespeedio/throttle/blob/master/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/throttle/releases.atom)]
* [Chrome-HAR](/documentation/chrome-har/) {% include version/chrome-har.txt %} [[npm](https://www.npmjs.com/package/chrome-har)/[changelog](https://github.com/sitespeedio/chrome-har/blob/master/CHANGELOG.md)/[RSS](https://github.com/sitespeedio/chrome-har/releases.atom)]


@@ -2,6 +2,6 @@
* * *
[<img src="{{site.baseurl}}/img/leaderboard.png" class="pull-left img-big" alt="Performance leaderboard" width="200" height="141">]({{site.baseurl}}/documentation/sitespeed.io/leaderboard/)
Do you want to compare your performance against other web sites? Use the performance leaderboard! You can check out our [example dashboard](https://dashboard.sitespeed.io/dashboard/db/leaderboard) or go directly to the [documention]({{site.baseurl}}/documentation/sitespeed.io/leaderboard/).
Do you want to compare your performance against other web sites? Use the performance leaderboard! You can check out our [example dashboard](https://dashboard.sitespeed.io/dashboard/db/leaderboard) or go directly to the [documentation]({{site.baseurl}}/documentation/sitespeed.io/leaderboard/).
You can compare performance timings, how the page is built, how much CPU the page is using and many many more things. And the leaderboard is also configurable through Grafana, so you can add the metrics that are important to you!


@@ -42,7 +42,9 @@ layout: compress
<link rel="shortcut icon" href="{{site.baseurl}}/img/ico/sitespeed.io.ico">
<link type="application/atom+xml" href="https://www.sitespeed.io/feed/index.xml" rel="alternate" />
<style>{% include css/default.css %}</style>
<style>
{% include css/default.css %}
</style>
<link rel="stylesheet" href="{{ "/css/prism-1.15.css" | prepend: site.baseurl }}">
<script type="text/javascript">{% include userTimings.js %}</script>
<script src="{{ "/js/clipboard-2.0.4.min.js" | prepend: site.baseurl }}" defer></script>
@ -51,7 +53,7 @@ layout: compress
<body>
{% include header.html %}
<div class="white">
<div class="grid">
<div class="col-1-1">
@ -62,8 +64,10 @@ layout: compress
<section>
<div class="col-1-1">
<div class="edit">
<a href="{{ page.path | prepend: "https://github.com/sitespeedio/sitespeed.io/edit/master/docs/" }}">Edit on
Github</a>
<a
href="{{ page.path | prepend: "https://github.com/sitespeedio/sitespeed.io/edit/master/docs/" }}">Edit
on
GitHub</a>
</div>
</div>
</section>
@ -78,4 +82,4 @@ layout: compress
{% include youtube.js %}
</body>
</html>
</html>

View File

@ -30,9 +30,9 @@ docker run --privileged --shm-size=1g --network=cable --rm sitespeedio/sitespeed
You have more examples [here]({{site.baseurl}}/documentation/sitespeed.io/browsers/#change-connectivity) and we would love feedback and PRs on how to do the same on platforms not supporting tc.
## Get that timeline
You can now turn on the trace log for Chrome when you analyze a page. The trace log will be saved to disk and you can drag and drop it into the Timeline in Chrome. This also works if you run Chrome on your Android phone. We also added support for doing the same with WebPageTest (you could turn on Timeline before but we didn't automatically fetch it).
You can now turn on the trace log for Chrome when you analyse a page. The trace log will be saved to disk and you can drag and drop it into the Timeline in Chrome. This also works if you run Chrome on your Android phone. We also added support for doing the same with WebPageTest (you could turn on Timeline before but we didn't automatically fetch it).
As an extra bonus, there's a Chrome trace message that is passed inside sitespeed.io when the trace is collected so your plugin can collect it and analyze the data. Look out from *browsertime.chrometrace* and *webpagetest.chrometrace* messages to pickup the trace. We are looking forward to the first plugin that will use it :)
As an extra bonus, there's a Chrome trace message that is passed inside sitespeed.io when the trace is collected so your plugin can collect it and analyse the data. Look out for the *browsertime.chrometrace* and *webpagetest.chrometrace* messages to pick up the trace. We are looking forward to the first plugin that will use it :)
Turn on the log with <code>--browsertime.chrome.dumpTraceCategoriesLog</code>, unpack the file and drop it in your timeline in dev-tools in Chrome.

View File

@ -25,7 +25,7 @@ We are a [three member team]({{site.baseurl}}/aboutus/), with more PRs (but we w
### Workflow
Let us show exactly what happens when we push code:
1. We commit our code (or merge your PR) to [Github](https://github.com/sitespeedio/sitespeed.io).
1. We commit our code (or merge your PR) to [GitHub](https://github.com/sitespeedio/sitespeed.io).
2. [Travis-CI](https://travis-ci.org/sitespeedio/sitespeed.io) runs a couple of unit tests and a couple of full integration test where we run sitespeed.io from the command line, testing a couple of sites in Chrome/Firefox, and tests our WebPageTest integration. You can find our Travis configuration [here](https://github.com/sitespeedio/sitespeed.io/blob/master/.travis.yml).
3. The commit also builds a new [Docker container at the Docker Hub](https://hub.docker.com/r/sitespeedio/sitespeed.io-autobuild/). Remember: This is not the same image as you use when you run sitespeed.io in production, this one contains the latest and greatest commits.
4. We have a test server on Digital Ocean that runs the latest Docker container (it auto updates when there's a new version of the container). When the next test runs, it will use that latest version. When the test runs, it will upload the HTML to S3 and send the metrics to our Graphite instance.
@ -80,7 +80,7 @@ We constantly trying to improve our releases process and making it as safe as po
* It would be cool if we could check the logs on Travis and if we get an error in the log, just break the build. Today we only break the build when sitespeed.io returns an error code.
If you have ideas on how we can test better, please [create an issue at Github](https://github.com/sitespeedio/sitespeed.io/issues/new) or send us a [tweet](https://twitter.com/sitespeedio)!
If you have ideas on how we can test better, please [create an issue at GitHub](https://github.com/sitespeedio/sitespeed.io/issues/new) or send us a [tweet](https://twitter.com/sitespeedio)!
/Peter

View File

@ -13,7 +13,7 @@ It was almost 6 months ago when we released 4.0 and to get that out was a lot of
But first let's check what we have added in the last months:
* Video with SpeedIndex/firstVisualChange/lastVisualChange and VisualComplete 85%. This is real SpeedIndex where we record a video of the screen and use [VisualMetrics](https://github.com/WPO-Foundation/visualmetrics/) to analyze and get the metrics.
* Video with SpeedIndex/firstVisualChange/lastVisualChange and VisualComplete 85%. This is real SpeedIndex where we record a video of the screen and use [VisualMetrics](https://github.com/WPO-Foundation/visualmetrics/) to analyse and get the metrics.
* Upload the [HTML result to Amazon S3](https://results.sitespeed.io/en.wikipedia.org/2017-04-10-06-00-04/pages/en.wikipedia.org/wiki/Barack_Obama/).
* A better way to [set connectivity using Docker networks]({{site.baseurl}}/documentation/sitespeed.io/browsers/#change-connectivity).
* Cleaner default Grafana dashboard with links to the HTML results.
@ -23,7 +23,7 @@ But first lets check what we have added in the last months:
Before we go on about the new things in 5.0 we wanna tell you about the status of the project:
We have had more than 500,000 downloads of sitespeed.io (611k + the ones we had before we moved to NodeJS and Docker)! We have a lot more things we want to add and we need your help more than ever!
We have a Slack channel for developers [that you should join](https://sitespeedio.herokuapp.com/)! This is the place you can get help with building plugins or contribute back to sitespeed.io. If you have questions on how to run sitespeed.io, please use [Github issues](https://github.com/sitespeedio/sitespeed.io/issues/new).
We have a Slack channel for developers [that you should join](https://sitespeedio.herokuapp.com/)! This is the place you can get help with building plugins or contribute back to sitespeed.io. If you have questions on how to run sitespeed.io, please use [GitHub issues](https://github.com/sitespeedio/sitespeed.io/issues/new).
Between the latest 4.7 and now 5.0 we have focused on getting the HTML mean and clean. Let's check out the changes in 5.0.

View File

@ -16,7 +16,7 @@ In this release we upgraded some of the core 3rd party software we use in Browse
* Upgraded to [Geckodriver 0.17.0](https://github.com/mozilla/geckodriver/releases/tag/v0.17.0) that seems to fix the problem loading very small pages [#321](https://github.com/sitespeedio/browsertime/issues/321).
* Upgraded to [Chromedriver 2.30](https://chromedriver.storage.googleapis.com/2.30/notes.txt) with a very special hack to fix [#347](https://github.com/sitespeedio/browsertime/pull/347).
* Upgraded to [ChromeDriver 2.30](https://chromedriver.storage.googleapis.com/2.30/notes.txt) with a very special hack to fix [#347](https://github.com/sitespeedio/browsertime/pull/347).
* Updated the Docker container to Firefox 54 and Chrome 60 (beta) to fix the background color problems we've seen for a while: Loading a URL in emulated mode changed the background color to grey before first visual change. And on desktop the background color was changed to the first color of the page (in our case that made the background color orange appearing before first visual change). [Checkout the original Chrome bug #727046](https://bugs.chromium.org/p/chromium/issues/detail?id=727046).

View File

@ -74,7 +74,7 @@ You can run like this:
docker run --cap-add=NET_ADMIN --shm-size=1g --rm -v "$(pwd)":/browsertime -e REPLAY=true -e LATENCY=100 sitespeedio/browsertime https://en.wikipedia.org/wiki/Barack_Obama
</code>
Here are a couple of examples from our real world tests. We test on Digital Ocean Optimized Droplets 4 gb memory with 2 vCPUs. We test both with connectivity set to cable (to try to minimize the impact of flakey internet) and one tests using WebPageReplay. We tests with the same amount of runs on the same machine.
Here are a couple of examples from our real world tests. We test on Digital Ocean Optimized Droplets with 4 GB of memory and 2 vCPUs. We test both with connectivity set to cable (to try to minimize the impact of flaky internet) and one test using WebPageReplay. We test with the same number of runs on the same machine.
Here's an example from one of the sites we test. Here we test with connectivity set to cable.
![Connectivity example 1]({{site.baseurl}}/img/bt-3.0/connectivity-example-1.png)
@ -189,7 +189,7 @@ We got some breaking changes, please read about them before you upgrade.
* You can now choose what kind of response bodies you want to store in your HAR file. Instead of using --firefox.includeResponseBodies to include all bodies you can now use <code>--firefox.includeResponseBodies</code> [none,all,html] [#518](https://github.com/sitespeedio/browsertime/pull/518).
* We cleaned up how you collect trace logs from Chrome. If you want the devtools.timeline log (and CPU spent metrics), just use <code>--chrome.timeline</code>. If you want to configure trace categories yourself, use <code>--chrome.traceCategories</code>.
* File names are now based on 1 and not 0 so the first file from the first iteration is named something with -1. [#536](https://github.com/sitespeedio/browsertime/pull/536).
* Store the Chromedriver log in the result directory (before it was stored where you run Browsertime) [#452](https://github.com/sitespeedio/browsertime/pull/452).
* Store the ChromeDriver log in the result directory (before it was stored where you run Browsertime) [#452](https://github.com/sitespeedio/browsertime/pull/452).
* In some cases we leaked Bluebird promises, they are now native promises.
* Running the engine took a promise that eventually became the scripts. Now you need to run with the scripts directly (no promises) to simplify the flow.
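The cleaned-up flags in the list above could be used like this (a sketch, not from the release notes; the URL is an example and the exact flag values should be checked against the CLI help):

```shell
# Store only HTML response bodies in the HAR file:
browsertime --firefox.includeResponseBodies html https://www.example.org/

# Collect the devtools.timeline log (and CPU spent metrics) from Chrome:
browsertime -b chrome --chrome.timeline https://www.example.org/
```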

View File

@ -71,7 +71,7 @@ You can run like this:
docker run --cap-add=NET_ADMIN --shm-size=1g --rm -v "$(pwd)":/sitespeed.io -e REPLAY=true -e LATENCY=100 sitespeedio/sitespeed.io https://en.wikipedia.org/wiki/Barack_Obama
</code>
Here are a couple of examples from our real world tests. We test on Digital Ocean Optimized Droplets 4 gb memory with 2 vCPUs. We test both with connectivity set to cable (to try to minimize the impact of flakey internet) and one tests using WebPageReplay. We tests with the same amount of runs on the same machine.
Here are a couple of examples from our real world tests. We test on Digital Ocean Optimized Droplets with 4 GB of memory and 2 vCPUs. We test both with connectivity set to cable (to try to minimize the impact of flaky internet) and one test using WebPageReplay. We test with the same number of runs on the same machine.
Here's an example from one of the sites we test. Here we test with connectivity set to cable.
![Connectivity example 1]({{site.baseurl}}/img/bt-3.0/connectivity-example-1.png)

View File

@ -155,7 +155,7 @@ We have updated our dashboards to include new metrics like Privacy score from th
![Compare to last week]({{site.baseurl}}/img/new-dashboard-8.0.jpg)
{: .img-thumbnail}
You can get the [updated dashboards from Github](https://github.com/sitespeedio/grafana-bootstrap-docker/tree/master/dashboards/graphite) and check them out at [dashboard.sitespeed.io](https://dashboard.sitespeed.io/d/000000044/page-timing-metrics?orgId=1).
You can get the [updated dashboards from GitHub](https://github.com/sitespeedio/grafana-bootstrap-docker/tree/master/dashboards/graphite) and check them out at [dashboard.sitespeed.io](https://dashboard.sitespeed.io/d/000000044/page-timing-metrics?orgId=1).
## New budget configuration
One problem before 8.0 was that it was really hard to configure a performance budget: You needed to use the internal data structure and that sucks. Looking at other tools we could see that configuring a budget is usually hard. That's why we are introducing a new way in 8.0 (if you were using the old configuration pre 8.0, don't worry, that will continue to work).

View File

@ -44,11 +44,11 @@ You will find server timings in the Metrics tab.
## New Chrome and Firefox
The Docker container now uses Chrome 72, Firefox 65 and latest Chromedriver/Geckodriver. It's good to stay updated to the latest versions since your users is auto updating the browser. You want to be on top of that and use the same version.
The Docker container now uses Chrome 72, Firefox 65 and latest ChromeDriver/GeckoDriver. It's good to stay updated to the latest versions since your users are auto-updating their browsers. You want to be on top of that and use the same version.
## New command: run JavaScript and wait for page complete
There's a new [command](/documentation/sitespeed.io/scripting/#commmands) ```js.runAndWait('')``` that makes it possible to run your own JavaScript, click a link and wait on page navigation. This is super handy if you want to navigate using JavaScript.
There's a new [command](/documentation/sitespeed.io/scripting/#commands) ```js.runAndWait('')``` that makes it possible to run your own JavaScript, click a link and wait on page navigation. This is super handy if you want to navigate using JavaScript.
## Example: Measure shopping/checkout process
I want to highlight that we have an [example section](/documentation/sitespeed.io/scripting/#examples) in the documentation on how you can use the new scripting introduced in 8.0.

View File

@ -15,7 +15,7 @@ When we released 8.0 we pushed the most wanted feature: scripting that makes it
The first release didn't have any fancy error handling, but now you have more alternatives.
You can try/catch failing commands that throw errors. If an error is not catched in your script, it will be catched in sitespeed.io and the error will be logged and reported in the HTML and to your data storage (Graphite/InfluxDb) under the key *browsertime.statistics.errors*.
You can try/catch failing commands that throw errors. If an error is not caught in your script, it will be caught in sitespeed.io and the error will be logged and reported in the HTML and to your data storage (Graphite/InfluxDb) under the key *browsertime.statistics.errors*.
If you do catch the error, you should make sure you report it yourself with the [error command](#error), so you can see that in the HTML report. This is needed for all errors except navigating/measuring a URL. They will automatically be reported (since they are always important).
@ -58,7 +58,7 @@ Remember that when you start your first script, the cache is already cleared and
## Meta data
You can add meta data to your script. The extra data will be visibile in the HTML result page.
You can add meta data to your script. The extra data will be visible in the HTML result page.
Setting meta data like this:

View File

@ -0,0 +1,79 @@
---
layout: default
title: sitespeed.io 11.0
description: Better configurable HTML output, the new Contentful Speed Index metric, Firefox Window recorder and finally no root in Docker.
authorimage: /img/aboutus/peter.jpg
intro: Better configurable HTML output, the new Contentful Speed Index metric, Firefox Window recorder and finally no root in Docker.
keywords: sitespeed.io, browsertime, webperf
nav: blog
---
# sitespeed.io 11.0
We have just shipped Browsertime 7.0 and sitespeed.io 11.0 with some great contributions from outside of the core team!
A lot of love and extra thanks to:
* [Mason Malone](https://github.com/MasonM) - Mason fixed the long annoying problem where, when you are running your test on Linux, the result files are stored as the root user. Mason's fix instead picks up the owner of the result directory and uses that owner. Clever!
* [Thapasya Murali](https://github.com/thapasya-m) - Thapasya has made it possible to configure the summary boxes (on the start result HTML page) and the columns of the pages page. This makes it possible for you to choose which metrics you want to see on those pages!
* [Denis Palmeiro](https://github.com/dpalmeiro) - Denis added the new metric Contentful Speed Index and the new Firefox window recorder!
Let us go through the most important changes in the new release:
- [No root in Docker](#no-root-in-docker)
- [Configurable result HTML](#configurable-result-html)
- [Contentful Speed Index](#contentful-speed-index)
- [Firefox Window recorder](#firefox-window-recorder)
- [Small variance in connectivity](#small-variance-in-connectivity)
- [Other fixes](#other-fixes)
- [Sponsor sitespeed.io](#sponsor-sitespeedio)
## No root in Docker
Nothing more to say about this, we should have fixed it a long time ago. Thank you again Mason for the fix!
## Configurable result HTML
You can now configure which metrics to see in the columns for all pages:
![Page columns]({{site.baseurl}}/img/pagecolumns.png)
{: .img-thumbnail}
And you can also choose which metrics to see in the summary boxes:
![Summary boxes]({{site.baseurl}}/img/summary-boxes.png)
{: .img-thumbnail}
We have [a new documentation page](/documentation/sitespeed.io/configure-html/) that shows you how to do it!
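A hedged sketch of what the configuration could look like on the command line (the option names `--html.summaryBoxes` and `--html.pageSummaryMetrics` and the metric names are assumptions; check the documentation page for the exact names):

```shell
# Sketch: pick the metrics for the page columns and the summary boxes.
# Option and metric names are assumptions, not from the release notes.
sitespeed.io --html.pageSummaryMetrics transferSize \
  --html.pageSummaryMetrics firstVisualChange \
  --html.summaryBoxes score.performance \
  https://www.example.org/
```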
## Contentful Speed Index
Contentful Speed Index is a new Speed Index metric developed by Bas Schouten at Mozilla, which uses edge detection to calculate the amount of "content" that is visible on each frame. It was primarily designed for two main purposes:
* Have a good metric to measure the amount of text that is visible.
* Design a metric that is not easily fooled by the pop-up splash/login screens that commonly occur at the end of a page load. These can often disturb the Speed Index numbers, since the last frame that is used as a reference is not accurate.
## Firefox Window recorder
Denis also added the new Firefox built-in window recorder ([bug 1536174](https://bugzilla.mozilla.org/show_bug.cgi?id=1536174)) that is able to dump PNG images of each frame that is painted to the window. The PR introduces a new privileged API that is able to execute JS in the chrome context, as well as support for generating a variable rate MP4 using the output images from the window recorder. The motivation for this work was to introduce a low-overhead video recorder that will not introduce performance disturbances during page loads. The idea is that using it will add less overhead than using FFMpeg for the video.
You use this with `--video --firefox.windowRecorder`.
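For example, a Docker run could look like this (a sketch; the URL is an example and the flags are the ones named above):

```shell
# Record video with the Firefox window recorder instead of FFMpeg
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io \
  -b firefox --video --firefox.windowRecorder https://www.example.org/
```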
## Small variance in connectivity
There's a new way to set variance on your connectivity. At the moment you can only do that when you are using Throttle as the engine. You can try it out with `--connectivity.variance 2` - that means the latency will have a variance of 2% between runs. The original idea comes from Emery Berger.
We haven't studied this yet. The idea is that by introducing a small variance, you will be able to spot if you hit a threshold that has a big impact on the performance of your page. We will get back later with more information when we have had it running for a while.
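A sketch of trying it out (the engine/profile flags are assumptions based on the existing connectivity options; the URL is an example):

```shell
# Use Throttle as the connectivity engine with a 2% latency variance between runs.
# Engine/profile option values are assumptions; check the connectivity docs.
sitespeed.io --connectivity.engine throttle --connectivity.profile cable \
  --connectivity.variance 2 https://www.example.org/
```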
## Other fixes
We have done two important bug fixes:
* Fixed so that you can disable video/visual metrics in your configuration JSON in Docker as reported in [#2692](https://github.com/sitespeedio/sitespeed.io/issues/2692) fixed by PR [#2715](https://github.com/sitespeedio/sitespeed.io/pull/2715).
* Fixed so that running AXE when testing multiple URLs works in scripting (reported in [#2754](https://github.com/sitespeedio/sitespeed.io/issues/2754)). Fixed in [#2755](https://github.com/sitespeedio/sitespeed.io/pull/2755).
Read about all the changes in [the changelog](https://github.com/sitespeedio/sitespeed.io/blob/master/CHANGELOG.md)!
## Sponsor sitespeed.io
We are also launching [GitHub sponsorship](https://github.com/sitespeedio/sitespeed.io/blob/master/CHANGELOG.md) with this release! [Read more about sponsoring sitespeed.io](/sponsor/).
/Peter

View File

@ -9,7 +9,7 @@ timeouts
chrome
--chrome.args Extra command line arguments to pass to the Chrome process (e.g. --no-sandbox). To add multiple arguments to Chrome, repeat --chrome.args once per argument.
--chrome.binaryPath Path to custom Chrome binary (e.g. Chrome Canary). On OS X, the path should be to the binary inside the app bundle, e.g. "/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary"
--chrome.chromedriverPath Path to custom Chromedriver binary. Make sure to use a Chromedriver version that's compatible with the version of Chrome you're using
--chrome.chromedriverPath Path to custom ChromeDriver binary. Make sure to use a ChromeDriver version that's compatible with the version of Chrome you're using
--chrome.mobileEmulation.deviceName Name of device to emulate. Works only standalone (see list in Chrome DevTools, but add phone like 'iPhone 6'). This will override your userAgent string.
--chrome.mobileEmulation.width Width in pixels of emulated mobile screen (e.g. 360) [number]
--chrome.mobileEmulation.height Height in pixels of emulated mobile screen (e.g. 640) [number]
@ -49,7 +49,7 @@ video
--videoParams.crf Constant rate factor see https://trac.ffmpeg.org/wiki/Encode/H.264#crf [default: 23]
--videoParams.addTimer Add timer and metrics to the video. [boolean] [default: true]
--videoParams.debug Turn on debug to record a video with all pre/post and scripts/URLS you test in one iteration. Visual Metrics will then automatically be disabled. [boolean] [default: false]
--videoParams.keepOriginalVideo Keep the original video. Use it when you have a Visual Metrics bug and creates an issue at Github [boolean] [default: false]
--videoParams.keepOriginalVideo Keep the original video. Use it when you have a Visual Metrics bug and create an issue at GitHub [boolean] [default: false]
--videoParams.filmstripFullSize Keep original sized screenshots. Will make the run take longer time [boolean] [default: false]
--videoParams.filmstripQuality The quality of the filmstrip screenshots. 0-100. [default: 75]
--videoParams.createFilmstrip Create filmstrip screenshots. [boolean] [default: true]
@ -106,8 +106,8 @@ Options:
--decimals The decimal points browsertime statistics round to. [number] [default: 0]
--cacheClearRaw Use internal browser functionality to clear browser cache between runs instead of only using Selenium. [boolean] [default: false]
--basicAuth Use it if your server is behind Basic Auth. Format: username@password (Only Chrome and Firefox at the moment).
--preScript Selenium script(s) to run before you test your URL/script. They will run outside of the analyze phase. Note that --preScript can be passed multiple times.
--postScript Selenium script(s) to run after you test your URL. They will run outside of the analyze phase. Note that --postScript can be passed multiple times.
--preScript Selenium script(s) to run before you test your URL/script. They will run outside of the analyse phase. Note that --preScript can be passed multiple times.
--postScript Selenium script(s) to run after you test your URL. They will run outside of the analyse phase. Note that --postScript can be passed multiple times.
--script Add custom Javascript to run after the page has finished loading to collect metrics. If a single js file is specified, it will be included in the category named "custom" in the output json. Pass a folder to include all .js scripts in the folder, and have the folder name be the category. Note that --script can be passed multiple times.
--userAgent Override user agent
--silent, -q Only output info in the logs, not to the console. Enter twice to suppress summary line. [count]
@ -121,7 +121,7 @@ Options:
--useSameDir Store all files in the same structure and do not use the path structure released in 4.0. Use this only if you are testing ONE URL.
--xvfb Start xvfb before the browser is started [boolean] [default: false]
--xvfbParams.display The display used for xvfb [default: 99]
--preURL A URL that will be accessed first by the browser before the URL that you wanna analyze. Use it to fill the cache.
--preURL A URL that will be accessed first by the browser before the URL that you wanna analyse. Use it to fill the cache.
--preURLDelay Delay between preURL and the URL you want to test (in milliseconds) [default: 1500]
--userTimingWhitelist All userTimings are captured by default this option takes a regex that will whitelist which userTimings to capture in the results.
--headless Run the browser in headless mode. Works for Firefox and Chrome. [boolean] [default: false]

View File

@ -63,7 +63,7 @@ You can throttle the connection to make the connectivity slower to make it easie
## Test on your mobile device
Browsertime supports Chrome on Android: Collecting SpeedIndex, HAR and video! This is still really new, let us know if you find any bugs.
You need to [install adb](https://www.sitespeed.io/documentation/sitespeed.io/mobile-phones/#desktop) and [prepare your phone](https://www.sitespeed.io/documentation/sitespeed.io/mobile-phones/#on-your-phone) before you start.
You need to [install ADB](https://www.sitespeed.io/documentation/sitespeed.io/mobile-phones/#desktop) and [prepare your phone](https://www.sitespeed.io/documentation/sitespeed.io/mobile-phones/#on-your-phone) before you start.
The current version doesn't support Docker so you need to [install the requirements](https://github.com/sitespeedio/docker-visualmetrics-deps/blob/master/Dockerfile) for VisualMetrics yourself on your machine before you start.
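Once ADB and the VisualMetrics requirements are in place, a run could look like this (a sketch; the `--android` flag is assumed from the Browsertime mobile documentation):

```shell
# Test Chrome on a connected Android phone (requires ADB and a prepared phone)
browsertime --android https://en.wikipedia.org/wiki/Barack_Obama
```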

View File

@ -1,6 +1,6 @@
---
layout: default
title: Documentation Browsertime 6
title: Documentation Browsertime 7
description: Read about all you can do with Browsertime.
keywords: tools, documentation, web performance
nav: documentation
@ -9,7 +9,7 @@ image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
twitterdescription: Documentation for Browsertime.
---
# Documentation v6
# Documentation v7
<img src="{{site.baseurl}}/img/logos/browsertime.png" class="pull-right img-big" alt="Browsertime logo" width="200" height="175">

View File

@ -36,7 +36,7 @@ It is usually used for two different things:
To understand how Browsertime does these things, let's talk about how it works. Here's an example of what happens when you give Browsertime a URL to test:
1. You give your configuration to Browsertime.
2. Browsertime uses the [WebDriver](https://www.w3.org/TR/webdriver/) (through [Selenium](http://seleniumhq.github.io/selenium/docs/api/javascript/index.html)) to start Firefox and Chrome (the implementations for the Webdriver is [Chromedriver](https://sites.google.com/a/chromium.org/chromedriver/)/[Geckodriver](https://github.com/mozilla/geckodriver/)).
2. Browsertime uses the [WebDriver](https://www.w3.org/TR/webdriver/) (through [Selenium](http://seleniumhq.github.io/selenium/docs/api/javascript/index.html)) to start Firefox and Chrome (the WebDriver implementations are [ChromeDriver](https://sites.google.com/a/chromium.org/chromedriver/) and [GeckoDriver](https://github.com/mozilla/geckodriver/)).
3. Browsertime starts FFMPEG to record a video of the browser screen
4. The browser accesses the URL.
5. When the page is finished loading (you can define yourself when that happens), Browsertime executes the default JavaScript timing metrics and collects:

View File

@ -56,9 +56,9 @@ Does it look familiar? Yep, it is almost the same structure as a YSlow rule :)
### DOM vs HAR advice
The coach analyze a page in two steps: First it executes Javascript in the browser to do checks that are a perfect fit for Javascript: examine the rendering path, check if images are scaled in the browser and more.
The coach analyses a page in two steps: first it executes JavaScript in the browser to do checks that are a perfect fit for JavaScript: examine the rendering path, check if images are scaled in the browser and more.
Then the coach take the HAR file generated from the page and analyze that too. The HAR is good if you want the number of responses, response size and check cache headers.
Then the coach takes the HAR file generated from the page and analyses that too. The HAR is good if you want the number of responses, response size and check cache headers.
In the last step the coach merges the advice into one advice list and creates an overall score.
@ -192,7 +192,7 @@ Right now all these tests run in https://github.com/sitespeedio/coach/blob/maste
Each test case runs against a specific HTML page located in `test/http-server`. Create a suitable HTML page with the structure you want to test. Create the test case in `test/dom` or `test/har` and run it with <code>npm test</code>
## Test your changes against a web page
The coach uses Browsertime as runner for browsers. When you finished with a change, make sure to build a new version of the combined Javascript and then test against a url.
The coach uses Browsertime as the runner for browsers. When you are finished with a change, make sure to build a new version of the combined JavaScript and then test against a URL.
```
npm run combine

View File

@ -71,7 +71,7 @@ This will get you the full JSON, the same as if you integrate the coach into you
### Bookmarklet
We also produce a bookmarklet. The bookmarklet only uses advice that you can run inside the browser (it doesn't have HAR file to analyze even though maybe possible in the future with the Resource Timing API).
We also produce a bookmarklet. The bookmarklet only uses advice that you can run inside the browser (it doesn't have a HAR file to analyse, even though that may be possible in the future with the Resource Timing API).
The bookmarklet is really rough right now and logs the info to the browser console. Help us make a cool front-end :)
@ -84,7 +84,7 @@ grunt bookmarklet
and then you will find it in the dist folder.
### Include in your own tool
The coach uses Browsertime to start the browser, execute the Javascript and fetch the HAR file. You can use that functionality too inside your tool or you can use the raw scripts if you have your own browser implementation.
The coach uses Browsertime to start the browser, execute the JavaScript and fetch the HAR file. You can use that functionality too inside your tool or you can use the raw scripts if you have your own browser implementation.
#### Use built in browser support
@ -106,7 +106,7 @@ const result = api.run(url, domScript, harScript, options);
#### Use the scripts
Say that your tool runs on Windows, you start the browsers yourself and you generate your own HAR file. Create your own wrapper to get the coach to help you.
First you need the Javascript advice, you can get the raw script either by generating it yourself or through the API.
First you need the JavaScript advice, you can get the raw script either by generating it yourself or through the API.
Generate the script
@ -157,8 +157,8 @@ The coach will give you advice on how to make your page better. You will also ge
The coach tests your site in two steps:
* Executes Javascript in your browser and check for performance, accessibility, best practice and collect general info about your page.
* Analyze the [HAR file](http://www.softwareishard.com/blog/har-12-spec/) for your page together with relevant info from the DOM process.
* Executes JavaScript in your browser and checks performance, accessibility and best practice, and collects general info about your page.
* Analyses the [HAR file](http://www.softwareishard.com/blog/har-12-spec/) for your page together with relevant info from the DOM process.
You can run the different steps standalone but for the best result run them together.

View File

@ -108,8 +108,8 @@ Or you can use it like this: https://compare.sitespeed.io/?config=https://URL_TO
Make sure that your server has correct [CORS settings](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) so that compare.sitespeed.io can get the HAR file.
{: .note .note-warning}
### Github gist
You can also use host your configuration file on a [Github gist](https://gist.github.com/) and use the gist id https://compare.sitespeed.io?gist=GIST_ID to get the configuration file.
### GitHub gist
You can also host your configuration file in a [GitHub gist](https://gist.github.com/) and use the gist id https://compare.sitespeed.io?gist=GIST_ID to get the configuration file.
You can check out our example:
[https://gist.github.com/soulgalore/94e4d997a78e03b32b939fcea63eae8e](https://gist.github.com/soulgalore/94e4d997a78e03b32b939fcea63eae8e)

View File

@ -1,7 +1,7 @@
---
layout: default
title: Documentation for all sitespeed.io tools.
description: Here's the documentation of how to use all the sitespeed.io tools. Use latest LTS release 10.x of NodeJs or Docker containers to get them up and running.
description: Here's the documentation of how to use all the sitespeed.io tools. Use latest LTS release 12.x of NodeJS or Docker containers to get them up and running.
keywords: tools, documentation, web performance, version, nodejs.
nav: documentation
image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
@ -9,7 +9,7 @@ twitterdescription: Documentation for the sitespeed.io.
---
# Documentation
Use Docker or the latest LTS release (10.x) of NodeJS to run the sitespeed.io tools.
Use Docker or the latest LTS release (12.x) of NodeJS to run the sitespeed.io tools.
* [sitespeed.io]({{site.baseurl}}/documentation/sitespeed.io/) - continuously monitor your web site's web performance (including the Coach, Browsertime, PageXray and the rest).
* [Coach]({{site.baseurl}}/documentation/coach/) - get help from the Coach on how you can make your web page faster.

View File

@ -162,4 +162,4 @@ You can do the same with all the metrics you want. On mobile Wikipedia metrics i
![First visual change]({{site.baseurl}}/img/alerts/first-visual-change2.png)
{: .img-thumbnail}
If you have any questions about the alerts, feel free to [create an issue at Github](https://github.com/sitespeedio/sitespeed.io/issues/new?title=Alerts) or hit us on [Slack](https://sitespeedio.herokuapp.com).
If you have any questions about the alerts, feel free to [create an issue at GitHub](https://github.com/sitespeedio/sitespeed.io/issues/new?title=Alerts) or hit us on [Slack](https://sitespeedio.herokuapp.com).

View File

@ -1,7 +1,7 @@
---
layout: default
title: F.A.Q. and best practice using sitespeed.io
description: Here we keep questions that gets asked on our Slack channel or frequently on Github.
description: Here we keep questions that get asked in our Slack channel or frequently on GitHub.
keywords: best practice, faq
nav: documentation
category: sitespeed.io
@ -16,7 +16,7 @@ twitterdescription:
* Lets place the TOC here
{:toc}
Here we keep questions that are frequently asked at [Slack](https://sitespeedio.herokuapp.com/) or at [Github](https://github.com/sitespeedio/sitespeed.io/issues/new).
Here we keep questions that are frequently asked at [Slack](https://sitespeedio.herokuapp.com/) or at [GitHub](https://github.com/sitespeedio/sitespeed.io/issues/new).
## Running tests
Read this before you start to collect metrics.
@ -77,7 +77,7 @@ Checkout the [scripting capabilities](../scripting/) that makes it easy to test
We currently don't have built-in support for changing the CPU. What we do know is that you should not use the built-in support in Chrome or try to simulate slow CPUs by running on a slow AWS instance. What you should do is what the WPTAgent does. You can check the code at [https://github.com/WPO-Foundation/wptagent/blob/master/wptagent.py](https://github.com/WPO-Foundation/wptagent/blob/master/wptagent.py) and do the same before you start a run and then remove it after the run.
### Throttle or not throttle your connection?
**PLEASE, YOU NEED TO ALWAYS THROTTLE YOUR CONNECTION!** You should always throttle/limit the connectivity because it will make it easier for you to find regressions. If you don't do it, you can run your tests with different connectivity profiles and regressions/improvements that you see is caused by your servers flakey internet connection. Check out our [connectivity guide]({{site.baseurl}}/documentation/sitespeed.io/connectivity/).
**PLEASE, YOU NEED TO ALWAYS THROTTLE YOUR CONNECTION!** You should always throttle/limit the connectivity because it will make it easier for you to find regressions. If you don't, your tests will run with varying connectivity and the regressions/improvements that you see may be caused by your server's flaky internet connection. Check out our [connectivity guide]({{site.baseurl}}/documentation/sitespeed.io/connectivity/).
### Clear browser cache between runs
By default Browsertime creates a new profile for each iteration you do, meaning the cache is cleared through the WebDriver. If you really want to be sure everything is cleared between runs you can use our WebExtension to clear the browser cache by adding <code>--browsertime.cacheClearRaw</code>.
@ -200,7 +200,7 @@ When you create your buckets at S3 or GCS, you can configure how long time it wi
We've been trying out alerts in Grafana for a while and it works really well for us. Check out the [alert section]({{site.baseurl}}/documentation/sitespeed.io/alerts/) in the docs.
## Difference in metrics between WebPageTest and sitespeed.io
Now and then it pops up an issue on Github where users ask why some metrics differs between WebPageTest and sitespeed.io.
Now and then an issue pops up on GitHub where users ask why some metrics differ between WebPageTest and sitespeed.io.
There are a couple of things that differ between WebPageTest and Browsertime/sitespeed.io, but first I wanna say that it is wrong to compare between tools; it is right to continuously compare within the same tool to find regressions :)

View File

@ -84,7 +84,7 @@ docker run --shm-size 2g --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io
~~~
## Chrome
The latest version of Chrome should work out of the box. Latest version of stable [Chromedriver](http://chromedriver.chromium.org) is bundled in sitespeed.io and needs to match your Chrome version.
The latest version of Chrome should work out of the box. Latest version of stable [ChromeDriver](http://chromedriver.chromium.org) is bundled in sitespeed.io and needs to match your Chrome version.
### Chrome setup
When we start Chrome it is set up with [these](https://github.com/sitespeedio/browsertime/blob/master/lib/chrome/webdriver/chromeOptions.js) command line switches.
@ -114,14 +114,14 @@ You can choose which version of Chrome you want to run by using the ```--chrome.
Our Docker container only contains one version of Chrome; [let us know](https://github.com/sitespeedio/sitespeed.io/issues/new) if you need help adding more versions.
### Use a newer version of Chromedriver
Chromedriver is the driver that handles the communication with Chrome. At the moment the Chromedriver version needs to match the Chrome version. By default sitespeed.io and Browsertime comes with the Chromedriver version that matches the Chrome version in the Docker container. If you wanna run tests on Chrome Beta/Canary you probably need to download a later version of Chromedriver.
### Use a newer version of ChromeDriver
ChromeDriver is the driver that handles the communication with Chrome. At the moment the ChromeDriver version needs to match the Chrome version. By default sitespeed.io and Browsertime comes with the ChromeDriver version that matches the Chrome version in the Docker container. If you wanna run tests on Chrome Beta/Canary you probably need to download a later version of ChromeDriver.
You download Chromedriver from [http://chromedriver.chromium.org](http://chromedriver.chromium.org) and then use ```--chrome.chromedriverPath``` to set the path to the new version of the Chromedriver.
You download ChromeDriver from [http://chromedriver.chromium.org](http://chromedriver.chromium.org) and then use ```--chrome.chromedriverPath``` to set the path to the new version of the ChromeDriver.
## Safari
You can run Safari on Mac OS X. To run on iOS you need Catalina and iOS 13. To see more what you can do with the Safaridriver you can run `man safaridriver` in your terminal.
You can run Safari on Mac OS X. To run on iOS you need Catalina and iOS 13. To see more of what you can do with the SafariDriver you can run `man safaridriver` in your terminal.
### Limitations
We do not support HAR, video, cookies/request headers in Safari at the moment.
@ -145,7 +145,7 @@ There are a couple of different ways to choose which device to use:
* `--safari.useSimulator` if the value of useSimulator is true, safaridriver will only use iOS Simulator hosts. If the value of safari:useSimulator is false, safaridriver will not use iOS Simulator hosts. NOTE: An Xcode installation is required in order to run WebDriver tests on iOS
### Diagnose problems
If you need to file a bug with Safaridriver, you also want to include diagnostics generated by safaridriver. You do that by adding `--safari.diagnose` to your run.
If you need to file a bug with SafariDriver, you also want to include diagnostics generated by SafariDriver. You do that by adding `--safari.diagnose` to your run.
~~~bash
sitespeed.io --safari.ios -b safari --safari.diagnose
@ -173,7 +173,7 @@ If you add your own complete check you can also choose when your check is run. B
## Custom metrics
You can collect your own metrics in the browser by supplying Javascript file(s). By default we collect all metrics inside [these folders](https://github.com/sitespeedio/browsertime/tree/master/browserscripts), but you might have something else you want to collect.
You can collect your own metrics in the browser by supplying JavaScript file(s). By default we collect all metrics inside [these folders](https://github.com/sitespeedio/browsertime/tree/master/browserscripts), but you might have something else you want to collect.
Each JavaScript file needs to return a metric/value which will be picked up and returned in the JSON. If you return a number, statistics will automatically be generated for the value (like median/percentiles etc).
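As an illustration (the file name and metric here are hypothetical; the real scripts in the folders linked above are the reference), such a file could look like this:

```javascript
// customTiming.js (hypothetical file name): a custom metric sketch.
// The file is evaluated in the browser, and the value of its last
// expression is picked up as the metric.
function domInteractiveTime(timing) {
  // Milliseconds from navigation start until the DOM became interactive.
  return timing.domInteractive - timing.navigationStart;
}
// In the browser the file would end with the expression:
//   domInteractiveTime(window.performance.timing);
// Since that is a number, statistics (median/percentiles etc) are
// generated for it automatically across iterations.
```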

View File

@ -13,7 +13,7 @@ category: sitespeed.io
# How to Write a Good Bug Report
{:.no_toc}
<b>TL;DR - Please create a reproducable bug report!</b>
<b>TL;DR - Please create a reproducible bug report!</b>
* Lets place the TOC here
{:toc}
@ -23,7 +23,7 @@ We love when you create a new issue for sitespeed.io! We really do. New issues h
Sometimes we get a really detailed issue: you describe exactly what you do when you get the problem, you share the log, you write down what you have tested, share screenshots, share videos. You even try to understand why you get this bug. When we get an issue like that, it always jumps to my number one priority. If you put down all that time and effort to really describe the issue, we want to put all our effort into fixing it.
It also happens (quite often) that we get issues that misses important information, so we need to ask you again and again about the problem (like how to reproduce the issue). Sometimes we need to do that two/three/four times within that issue. Issues that misses vital information takes longer time to fix/close and that makes us spend more time asking questions instead of fixing actual bugs or creating new functionallity.
It also happens (quite often) that we get issues that miss important information, so we need to ask you again and again about the problem (like how to reproduce the issue). Sometimes we need to do that two/three/four times within that issue. Issues that miss vital information take longer to fix/close, and that makes us spend more time asking questions instead of fixing actual bugs or creating new functionality.
We use an [issue template](https://raw.githubusercontent.com/sitespeedio/sitespeed.io/master/.github/ISSUE_TEMPLATE.md) with a comment about what we need you to write, but it seems that is not the best way, so let us instead show you what we need!
@ -32,7 +32,7 @@ Before you start creating a issue, you should make sure you have read through ou
## Explain how to reproduce your issue
The best way to make sure we can fix your issue is to make sure we can reproduce the problem you have. If we can reproduce the problem, we can verify that we actually have fixed it with our code change.
**Exactly** what do we mean by making it reproducable? We should be able to copy/paste your example CLI parameters and try on our local machine and then get the same problem that you have.
**Exactly** what do we mean by making it reproducible? We should be able to copy/paste your example CLI parameters and try on our local machine and then get the same problem that you have.
To help us reproduce your problem there are a couple of things we need:
@ -50,7 +50,7 @@ If you give us this information we can usually fix your issue faster.
* Best case you can fix the issue and send us a PR with a fix. We love PRs for bugs :) But of course that is only the best case scenario.
* Search current [Github issues](https://github.com/sitespeedio/sitespeed.io/issues). Is this bug reported before? Does it lack info? Please add your own comment to that issue if it is open. If you aren't sure that your bug is the same as the other bug, please create another issue. Do not hihack issues. Do not comment on closed issue, please create a new issue instead and add a reference to the old issue.
* Search current [GitHub issues](https://github.com/sitespeedio/sitespeed.io/issues). Is this bug reported before? Does it lack info? Please add your own comment to that issue if it is open. If you aren't sure that your bug is the same as the other bug, please create another issue. Do not hijack issues. Do not comment on a closed issue; create a new issue instead and add a reference to the old issue.
* Do you think this is somehow related to Docker (generic Docker issues etc)? Then please [search](https://duckduckgo.com/) for that problem or head over to [forums.docker.com](https://forums.docker.com/) and have a look there first.
@ -63,7 +63,7 @@ If you give us this information we can usually fix your issue faster.
Here are dos and don'ts if you want your bug fixed:
Please do:
* [Provide a reproducable test case](#explain-how-to-reproduce-your-issue).
* [Provide a reproducible test case](#explain-how-to-reproduce-your-issue).
* If you don't get a response in a couple of days, write a message in the [general channel in Slack](https://sitespeedio.herokuapp.com/).
Please don't:

View File

@ -1,7 +1,7 @@
sitespeed.js [options] <url>/<file>
Browser
--browsertime.browser, -b, --browser Choose which Browser to use when you test. Safari only works on Mac OS X and iOS 13 (or later). Chrome needs to be the same version as the current installed Chromedriver (check the changelog for what version that is currently used). Use --chrome.chromedriverPath to use another Chromedriver version. [choices: "chrome", "firefox", "safari"] [default: "chrome"]
--browsertime.browser, -b, --browser Choose which Browser to use when you test. Safari only works on Mac OS X and iOS 13 (or later). Chrome needs to be the same version as the current installed ChromeDriver (check the changelog for what version that is currently used). Use --chrome.chromedriverPath to use another ChromeDriver version. [choices: "chrome", "firefox", "safari"] [default: "chrome"]
--browsertime.iterations, -n How many times you want to test each page [default: 3]
--browsertime.spa, --spa Convenient parameter to use if you test a SPA application: will automatically wait for X seconds after last network activity and use hash in file names. Read https://www.sitespeed.io/documentation/sitespeed.io/spa/ [boolean] [default: false]
--browsertime.connectivity.profile, -c The connectivity profile. To actually set the connectivity you can choose between Docker networks or Throttle, read https://www.sitespeed.io/documentation/sitespeed.io/connectivity/ [string] [choices: "3g", "3gfast", "3gslow", "3gem", "2g", "cable", "native", "custom"] [default: "native"]
@ -18,9 +18,9 @@ Browser
--browsertime.selenium.url Configure the path to the Selenium server when fetching timings using browsers. If not configured the supplied NodeJS/Selenium version is used.
--browsertime.viewPort, --viewPort The browser view port size WidthxHeight like 400x300 [default: "1366x708"]
--browsertime.userAgent, --userAgent The full User Agent string, defaults to the User Agent used by the browsertime.browser option.
--browsertime.preURL, --preURL A URL that will be accessed first by the browser before the URL that you wanna analyze. Use it to fill the cache.
--browsertime.preScript, --preScript Selenium script(s) to run before you test your URL. They will run outside of the analyze phase. Note that --preScript can be passed multiple times.
--browsertime.postScript, --postScript Selenium script(s) to run after you test your URL. They will run outside of the analyze phase. Note that --postScript can be passed multiple times.
--browsertime.preURL, --preURL A URL that will be accessed first by the browser before the URL that you wanna analyse. Use it to fill the cache.
--browsertime.preScript, --preScript Selenium script(s) to run before you test your URL. They will run outside of the analyse phase. Note that --preScript can be passed multiple times.
--browsertime.postScript, --postScript Selenium script(s) to run after you test your URL. They will run outside of the analyse phase. Note that --postScript can be passed multiple times.
--browsertime.delay, --delay Delay between runs, in milliseconds. Use it if your web server needs to rest between runs :)
--browsertime.pageLoadStrategy, --pageLoadStrategy The Page Load Strategy decides when you have control of the page load. Default is normal meaning you will have control after onload. You can change that to none to get control direct after navigation. [choices: "normal", "none"] [default: "normal"]
--browsertime.visualMetrics, --visualMetrics, --speedIndex Calculate Visual Metrics like SpeedIndex, First Visual Change and Last Visual Change. Requires FFMpeg and Python dependencies [boolean]
@ -65,7 +65,7 @@ Chrome
--browsertime.chrome.enableTraceScreenshots, --chrome.enableTraceScreenshots Include screenshots in the trace log (enabling the trace category disabled-by-default-devtools.screenshot). [boolean]
--browsertime.chrome.collectConsoleLog, --chrome.collectConsoleLog Collect Chromes console log and save to disk. [boolean]
--browsertime.chrome.binaryPath, --chrome.binaryPath Path to custom Chrome binary (e.g. Chrome Canary). On OS X, the path should be to the binary inside the app bundle, e.g. "/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary"
--browsertime.chrome.chromedriverPath, --chrome.chromedriverPath Path to custom Chromedriver binary. Make sure to use a Chromedriver version that's compatible with the version of Chrome you're using
--browsertime.chrome.chromedriverPath, --chrome.chromedriverPath Path to custom ChromeDriver binary. Make sure to use a ChromeDriver version that's compatible with the version of Chrome you're using
--browsertime.chrome.cdp.performance, --chrome.cdp.performance Collect Chrome performance metrics from Chrome DevTools Protocol [boolean] [default: true]
--browsertime.chrome.collectLongTasks, --chrome.collectLongTasks Collect CPU long tasks, using the Long Task API [boolean]
--browsertime.chrome.CPUThrottlingRate, --chrome.CPUThrottlingRate Enables CPU throttling to emulate slow CPUs. Throttling rate as a slowdown factor (1 is no throttle, 2 is 2x slowdown, etc) [number]

View File

@ -0,0 +1,61 @@
timings.firstPaint
timings.fullyLoaded
timings.pageLoadTime
timings.FirstVisualChange
timings.LastVisualChange
timings.SpeedIndex
timings.PerceptualSpeedIndex
timings.VisualReadiness
timings.VisualComplete95
requests.total
requests.html
requests.javascript
requests.css
requests.image
requests.font
requests.httpErrors
transferSize.total
transferSize.html
transferSize.javascript
transferSize.css
transferSize.image
transferSize.font
transferSize.favicon
transferSize.json
transferSize.other
transferSize.plain
transferSize.svg
contentSize.total
contentSize.html
contentSize.javascript
contentSize.css
contentSize.image
contentSize.font
contentSize.favicon
contentSize.json
contentSize.other
contentSize.plain
contentSize.svg
thirdParty.transferSize
thirdParty.requests
score.accessibility
score.bestpractice
score.privacy
score.performance
lighthouse.performance
lighthouse.accessibility
lighthouse.best-practices
lighthouse.seo
lighthouse.pwa
webpagetest.SpeedIndex
webpagetest.lastVisualChange
webpagetest.render
webpagetest.visualComplete
webpagetest.visualComplete95
webpagetest.TTFB
webpagetest.fullyLoaded
gpsi.speedscore
axe.critical
axe.serious
axe.minor
axe.moderate

View File

@ -0,0 +1,115 @@
---
layout: default
title: Configure the HTML output
description: Configure and change the default HTML.
keywords: configuration, html, documentation, web performance, sitespeed.io
nav: documentation
category: sitespeed.io
image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
twitterdescription: Configure the HTML output
---
[Documentation]({{site.baseurl}}/documentation/sitespeed.io/) / Configure the HTML output
# Configure the HTML output
{:.no_toc}
* Let's place the TOC here
{:toc}
You can configure some parts of the HTML report that is generated by sitespeed.io.
## Configure page metrics
When you run a test, a pages page is generated where you can compare all the URLs that have been tested. By default we cherry-pick a couple of metrics that are shown in the table, but you can also change them. This is useful if there is a specific metric that is your main focus.
![Page columns]({{site.baseurl}}/img/pagecolumns.png)
{: .img-thumbnail}
You can configure which metrics to show in the columns with the `--html.pageSummaryMetrics` CLI parameter. Pass it multiple times to add multiple columns, or use a JSON configuration file and create an array with the metrics that you want to use.
~~~bash
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} --html.pageSummaryMetrics timings.pageLoadTime --html.pageSummaryMetrics requests.total https://www.sitespeed.io
~~~
Or use a configuration json:
~~~json
"html": {
  "pageSummaryMetrics": [
    "transferSize.total",
    "requests.total",
    "thirdParty.requests",
    "transferSize.javascript",
    "transferSize.css",
    "transferSize.image",
    "score.performance"
  ]
}
~~~
Which metrics can you use? It is the same setup as when you create a budget file. At the moment you can choose between [these metrics](#configurable-metrics).
[Let us know](https://github.com/sitespeedio/sitespeed.io/issues/new) if there are any metrics that you are missing!
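For reference, a performance budget file uses the same dotted metric names. A minimal sketch (the values are made-up examples; see the performance budget documentation for the exact format):

```json
{
  "budget": {
    "timings": {
      "firstPaint": 1000,
      "pageLoadTime": 3000
    },
    "requests": {
      "total": 100
    },
    "transferSize": {
      "total": 400000
    }
  }
}
```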
## Configure page summary boxes
The summary boxes on the start page are also configurable. You can choose which metrics to show.
![Summary boxes]({{site.baseurl}}/img/summary-boxes.png)
{: .img-thumbnail}
It follows the same pattern as page columns and uses the same friendly names.
~~~bash
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} --html.summaryBoxes timings.pageLoadTime --html.summaryBoxes requests.total https://www.sitespeed.io
~~~
Or use a configuration json:
~~~json
"html": {
  "summaryBoxes": [
    "transferSize.total",
    "requests.total",
    "thirdParty.requests",
    "transferSize.javascript",
    "transferSize.css",
    "transferSize.image",
    "score.performance"
  ]
}
~~~
## Configurable metrics
Here are the different metrics that you can show in the summary boxes or in the page HTML. Any metric missing? [Make a PR](https://github.com/sitespeedio/sitespeed.io/blob/master/lib/support/friendlynames.js) or [create an issue](https://github.com/sitespeedio/sitespeed.io/issues/new)!
~~~
{% include_relative friendlynames.md %}
~~~
## Show your script in the HTML output
If you are running tests using scripting it can sometimes be hard to know what you are actually testing when you look at the HTML result. Then add `--html.showScript` to include a link on the result page.
![Link to scripting]({{site.baseurl}}/img/the-script-link.png)
{: .img-thumbnail}
## Link to open your HAR files in compare
If you push your result HTML pages to S3 or another public server, you can use [https://compare.sitespeed.io](https://compare.sitespeed.io) or your own deployed version of [compare](https://github.com/sitespeedio/compare) to compare your HAR files.
You add the link to the HTML result with `--html.compareURL https://compare.sitespeed.io/` and you will then have a button in your result where you can compare your HAR files.
Make sure that your server has correct [CORS settings](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) so that compare.sitespeed.io (or your own server) can get the HAR file.
![Link to scripting]({{site.baseurl}}/img/compare-button.png)
{: .img-thumbnail}

View File

@ -17,7 +17,7 @@ twitterdescription:
{:toc}
## Change/set connectivity
You can and should throttle the connection to make the connectivity slower to make it easier to catch regressions. If you dont do it, you can run your tests with different connectivity profiles and regressions/improvements that you see is caused by your servers flakey internet connection
You can and should throttle the connection to make the connectivity slower, making it easier to catch regressions. If you don't, your tests will run with varying connectivity and the regressions/improvements that you see may be caused by your server's flaky internet connection.
The best way to do that is to setup a network bridge in Docker, use our connectivity engine [Throttle](https://github.com/sitespeedio/throttle) or if you use Kubernetes you can use [TSProxy](https://github.com/WPO-Foundation/tsproxy).

View File

@ -22,8 +22,8 @@ You can use sitespeed.io to keep track of what is happening with your site by ma
To do this you define your own [budget file](../performance-budget/#the-budget-file) with rules on when to break your build. This budget will return an error status code after the run. You can also choose to output JUnit XML and TAP reports.
## Github Actions
If you are using [Github Actions](https://github.com/features/actions) beta it's super easy to run sitespeed.io. Remember though that actions are in beta and can change. They are running an small instances at the moment and you have no way of [setting the connectivity](/documentation/sitespeed.io/connectivity/) so you shouldn't rely on timing metrics.
## GitHub Actions
If you are using the [GitHub Actions](https://github.com/features/actions) beta it's super easy to run sitespeed.io. Remember though that Actions are in beta and can change. They run on small instances at the moment and you have no way of [setting the connectivity](/documentation/sitespeed.io/connectivity/), so you shouldn't rely on timing metrics.
Actions work well with a [performance budget](/documentation/sitespeed.io/performance-budget/). You should set your budget in a file in the repo that you are testing. In this example we call the file *budget.json* and put it in the *.github* folder in the repo.
@ -51,7 +51,7 @@ Setup a simple budget that check the URLs you test against number of requests, t
}
```
Then you can setup your action either via the Github GUI or using configuration. Make sure to setup your action to the right Docker file: ```docker://sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %}-action```.
Then you can set up your action either via the GitHub GUI or using configuration. Make sure to point your action to the right Docker file: ```docker://sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %}-action```.
A simple setup looks something like this:
@ -88,7 +88,7 @@ docker run -v ${WORKSPACE}:/sitespeed.io sitespeedio/sitespeed.io --outputFolder
![HTML reports]({{site.baseurl}}/img/html-publisher.png)
{: .img-thumbnail}
The HTML result pages runs Javascript, so you need to change the [Jenkins Content Security Policy](https://wiki.jenkins-ci.org/display/JENKINS/Configuring+Content+Security+Policy) for them to work with the plugin.
The HTML result pages run JavaScript, so you need to change the [Jenkins Content Security Policy](https://wiki.jenkins-ci.org/display/JENKINS/Configuring+Content+Security+Policy) for them to work with the plugin.
When you start Jenkins make sure to set the environment variable <code>-Dhudson.model.DirectoryBrowserSupport.CSP="sandbox allow-scripts; style-src 'unsafe-inline' *;script-src 'unsafe-inline' *;"</code>.

View File

@ -80,7 +80,7 @@ Then [**run.sh**](https://github.com/sitespeedio/dashboard.sitespeed.io/blob/mas
You need to modify our tests and scripts so that you don't test the exact same URLs as us :)
#### Configuration
In our example we have two configuration files on the server that we extends. These configuration files holds the secrets that we don't want to expose on our public Github repo. In our example it they look like this:
In our example we have two configuration files on the server that we extend. These configuration files hold the secrets that we don't want to expose in our public GitHub repo. In our example they look like this:
**/conf/secrets.json**
```json
@ -132,18 +132,6 @@ Then our configuration files in [**/config/**](https://github.com/sitespeedio/da
And when we run our tests, we map the /config volume on the server into our Docker container. You can see that in the [run.sh](https://github.com/sitespeedio/dashboard.sitespeed.io/blob/master/run.sh) file. Look for `-v /config:/config`. That is the magic line.
We also have an env config on the server (that we feed to Docker with `--env-file /config/env`):
**/conf/env**
```
SITESPEED_IO_BROWSERTIME__WIKIPEDIA__USER=username
SITESPEED_IO_BROWSERTIME__WIKIPEDIA__PASSWORD=secret
```
that is used for secrets that we want to use inside of scripts. You can see how that is used in [our login test script](https://github.com/sitespeedio/dashboard.sitespeed.io/blob/master/nyc3-1/desktop/scripts/loginWikipedia.js).
The environment variables are automatically picked up by our CLI. *SITESPEED_IO_BROWSERTIME__WIKIPEDIA__USER* will be *wikipedia.user* in our options object.
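As a sketch of the naming convention only (this is not sitespeed.io's actual parsing code, and handling of camelCase option names is not covered), the mapping can be thought of as: strip the `SITESPEED_IO_` prefix, split on double underscores, lowercase the parts and join them with dots:

```javascript
// Sketch only: illustrates how SITESPEED_IO_BROWSERTIME__WIKIPEDIA__USER
// maps to browsertime.wikipedia.user, i.e. wikipedia.user inside the
// browsertime options object.
function envToOptionPath(name) {
  return name
    .replace(/^SITESPEED_IO_/, '') // drop the CLI prefix
    .split('__')                   // double underscore separates levels
    .map(part => part.toLowerCase())
    .join('.');
}
```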
We then also map the current working dir to `-v "$(pwd)":/sitespeed.io` and then feed the config file to sitespeed.io with `--config /sitespeed.io/config`. That way, inside the Docker container we have **/config/** with the secret configuration files and **/sitespeed.io/config** with the configuration we want to use for our tests.

View File

@ -75,7 +75,7 @@ If you want to test and push to Graphite/InfluxDB:
- Run: <code>docker run --shm-size=1g --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io https://www.sitespeed.io -n 1 --graphite.host=192.168.65.1</code> to push the data to Graphite. The IP is the localhost IP if you run on a Mac.
- Check the metrics at [http://127.0.0.1:3000/](http://127.0.0.1:3000/).
If you are new to Git/Github and want to make a PR you can start with reading [Digital Oceans tutorial on how to make PRs](https://www.digitalocean.com/community/tutorials/how-to-create-a-pull-request-on-github).
If you are new to Git/GitHub and want to make a PR you can start by reading [Digital Ocean's tutorial on how to make PRs](https://www.digitalocean.com/community/tutorials/how-to-create-a-pull-request-on-github).
### Log and debug
To get a better understanding of what happens you should use the log. You can change log level by using multiple <code>-v</code>. If you want to log on the lowest level getting all information you can use <code>-vvv</code>. If that is too much information use <code>-vv</code> or <code>-v</code>.
@ -105,7 +105,7 @@ Where pageInfo is the data structure that you wanna inspect.
#### Committing changes
* Install Commitizen with npm <code>npm install -g commitizen</code>
* Then simply use command <code>git cz</code> instead of <code>git commit</code> when commiting changes
* Then simply use command <code>git cz</code> instead of <code>git commit</code> when committing changes
#### Before you send the pull request
@ -113,7 +113,7 @@ Before you send the PR make sure you:
* Squash your commits so it looks sane
* Make sure your code follows our lint rules by running: <code>npm run lint</code>
* Make sure your code doesn't break any tests: <code>npm test</code>
* Update the documentation [https://github.com/sitespeedio/sitespeed.io/tree/master/docs](https://github.com/sitespeedio/sitespeed.io/tree/master/docs) in another pull request. When we merge the PR the documentaion will automatically be updated so we do that when we push the next release
* Update the documentation [https://github.com/sitespeedio/sitespeed.io/tree/master/docs](https://github.com/sitespeedio/sitespeed.io/tree/master/docs) in another pull request. When we merge the PR the documentation will automatically be updated so we do that when we push the next release
### Do a release
When you become a member of the sitespeed.io team you can push releases. You do that by running the release bash script in root: <code>./release.sh</code>
@ -122,13 +122,13 @@ To do a release you need to first install np (a better *npm publish*): <code>npm
Then run the bash script. It will push your new release to npm and Docker Hub. Remember to let your latest code change run a couple of hours on our test server before you push the release (the latest code is automatically deployed on the test server).
To be able to deploy a new version you new to have access to our Docker account, npm, our Github repos and use 2FA.
To be able to deploy a new version you need to have access to our Docker account, npm, our GitHub repos and use 2FA.
### Use sitespeed.io from NodeJS
If you want to integrate sitespeed.io into your NodeJS application you can check out how we do that in [our Grunt plugin](https://github.com/sitespeedio/grunt-sitespeedio/blob/master/tasks/sitespeedio.js). It's a great working example. :)
### Contributing to the documentation
The documention lives in your cloned directory under *docs/*.
The documentation lives in your cloned directory under *docs/*.
First make sure you have Bundler: <code>gem install bundler</code>


@ -62,9 +62,9 @@ To send metrics to Graphite you need to at least configure the Graphite host:
If you don't run Graphite on default port you can change that to by <code>--graphite.port</code>.
If your instance is behind authentication yoy can use <code>--graphite.auth</code> with the format **user:password**.
If your instance is behind authentication you can use <code>--graphite.auth</code> with the format **user:password**.
If you use a specifc port for the user inteface (and where we send the annotations) you can change that with <code>--graphite.httpPort</code>.
If you use a specific port for the user interface (and where we send the annotations) you can change that with <code>--graphite.httpPort</code>.
If you use a different web host for Graphite than your default host, you can change that with <code>--graphite.webHost</code>. If you don't use a specific web host, the default domain will be used.
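Putting those Graphite options together, a run could look something like this (host name, user and password are placeholders):

~~~bash
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} \
  --graphite.host graphite.example.com \
  --graphite.port 2003 \
  --graphite.auth user:password \
  --graphite.httpPort 8080 \
  https://www.sitespeed.io
~~~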
@ -156,7 +156,7 @@ You probably want to make sure that only your sitespeed.io servers can post data
Your Graphite server needs to open port 2003 and 8080 for TCP traffic for your servers running sitespeed.io.
If you are using AWS you always gives your servers a security group. The servers running sitespeed.io (collecting mtrics) can all have the same group (allows outbund traffic and only allowing inbound for ssh).
If you are using AWS you always give your servers a security group. The servers running sitespeed.io (collecting metrics) can all have the same group (allowing outbound traffic and only allowing inbound SSH).
The Graphite server can then open 2003 and 8080 only for that group (write the group name in the source/security group field). In this example we also run Grafana on port 3000 and have it open to the world.
@ -177,7 +177,7 @@ If you are using Digital Ocean, you can setup the firewall rule in the admin. He
## Storing the data
You are probably going to need to store the metrics in Graphite on another disk. If you are an AWS user, you can [set up an EBS volume](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html). If you use Digital Ocean you can follow their [quick start guide](https://www.digitalocean.com/docs/volumes/quickstart/).
When your volume is mounted on your server that runs Graphite, you need to make sure Graphite uses the. Map the Graphite volume to the new volume outside of Docker (both Whisper and [graphite.db](https://github.com/sitespeedio/sitespeed.io/blob/master/docker/graphite/graphite.db)). Map them like this on your physical server (make sure to copy the empty [grahite.db](https://github.com/sitespeedio/sitespeed.io/blob/master/docker/graphite/graphite.db) file):
When your volume is mounted on your server that runs Graphite, you need to make sure Graphite uses the new volume. Map the Graphite volume to the new volume outside of Docker (both Whisper and [graphite.db](https://github.com/sitespeedio/sitespeed.io/blob/master/docker/graphite/graphite.db)). Map them like this on your physical server (make sure to copy the empty [graphite.db](https://github.com/sitespeedio/sitespeed.io/blob/master/docker/graphite/graphite.db) file):
- `/path/on/server/whisper:/opt/graphite/storage/whisper`
- `/path/on/server/graphite.db:/opt/graphite/storage/graphite.db`
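As a sketch, with Docker the two bind mounts from the list above could be passed like this (the image name is just an example, use whatever Graphite image you run):

~~~bash
docker run -d \
  -v /path/on/server/whisper:/opt/graphite/storage/whisper \
  -v /path/on/server/graphite.db:/opt/graphite/storage/graphite.db \
  graphiteapp/graphite-statsd
~~~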
@ -188,7 +188,7 @@ If you use Grafana annotations, you should make sure grafana.db is outside of th
1. Make sure you have [configured storage-aggregation.conf](https://raw.githubusercontent.com/sitespeedio/sitespeed.io/master/docker/graphite/conf/storage-aggregation.conf) in Graphite to fit your needs.
2. Configure in your [storage-schemas.conf](https://raw.githubusercontent.com/sitespeedio/sitespeed.io/master/docker/graphite/conf/storage-schemas.conf) how long you want to store your metrics.
3. *MAX_CREATES_PER_MINUTE* is usually quite low in [carbon.conf](https://raw.githubusercontent.com/sitespeedio/sitespeed.io/master/docker/graphite/conf/carbon.conf). That means you will not get all the metrics created for the first run, so you can increase it.
4. Map the Graphite volume to a physical directory outside of Docker to have better control (both Whisper and [graphite.db](https://github.com/sitespeedio/sitespeed.io/blob/master/docker/graphite/graphite.db)). Map them like this on your physical server (make sure to copy the empty [grahite.db](https://github.com/sitespeedio/sitespeed.io/blob/master/docker/graphite/graphite.db) file):
4. Map the Graphite volume to a physical directory outside of Docker to have better control (both Whisper and [graphite.db](https://github.com/sitespeedio/sitespeed.io/blob/master/docker/graphite/graphite.db)). Map them like this on your physical server (make sure to copy the empty [graphite.db](https://github.com/sitespeedio/sitespeed.io/blob/master/docker/graphite/graphite.db) file):
- /path/on/server/whisper:/opt/graphite/storage/whisper
- /path/on/server/graphite.db:/opt/graphite/storage/graphite.db
If you use Grafana annotations, you should make sure grafana.db is outside of the container. Follow the documentation at [grafana.org](http://docs.grafana.org/installation/docker/#grafana-container-using-bind-mounts).


@ -1,6 +1,6 @@
---
layout: default
title: Documentation sitespeed.io 10.x
title: Documentation sitespeed.io 11.x
description: Read about all you can do with sitespeed.io.
keywords: tools, documentation, web performance
nav: documentation
@ -9,7 +9,7 @@ image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
twitterdescription: Documentation for sitespeed.io.
---
# Documentation v10
# Documentation v11
<img src="{{site.baseurl}}/img/logos/sitespeed.io.png" class="pull-right img-big" alt="sitespeed.io logo" width="200" height="214">
@ -21,7 +21,7 @@ Sitespeed.io is the complete toolbox to test the web performance of your web sit
* [Browsers](browsers/) - collect timings using real browsers. We support Firefox, Chrome, Chrome on Android and limited support for Safari on OS X and iOS.
* [Configuration](configuration/) - there's a lot of things you can do with sitespeed.io, lets checkout how!
* [Connectivity](connectivity/) - set the connectivity to emulate real users network conditions.
* [Continuously run your tests](continuously-run-your-tests/) - how to setup your test to continously run your tests.
* [Continuously run your tests](continuously-run-your-tests/) - how to set up your tests to run continuously.
* [Docker](docker/) - how to use our Docker containers.
* [F.A.Q and Best Practice](best-practice/) - here we keep track of questions we get in Slack.
* [Performance Dashboard](performance-dashboard/) - monitor your web site and keep track of your metrics and performance.
@ -31,7 +31,8 @@ Sitespeed.io is the complete toolbox to test the web performance of your web sit
## More details
* [Alerts](alerts/) - send alerts (email/Slack/PagerDuty etc) when you get a performance regression.
* [Axe](axe/) - run accessibility tests.
* [Continuous Integration](continuous-integration/) - generate JUnit XML/TAP and use Jenkins, Circle CI, Gitlab CI, Github Actions, Grunt or the Gulp plugin.
* [Continuous Integration](continuous-integration/) - generate JUnit XML/TAP and use Jenkins, Circle CI, GitLab CI, GitHub Actions, Grunt or the Gulp plugin.
* [Configure HTML output](configure-html/) - change the default HTML output.
* [CPU](cpu/) - measure CPU metrics to see where your page spends the time.
* [Developers](developers/) - start here when you want to do PRs or create a plugin.
* [Graphite](graphite/) - how to configure and store your metrics in Graphite (and using StatsD).


@ -21,7 +21,7 @@ You can run sitespeed.io using our Docker containers or using NodeJS.
## Docker
We have [Docker images](https://hub.docker.com/r/sitespeedio/sitespeed.io/) with sitespeed.io, Chrome, Firefox, Xvfb and all the software needed for recording a video of the browser screen and analyze it to get Visual Metrics. It is super easy to use). Here's how to use the container with both Firefox & Chrome (install [Docker](https://docs.docker.com/install/) first).
We have [Docker images](https://hub.docker.com/r/sitespeedio/sitespeed.io/) with sitespeed.io, Chrome, Firefox, Xvfb and all the software needed for recording a video of the browser screen and analysing it to get Visual Metrics. It is super easy to use. Here's how to use the container with both Firefox & Chrome (install [Docker](https://docs.docker.com/install/) first).
### Mac & Linux
@ -64,16 +64,16 @@ yarn global add sitespeed.io
We support Windows using [Docker](https://docs.docker.com/engine/installation/windows/). To be able to support running on Windows with NodeJS we need at least one [core contributor](/aboutus/) that can focus on Windows. Are you that one? Please [get in touch](https://github.com/sitespeedio/sitespeed.io/issues/new)!
### Skip installing Chromedriver/Geckodriver
If you don't want to install Chromedriver or Geckodriver when you install through npm you can skip them with an environment variable.
### Skip installing ChromeDriver/GeckoDriver
If you don't want to install ChromeDriver or GeckoDriver when you install through npm you can skip them with an environment variable.
Skip installing Chromedriver:
Skip installing ChromeDriver:
~~~bash
CHROMEDRIVER_SKIP_DOWNLOAD=true npm install sitespeed.io -g
~~~
Skip installing Geckodriver:
Skip installing GeckoDriver:
~~~bash
GECKODRIVER_SKIP_DOWNLOAD=true npm install sitespeed.io -g


@ -17,7 +17,7 @@ twitterdescription: The web performance leaderboard.
* Let's place the TOC here
{:toc}
The [leaderboard dashboard](https://dashboard.sitespeed.io/dashboard/db/leaderboard) is the easist way to compare how you are doing against your competition. To get it going you need [Grafana](https://grafana.com) (6.2 or later) and Graphite. If you don't have that already, you can follow the instructions in [performance dashboard documentation](/documentation/sitespeed.io/performance-dashboard/#up-and-running-in-almost-5-minutes). And to run your tests, you should follow [our example](https://github.com/sitespeedio/dashboard.sitespeed.io).
The [leaderboard dashboard](https://dashboard.sitespeed.io/dashboard/db/leaderboard) is the easiest way to compare how you are doing against your competition. To get it going you need [Grafana](https://grafana.com) (6.2 or later) and Graphite. If you don't have that already, you can follow the instructions in [performance dashboard documentation](/documentation/sitespeed.io/performance-dashboard/#up-and-running-in-almost-5-minutes). And to run your tests, you should follow [our example](https://github.com/sitespeedio/dashboard.sitespeed.io).
The dashboard lists the pages that you test, with the fastest/best URL first (yes, it is a leaderboard!). It looks like this:
![Leaderboard example]({{site.baseurl}}/img/leaderboard-example.png)
@ -67,4 +67,4 @@ When you try out our setup at [dashboard.sitespeed.io](https://dashboard.sitespe
![Score leaderboard]({{site.baseurl}}/img/combine-namespaces.png)
{: .img-thumbnail-center}
If you have any problem with dashboard, let us know in a [Github issue](https://github.com/sitespeedio/sitespeed.io/issues/new)!
If you have any problem with the dashboard, let us know in a [GitHub issue](https://github.com/sitespeedio/sitespeed.io/issues/new)!


@ -18,7 +18,7 @@ twitterdescription: Configuring metrics to use
{:toc}
# Collected metrics
Sitespeed.io collects a lot of metrics which are filtered before they are sent to Graphite/InfluxDB. You can remove filters and/or add your own filters. Some sensible defaults have been set for you, if you have suggestions to change them create an [issue at Github](https://github.com/sitespeedio/sitespeed.io/issues/new).
Sitespeed.io collects a lot of metrics which are filtered before they are sent to Graphite/InfluxDB. You can remove filters and/or add your own filters. Some sensible defaults have been set for you; if you have suggestions to change them, create an [issue at GitHub](https://github.com/sitespeedio/sitespeed.io/issues/new).
## Summary vs pageSummary vs run
The metrics are separated into three groups:


@ -99,7 +99,7 @@ And using Docker (remember: only works in Linux hosts):
docker run --privileged -v /dev/bus/usb:/dev/bus/usb -e START_ADB_SERVER=true --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} -n 1 --android --browsertime.xvfb false https://www.sitespeed.io
```
If you want to run Docker on Mac OS X, you can follow Appiums [setup](https://github.com/appium/appium-docker-android) by creating a docker-machine, give ut USB access and then run the container from that Docker machine.
If you want to run Docker on Mac OS X, you can follow Appium's [setup](https://github.com/appium/appium-docker-android) by creating a docker-machine, giving it USB access and then running the container from that Docker machine.
### Driving multiple phones from the same computer
@ -110,7 +110,7 @@ You can do that with the `--device` Docker command:
The first part is the bus and that will not change, but the second part _devnum_ changes if you unplug the device or restart.
You need to know which phone are connected to which usb port.
You need to know which phone is connected to which USB port.
Here's an example of how you can get that automatically before you start the container, feeding in the unique id (that you get from _lsusb_).
@ -144,11 +144,11 @@ You can choose which Chrome version you want to run on your phone using `--chrom
* Chromium - *org.chromium.chrome*
If you installed Chrome Canary on your phone and want to use it, then add `--chrome.android.package com.chrome.canary` to your run.
Driving different versions needs different versions of the Chromedriver. The Chrome version number needs to match the Chromedriver version number. Browsertime/sitespeed.io ships with the latest stable version of the Chromedriver. If you want to run other versions, you need to [download from the official Chromedriver page](https://chromedriver.chromium.org/downloads). And then you specify the version by using `--chrome.chromedriverPath`.
Driving different versions needs different versions of the ChromeDriver. The Chrome version number needs to match the ChromeDriver version number. Browsertime/sitespeed.io ships with the latest stable version of the ChromeDriver. If you want to run other versions, you need to [download from the official ChromeDriver page](https://chromedriver.chromium.org/downloads). And then you specify the version by using `--chrome.chromedriverPath`.
### Collect trace log
One important thing when testing on mobile is to analyze the Chrome trace log. You can get that with `--cpu`:
One important thing when testing on mobile is to analyse the Chrome trace log. You can get that with `--cpu`:
```bash
sitespeed.io --android --cpu https://www.sitespeed.io
@ -166,7 +166,7 @@ To be able to test you need latest OS X Catalina on your Mac computer and iOS 13
Run your test using npm (instead of Docker).
*Safardriver* the driver that drives Safari is bundled in OS X. But to be able to use it you need to enable it with:
*SafariDriver*, the driver that drives Safari, is bundled in OS X. But to be able to use it you need to enable it with:
```bash
safaridriver --enable


@ -0,0 +1,95 @@
{
"budget": {
"timings": {
"firstPaint": limit,
"fullyLoaded": limit,
"serverResponseTime": limit,
"backEndTime": limit,
"pageLoadTime": limit,
"FirstVisualChange": limit,
"LastVisualChange": limit,
"SpeedIndex": limit,
"ContentfulSpeedIndex": limit,
"PerceptualSpeedIndex": limit,
"VisualReadiness": limit,
"VisualComplete95": limit,
"VisualComplete99": limit,
"VisualComplete": limit,
},
"cpu": {
"longTasks": limit,
"longTasksTotalDuration": limit,
},
"requests": {
"total": limit,
"html": limit,
"javascript": limit,
"css": limit,
"image": limit,
"font": limit,
"httpErrors": limit,
},
"transferSize": {
"total": limit,
"html": limit,
"javascript": limit,
"css": limit,
"image": limit,
"font": limit,
"favicon": limit,
"json": limit,
"other": limit,
"plain": limit,
"svg": limit,
},
"contentSize": {
"total": limit,
"html": limit,
"javascript": limit,
"css": limit,
"image": limit,
"font": limit,
"favicon": limit,
"json": limit,
"other": limit,
"plain": limit,
"svg": limit,
},
"thirdParty": {
"transferSize": limit,
"requests": limit,
},
"score": {
"score": limit,
"accessibility": limit,
"bestpractice": limit,
"privacy": limit,
"performance": limit,
},
"lighthouse": {
"performance": limit,
"accessibility": limit,
"best-practices": limit,
"seo": limit,
"pwa": limit,
},
"webpagetest": {
"SpeedIndex": limit,
"lastVisualChange": limit,
"render": limit,
"visualComplete": limit,
"visualComplete95": limit,
"TTFB": limit,
"fullyLoaded": limit,
},
"gpsi": {
"speedscore": limit,
},
"axe": {
"critical": limit,
"serious": limit,
"minor": limit,
"moderate": limit,
},
}
}


@ -81,7 +81,7 @@ All URLs that you test then needs to have a SpeedIndex faster than 1000. But if
#### Full example
Here is an example of a fully configurued budget file. It shows you what you *can* configure (but you shouldn't configure all of them).
Here is an example of a fully configured budget file.
~~~json
{
@ -178,8 +178,14 @@ And then you can always combine them all.
If you need more metrics for your budget, either [create an issue](https://github.com/sitespeedio/sitespeed.io/issues/new) or look below for using the full internal data structure.
#### All possible metrics you can configure
~~~json
{% include_relative friendlynames.md %}
~~~
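For example, a small budget file using a few of the metric names above could look like this (the limit values here are just illustrations, pick what makes sense for your site):

~~~json
{
  "budget": {
    "timings": {
      "firstPaint": 1000,
      "SpeedIndex": 1200
    },
    "requests": {
      "total": 100
    },
    "transferSize": {
      "total": 400000
    }
  }
}
~~~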
#### Budget configuration using the internal data structure
There's also an old version of settiung a budget where you can do it for all metrics collected by sitespeed.io and works on the internal data structure.
There's also an old version of setting a budget where you can do it for all metrics collected by sitespeed.io and it works on the internal data structure.
You can read more about the metrics/data structure in the [metrics section]({{site.baseurl}}/documentation/sitespeed.io/metrics/).


@ -31,7 +31,7 @@ And you will get a log entry that looks something like this:
~~~
...
The following plugins are enabled: assets,browsertime,coach,domains,html
The following plugins are enabled: assets,browsertime,coach,domains,html
...
~~~
@ -44,7 +44,7 @@ You can remove/disable default plugins if needed. For instance you may not want
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} https://www.sitespeed.io --plugins.remove html
~~~
If you want to disable multiple plugins say you don't need the html and the har files (the harstorer plugin):
If you want to disable multiple plugins, say you don't need the HTML and the HAR files (the harstorer plugin):
~~~bash
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} https://www.sitespeed.io --plugins.remove html --plugins.remove harstorer
@ -117,7 +117,7 @@ docker run --rm -v "$(pwd)":/sitespeed.io my-custom-sitespeedio -b firefox --my-
Pretty cool, huh? :-)
## How to create your own plugin
First let us know about your cool plugin! Then share it with others by publish it to npm or just use Github.
First let us know about your cool plugin! Then share it with others by publishing it to npm or just use GitHub.
### Basic structure
Your plugin needs to follow this structure.
@ -230,7 +230,7 @@ Data from different tools are passed with three different message types:
### Debug/log
You can use the sitespeed.io log to log messages. We use [intel](https://www.npmjs.com/package/intel) for logging.
You get the log object in the context object (so there's no need to require the log) but you you should get a specfic instance so that you can filter the log/see which part of sitespeed.io that writes to the log.
You get the log object in the context object (so there's no need to require the log) but you should get a specific instance so that you can filter the log/see which part of sitespeed.io writes to the log.
In the [open](#opencontext-options) function you can add something like this:
@ -342,7 +342,7 @@ queue.postMessage(make('budget.addMessageType', {type: 'gpsi.pagesummary'}));
~~~
## Testing your plugin
If your plugin lives on Github you should check out our [example Travis-ci file](https://github.com/sitespeedio/plugin-gpsi/blob/master/.travis.yml) for the GPSI plugin. In the example, we checkout the sitespeed.io project and run the plugin against the latest master (we also run it daily in the Travis crontab).
If your plugin lives on GitHub you should check out our [example Travis CI file](https://github.com/sitespeedio/plugin-gpsi/blob/master/.travis.yml) for the GPSI plugin. In the example, we check out the sitespeed.io project and run the plugin against the latest master (we also run it daily in the Travis crontab).
## Example plugin(s)
You can look at the standalone [GPSI plugin](https://github.com/sitespeedio/plugin-gpsi) or the [WebPageTest plugin](https://github.com/sitespeedio/sitespeed.io/tree/master/lib/plugins/webpagetest).


@ -21,7 +21,7 @@ Since sitespeed.io 8.0 the pre/post script has changed. You probably should use
Before sitespeed.io loads and tests a URL you can run your own Selenium script. Do you want to access a URL and pre-load the cache or maybe you want to login as a user and then measure a URL?
We use the NodeJs version of Selenium, you can find the [API documentation here](http://seleniumhq.github.io/selenium/docs/api/javascript/index.html). You need to go into the docs to see how to select the elements you need to do the magic on your page.
We use the NodeJS version of Selenium; you can find the [API documentation here](http://seleniumhq.github.io/selenium/docs/api/javascript/index.html). You need to go into the docs to see how to select the elements you need to do the magic on your page.
Your script needs to follow a specific pattern to be able to run as a pre/post script. The simplest version of a script looks like this:


@ -72,7 +72,7 @@ And then you should also make sure that all the result files (HTML/videos/screen
# Digital Ocean Spaces
[Digital Ocean Spaces](https://developers.digitalocean.com/documentation/spaces/#aws-s3-compatibility)
Digital Ocean is compatiable with the S3 api, so all that is required after setting up your space and aquiring a key and secret is to modify the endpoint that the s3 results are passed to as shown below.
Digital Ocean is compatible with the S3 API, so all that is required after setting up your space and acquiring a key and secret is to modify the endpoint that the S3 results are passed to, as shown below.
## JSON configuration file
If the endpoint is not passed this will default to AWS's endpoint. You may safely exclude it for AWS integration. If you use a JSON configuration file you should make sure you add this to get S3 to work:


@ -25,7 +25,7 @@ Test by scripting was introduced in sitespeed.io 8.0 and Browsertime 4.0 and mak
Scripting works the same in Browsertime and sitespeed.io; the documentation here is for both of the tools.
You have three different choices when you create your script:
* You can use our [commands objects](/documentation/sitespeed.io/scripting/#commmands). They are wrappers around plain JavaScript to make it easier to create your scripts. We prepared for many scenarios but if you need to do really complicated things, you also need [run plain JavaScript](/documentation/sitespeed.io/scripting/#jsrunjavascript) to be able to do what you want. But hey, that's easy!
* You can use our [commands objects](/documentation/sitespeed.io/scripting/#commands). They are wrappers around plain JavaScript to make it easier to create your scripts. We prepared for many scenarios but if you need to do really complicated things, you also need to [run plain JavaScript](/documentation/sitespeed.io/scripting/#jsrunjavascript) to be able to do what you want. But hey, that's easy!
* Or you can run plain JavaScript to navigate or do what you need by using the command [js.run()](/documentation/sitespeed.io/scripting/#jsrunjavascript). That will make it easy to copy/paste your JavaScript from your browsers console and test what you want to do.
* If you are used to do everything with Selenium you can [use ... Selenium](/documentation/sitespeed.io/scripting/#use-selenium-directly) :)
@ -67,7 +67,7 @@ And then you have a few help commands:
* *[click](#click)* on a link and/or wait for the next page to load.
* *[js](#run-javascript)* - run JavaScript in the browser.
* *[switch](#switch)* to another frame or window.
* *[set](#set)* innerHthml, innerText or value to an element.
* *[set](#set)* innerHtml, innerText or value to an element.
Scripting only works for Browsertime. It will not work with Lighthouse/Google Pagespeed Insights or WebPageTest. If you need scripting for WebPageTest [read the WebPageTest scripting documentation](/documentation/sitespeed.io/webpagetest/#webpagetest-scripting).
{: .note .note-info}
@ -100,7 +100,7 @@ module.exports = async function(context, commands) {
That way you can just split your long scripts into multiple files and make it easier to manage.
## Getting values from your page
In some scenirous you want to do different things dependent on what shows on your page. For example: You are testing a shop checkout and you need to verify that the item is in stock. You can run JavaScript and get the value back to your script.
In some scenarios you want to do different things depending on what is shown on your page. For example: You are testing a shop checkout and you need to verify that the item is in stock. You can run JavaScript and get the value back to your script.
Here's a simple example; IRL you will need to get something from the page:
@ -126,7 +126,7 @@ if (exists) {
## Finding the right element
One of the key things in your script is to be able to find the right element to invoke. If the elemnt has an id it's easy. If not you can use developer tools in your favourite browser. The all work mostly the same: Open devtools in the page you want to inspect, click on the element and right click on devtools for that element. Then you will see something like this:
One of the key things in your script is to be able to find the right element to invoke. If the element has an id it's easy. If not you can use developer tools in your favourite browser. They all work mostly the same: Open DevTools in the page you want to inspect, click on the element and right click on DevTools for that element. Then you will see something like this:
![Using Safari to find the selector]({{site.baseurl}}/img/selector-safari.png)
{: .img-thumbnail-center}
@ -185,7 +185,7 @@ module.exports = async function(context, commands) {
// We try/catch so we will catch if the input fields can't be found
// The error is automatically logged in Browsertime and rethrown here
// We could have an alternative flow ...
// else we can just let it cascade since it catched later on and reported in
// else we can just let it cascade since it caught later on and reported in
// the HTML
throw e;
}
@ -229,7 +229,7 @@ module.exports = async function(context, commands) {
// We try/catch so we will catch if the input fields can't be found
// The error is automatically logged in Browsertime and re-thrown here
// We could have an alternative flow ...
// else we can just let it cascade since it catched later on and reported in
// else we can just let it cascade since it caught later on and reported in
// the HTML
throw e;
}
@ -258,7 +258,7 @@ module.exports = async function(context, commands) {
// We try/catch so we will catch if the input fields can't be found
// The error is automatically logged in Browsertime and re-thrown here
// We could have an alternative flow ...
// else we can just let it cascade since it catched later on and reported in
// else we can just let it cascade since it caught later on and reported in
// the HTML
throw e;
}
@ -309,7 +309,7 @@ module.exports = async function(context, commands) {
} catch(e) {
// We try/catch so we will catch if the input fields can't be found
// We could have an alternative flow ...
// else we can just let it cascade since it catched later on and reported in
// else we can just let it cascade since it caught later on and reported in
// the HTML
throw e;
}
@ -330,7 +330,7 @@ module.exports = async function(context, commands) {
### Measure multiple pages and start white
If you test multiple pages you will see that the layout is kept in the browser until the first paint of the new page. You can hack that by remvoving the current body and set the backgroud color to white. Then every video will start white.
If you test multiple pages you will see that the layout is kept in the browser until the first paint of the new page. You can hack that by removing the current body and setting the background color to white. Then every video will start white.
~~~javascript
module.exports = async function(context, commands) {
@ -409,7 +409,7 @@ module.exports = async function(context, commands) {
~~~
### Error handling
You can try/catch failing commands that throw errors. If an error is not catched in your script, it will be catched in sitespeed.io and the error will be logged and reported in the HTML and to your data storage (Graphite/InfluxDb) under the key *browsertime.statistics.errors*.
You can try/catch failing commands that throw errors. If an error is not caught in your script, it will be caught in sitespeed.io and the error will be logged and reported in the HTML and to your data storage (Graphite/InfluxDB) under the key *browsertime.statistics.errors*.
If you do catch the error, you should make sure you report it yourself with the [error command](#error), so you can see that in the HTML. This is needed for all errors except navigating/measuring a URL. They will automatically be reported (since they are always important).
@ -450,13 +450,13 @@ If you wanna keep track of what script you are running, you can include the script int
{: .img-thumbnail}
### Getting correct Visual Metrics
Visual metrics is the metrics that are collected using the video recording of the screen. In most cases that will work just out of the box. One thing to know is that when you go from one page to another page, the browser keeps the layout of the old page. That means that your video will start with the first page (instead of white) when yoy navigate to the next page.
Visual metrics is the metrics that are collected using the video recording of the screen. In most cases that will work just out of the box. One thing to know is that when you go from one page to another page, the browser keeps the layout of the old page. That means that your video will start with the first page (instead of white) when you navigate to the next page.
It will look like this:
![Page to page]({{site.baseurl}}/img/filmstrip-multiple-pages.jpg)
{: .img-thumbnail}
This is perfectly fine in most cases. But if you want to start white (the metrics somehow isn't correct) or if you click a link and that click changes the layout and is catched as First Visual Change, there are workarounds.
This is perfectly fine in most cases. But if you want to start white (the metrics somehow aren't correct) or if you click a link and that click changes the layout and is caught as First Visual Change, there are workarounds.
If you just want to start white and navigate to the next page you can just clear the HTML between pages:
@ -475,7 +475,7 @@ If you want to click a link and want to make sure that the HTML doesn't change w
module.exports = async function(context, commands) {
await commands.measure.start('https://www.sitespeed.io');
// Hide everything
// We do not hide the body since the body needs to be visibile when we do the magic to find the staret of the
// We do not hide the body since the body needs to be visible when we do the magic to find the start of the
// navigation by adding a layer of orange on top of the page
await commands.js.run('for (let node of document.body.childNodes) { if (node.style) node.style.display = "none";}');
// Start measuring
@ -530,7 +530,7 @@ module.exports = async function(context, commands) {
};
~~~
## Commmands
## Commands
All commands return a promise that you should await to make sure it is fulfilled. If a command does not work, we log that automatically and rethrow the error, so you can catch it and act on it.
@ -826,7 +826,7 @@ module.exports = async function(context, commands) {
Create an error. Use it if you catch a thrown error, want to continue with something else, but still report the error.
### Meta data
Add meta data to your script. The extra data will be visibile in the HTML result page.
Add meta data to your script. The extra data will be visible in the HTML result page.
Setting meta data like this:
@ -858,7 +858,7 @@ Add a title of your script. The title is text only.
Add a description of your script. The description can be text/HTML.
### Use Selenium directly
You can use Selenium directly if you need to use things that are not availible through our commands.
You can use Selenium directly if you need to use things that are not available through our commands.
You get a hold of the Selenium objects through the context.
*selenium.webdriver* is the Selenium [WebDriver public API object](https://seleniumhq.github.io/selenium/docs/api/javascript/module/selenium-webdriver/index.html), and *selenium.driver* is the [instantiated version of the WebDriver](https://seleniumhq.github.io/selenium/docs/api/javascript/module/selenium-webdriver/index_exports_WebDriver.html) driving the current browser.

@ -8,7 +8,7 @@ category: sitespeed.io
image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
twitterdescription: Test a single page application - SPA
---
[Documentation]({{site.baseurl}}/documentation/sitespeed.io/) / Single page applicatiom
[Documentation]({{site.baseurl}}/documentation/sitespeed.io/) / Single page application
# Test a single page application
{:.no_toc}

@ -1,6 +1,6 @@
---
layout: default
title: Record a video of the browser screen and analyze it to get Visual Metrics.
title: Record a video of the browser screen and analyse it to get Visual Metrics.
description: You can configure frames per second (fps), the quality of the video and a couple of more things.
keywords: video, documentation, web performance, sitespeed.io
nav: documentation
@ -19,11 +19,11 @@ twitterdescription: Use the video in sitespeed.io
## The stack (easy with Docker)
We use FFMpeg to record a video of the screen at 30 fps (but you can configure the number of frames per second). The easiest way is to use our Docker container with pre-installed FFMpeg, but if you for some reason want to use the npm version, you can record a video too, as long as you install FFMpeg yourself.
When we got the video we use [Visual Metrics](https://github.com/WPO-Foundation/visualmetrics) (built by Pat Meenan) to analyze the video and get SpeedIndex and other visual metrics from the video. If you use our Docker container you get that for free, else you need to install all the [Visual Metrics dependencies](https://github.com/sitespeedio/docker-visualmetrics-deps/blob/master/Dockerfile) yourself.
When we have the video we use [Visual Metrics](https://github.com/WPO-Foundation/visualmetrics) (built by Pat Meenan) to analyse it and get SpeedIndex and other visual metrics. If you use our Docker container you get that for free, else you need to install all the [Visual Metrics dependencies](https://github.com/sitespeedio/docker-visualmetrics-deps/blob/master/Dockerfile) yourself.
We record the video in two steps: First we turn the background orange (that is used by VisualMetrics to know when
the navigation starts), then set the background to white and let the browser go to the URL. The video is recorded
lossless and then when the video has been analyzed, we remove the orange frames and convert the video to a compressed mp4.
losslessly, and when the video has been analysed, we remove the orange frames and convert the video to a compressed mp4.
The video will look something like this:
@ -33,11 +33,14 @@ The video will look something like this:
There are a couple of things that you can do to configure the video and the metrics.
### SpeedIndex and other Visual Metrics
To collect Visual Metrics like firstVisualChange, SpeedIndex, visualComplete85%, visualComplete95% visualComplete99% and lastVisualChange you add the parameter <code>--speedIndex</code>. The video will then be recorded, analyzed and then removed.
To collect Visual Metrics like firstVisualChange, SpeedIndex, visualComplete85%, visualComplete95%, visualComplete99% and lastVisualChange you add the parameter <code>--visualMetrics</code>. The video will then be recorded, analysed and then removed.
### Keep or remove the video
If you want to keep the video when you collect metrics or only want the video, just add <code>--video</code> to the list of parameters.
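As a sketch, the two flags can be combined on one command line (the image name and URL are placeholders, not the exact versioned image from the docs):

```bash
# Sketch only: IMAGE and URL are placeholders. --visualMetrics collects the
# metrics, --video keeps the recording instead of removing it.
IMAGE="sitespeedio/sitespeed.io"
URL="https://www.sitespeed.io/"
FLAGS="--visualMetrics --video"
echo docker run --rm -v "$(pwd)":/sitespeed.io "$IMAGE" $FLAGS "$URL"
```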
### Firefox window recorder
If you use Firefox you can use the built-in window recorder (instead of FFMpeg) to record the video. The Mozilla team uses it to make sure that recording the video doesn't add any overhead. Turn it on with <code>--firefox.windowRecorder</code>.
### Video quality
You can change the number of frames per second (default is 30) by using <code>--browsertime.videoParams.framerate</code>. If you have a large server with a lot of extra CPU you can increase the frame rate. You should probably not go below 30 since that will affect the precision of Visual Metrics.
@ -47,7 +50,7 @@ You can also change the constant rate factor (see [https://trac.ffmpeg.org/wiki/
The video will by default include a timer and show when the visual metrics happen. If you want the video without any text/timer you just add <code>--browsertime.videoParams.addTimer false</code>.
### Filmstrip parameters
When the video is analyzed with [VisualMetrics](https://github.com/WPO-Foundation/visualmetrics) screenshots for
When the video is analysed with [VisualMetrics](https://github.com/WPO-Foundation/visualmetrics), screenshots for
a filmstrip are also created. Since sitespeed.io 8.1 you can see them in the HTML.
![Page to page]({{site.baseurl}}/img/filmstrip-multiple-pages.jpg)

@ -76,9 +76,9 @@ docker run --privileged -v /dev/bus/usb:/dev/bus/usb -e START_ADB_SERVER=true --
A couple of things:
- You need to run the container in privileged mode to be able to mount USB ports
- Add `-e START_ADB_SERVER=true` to start the adb server inside the container (that makes it possible to talk to your phone)
- Add `-e START_ADB_SERVER=true` to start the ADB server inside the container (that makes it possible to talk to your phone)
- Make sure xvfb is turned off `--xvfb false`
- To ignore HTTPS certificate errors add `--chrome.args ignore-certificate-errors-spki-list=PhrPvGIaAMmd29hj8BCZOq096yj7uMpRNHpn5PDxI6I=` and `--chrome.args user-data-dir=/data/tmp/chrome` (they only work together).
If you want to drive multiple phones from omne instance, you can change the ports WebPageReplay is using (making sure they do not collide between phones). You can do that with
If you want to drive multiple phones from one instance, you can change the ports WebPageReplay is using (making sure they do not collide between phones). You can do that with
`-e WPR_HTTP_PORT=XXX` and `-e WPR_HTTPS_PORT=YYY`. The default ports are 8080 and 8081.
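A sketch of a second container with its own ports (the image name is a placeholder and 8082/8083 are arbitrary non-default example values):

```bash
# Sketch: give each phone's container its own WebPageReplay ports so they
# do not collide. IMAGE is a placeholder for the image used in the example above.
IMAGE="sitespeedio/sitespeed.io"
WPR_HTTP_PORT=8082
WPR_HTTPS_PORT=8083
echo docker run --privileged -v /dev/bus/usb:/dev/bus/usb \
  -e START_ADB_SERVER=true \
  -e WPR_HTTP_PORT=$WPR_HTTP_PORT -e WPR_HTTPS_PORT=$WPR_HTTPS_PORT \
  "$IMAGE"
```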

@ -22,7 +22,7 @@ We love [WebPageTest](https://www.webpagetest.org/) (WPT), so we have integrated
To use WPT you have a few options
- You can get an [API key](https://www.webpagetest.org/getkey.php) (sponsored by Akamai) for the public version
- Follow Pat Meenan's instructions on how to get [a private version up and running in 5 minutes](http://calendar.perfplanet.com/2014/webpagetest-private-instances-in-five-minutes/).
- Read how [WikiMedia setup an instance using AWS](https://wikitech.wikimedia.org/wiki/WebPageTest).
- Read how [Wikimedia setup an instance using AWS](https://wikitech.wikimedia.org/wiki/WebPageTest).
You should use it if you need to run tests in browsers that WebPageTest supports but sitespeed.io does not (Safari on iPhone and Microsoft browsers).
@ -37,7 +37,7 @@ By default we have the following configuration options:
--webpagetest.location The location for the test
--webpagetest.connectivity The connectivity for the test.
--webpagetest.runs The number of runs per URL.
--webpagetest.custom Execute arbitrary Javascript at the end of a test to collect custom metrics.
--webpagetest.custom Execute arbitrary JavaScript at the end of a test to collect custom metrics.
--webpagetest.script Direct WebPageTest script as a string
--webpagetest.file Path to a script file
~~~
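A hedged sketch of how those options can be combined (the location and connectivity values are examples, not recommendations):

```bash
# Sketch: compose the WebPageTest options listed above. The values are
# examples; use the locations and connectivity profiles of your instance.
WPT_FLAGS="--webpagetest.location Dulles:Chrome --webpagetest.connectivity Cable --webpagetest.runs 3"
echo sitespeed.io $WPT_FLAGS https://www.sitespeed.io/
```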

@ -90,7 +90,7 @@ throttle stop
```
## Add delay on your localhost (Linux only at the moment)
This is useful if you run [WebPageReplay](https://github.com/catapult-project/catapult/blob/master/web_page_replay_go/README.md) and want to add som latency to your tests.
This is useful if you run [WebPageReplay](https://github.com/catapult-project/catapult/blob/master/web_page_replay_go/README.md) and want to add some latency to your tests.
```bash
throttle --rtt 200 --localhost

@ -15,13 +15,13 @@ image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
## sitespeed.io
Sitespeed.io uses Browsertime, the Coach and PageXray to collect and generate the result, so looking at result pages from sitespeed.io will give you a idea of what you can get from all tools. Analyzing two pages using Chrome looks like this:
Sitespeed.io uses Browsertime, the Coach and PageXray to collect and generate the result, so looking at result pages from sitespeed.io will give you an idea of what you can get from all the tools. Analysing two pages using Chrome looks like this:
~~~bash
docker run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:{% include version/sitespeed.io.txt %} -b chrome --chrome.timeline https://en.wikipedia.org/wiki/Main_Page https://en.wikipedia.org/wiki/Barack_Obama
~~~
Gives the following [report](https://examples.sitespeed.io/8.2/en.wikipedia.org/2019-02-04-22-46-14/). The standard use case for sitespeed.io is to run it continously and send the data to Graphite/Grafana and create dashboards looking like this:
Gives the following [report](https://examples.sitespeed.io/8.2/en.wikipedia.org/2019-02-04-22-46-14/). The standard use case for sitespeed.io is to run it continuously and send the data to Graphite/Grafana and create dashboards looking like this:
[![Example dashboard]({{site.baseurl}}/img/examples/dashboard-examples.png)](https://dashboard.sitespeed.io/dashboard/db/page-summary?orgId=1)
{: .img-thumbnail}
@ -1100,7 +1100,7 @@ And it will generate a JSON that looks something like this:
},
"iframes": 0,
"localStorageSize": 0,
"metaDescription": "Sitespeed.io is an open source tool that helps you analyze and optimize your website speed and performance, based on performance best practices. Run it locally or use it in your continuous integration. Download or fork it on Github!",
"metaDescription": "Sitespeed.io is an open source tool that helps you analyse and optimize your website speed and performance, based on performance best practices. Run it locally or use it in your continuous integration. Download or fork it on GitHub!",
"pageContentSize": "120.9 kB",
"pageContentTypes": {
"css": {

New binary files (not shown):
- docs/img/compare-button.png (41 KiB)
- docs/img/pagecolumns.png (63 KiB)
- docs/img/summary-boxes.png (56 KiB)
- one more image (28 KiB, filename not shown)

@ -27,7 +27,7 @@ We always use [semantic versioning](http://semver.org/) when we do a release.
We take your online privacy really seriously: Our [documentation site](https://www.sitespeed.io/), our [dashboard](https://dashboard.sitespeed.io) and our [compare tool](https://compare.sitespeed.io) do not use any tracking software at all (no Google Analytics or any other tracking software). We never use tracking software. None of the sitespeed.io tools call home. You can read more about privacy at our [Privacy Policy](../privacy-policy/) page.
### Code of Conduct
When you create issues, do PRs, use our Slack channel or contact us on email, please follow our [Code Of Conduct](https://github.com/sitespeedio/sitespeed.io/blob/master/CODE_OF_CONDUCT.md).
When you create issues, do pull requests, use our Slack channel or contact us on email, please follow our [Code Of Conduct](https://github.com/sitespeedio/sitespeed.io/blob/master/CODE_OF_CONDUCT.md).
### Open Source
We release our software under the [MIT License](https://github.com/sitespeedio/sitespeed.io/blob/master/LICENSE) or [Apache License 2.0](https://github.com/sitespeedio/browsertime/blob/master/LICENSE). Please respect it and respect our work: we ask you not to change the logo or the attribution to the project. Please do this to pay respect to the many hours we have put into the project.
@ -40,7 +40,7 @@ We try to release things as soon as the functionality is tested and ready (we re
We highly rely on testing on [Travis-CI](https://travis-ci.org/) and our [own automatic testing](https://www.sitespeed.io/releasing-with-confidence/).
### Planning a new major
Usually we have one big function that we wanna release (for 6.0 that is HTML support for plugins). We then focus on finishing that functionality and try to squeeze in as many other good things as possbile.
Usually we have one big function that we wanna release (for 6.0 that is HTML support for plugins). We then focus on finishing that functionality and try to squeeze in as many other good things as possible.
## Sustainability
We've been releasing sitespeed.io since 2012 and we plan to continue doing it for a long time. At the moment we are a [three member team](../aboutus/) and we would love to get more people involved!
@ -60,7 +60,7 @@ It sometimes happens that we get contacted about issues privately via email or D
## Who uses sitespeed.io
We had over one million downloads so far and still counting. We have companies in the Alexa top 10 that uses sitespeed.io. We have students at universities that uses our tools for their publications. We have retailers that use it. Even our moms and dads uses it. We are pretty sure sitespeed.io will work out good for you too.
We have had over one million downloads so far and still counting. We have companies in the Alexa top 10 that use sitespeed.io. We have students at universities that use our tools for their publications. We have retailers that use it. Even our mothers and fathers use it. We are pretty sure sitespeed.io will work out well for you too.
With the old 3.X we got the following feedback in the [Toolsday](http://www.toolsday.io/) podcast:

@ -12,9 +12,9 @@ twitterdescription: Privacy policy sitespeed.io.
We take your online privacy really seriously: Our [documentation site](https://www.sitespeed.io/), our [dashboard](https://dashboard.sitespeed.io) and our [compare tool](https://compare.sitespeed.io) do not use any tracking software at all (no Google Analytics or any other tracking software). We never use tracking software. None of the sitespeed.io tools call home to us. We don't use any 3rd party requests on our website. We don't collect any data about you or your sitespeed.io usage.
But beware: Chrome and Firefox call home. We would love PRs and tips how to make sure browsers don't call home when you run your tests!
But beware: Chrome and Firefox call home. We would love pull requests and tips on how to make sure browsers don't call home when you run your tests!
Also be aware that Github/npm/Docker/Netlify can store information when you download sitespeed.io.
Also be aware that GitHub/npm/Docker/Netlify can store information when you download sitespeed.io.
Let us go through this thoroughly:
* Let's place the TOC here
@ -42,4 +42,4 @@ We don't collect any data, so we don't have any data to share.
We believe you can feel safe that we don't collect any data about you, nor have any information to share with companies that want to capitalize on what you do.
## Is the intended use likely to cause individuals to object or complain?
We don't think any individuals have any complains with this. But if you do, please let us know in a [Github issue](https://github.com/sitespeedio/sitespeed.io/issues/new).
We don't think any individuals have any complaints about this. But if you do, please let us know in a [GitHub issue](https://github.com/sitespeedio/sitespeed.io/issues/new).

@ -1,7 +1,7 @@
---
layout: default
title: Sitespeed.io - Release notes 1.3
description: Sitespeed.io is an open source tool that helps you analyze and optimize your website speed and performance, based on perfor
description: Sitespeed.io is an open source tool that helps you analyse and optimize your website speed and performance, based on perfor
author: Peter Hedenskog
keywords: sitespeed.io, release, release-notes, 1.3
nav:

@ -28,7 +28,7 @@ Sitespeed 1.5 was released the 6th of January 2013 with the following changes:
<h2>New functionality</h2>
<ul>
<li>Support for configuring the crawler (see the <i>dependencies/crawler.properties</i> file). You can now configure the number of crawler threads and connection/socket timeout. </li>
<li>Support for analyze behind proxy (thanks <a href="https://github.com/rhulse" target="_blank">rhulse</a> and <a href="https://github.com/samteeee" target="_blank">samteeee</a> for reporting and testing it). Configure it like <a href="{{site.baseurl}}/documentation/#proxy">this</a>.</li>
<li>Support for analysing behind a proxy (thanks <a href="https://github.com/rhulse" target="_blank">rhulse</a> and <a href="https://github.com/samteeee" target="_blank">samteeee</a> for reporting and testing it). Configure it like <a href="{{site.baseurl}}/documentation/#proxy">this</a>.</li>
<li>You can now see errors from the crawl. If you have internal links that return errors, an extra report file will be created.</li>
<li>You can now <a href="{{site.baseurl}}/documentation/#useragent">set</a> the user agent & view port.</li>
<li>You can now see the percentage of things when you hover on the summary page, for example how many percent of the requests are missing a far future expires header.</li>

@ -4,71 +4,92 @@ title: Sitespeed.io - Release notes 1.6
description: Sitespeed.io 1.6 release contains a really easy way to see which assets that should have longer cache times and some other small changes.
author: Peter Hedenskog
keywords: sitespeed.io, release, release-notes, 1.6
nav:
nav:
image: http://sitespeed.io/img/sitespeed-1.5-twitter.jpg
twitterdescription: The new release contains a really easy way to identify which assets that should have longer cache times, better handling of the SPOF rule & see the most used assets on the site.
twitterdescription: The new release contains a really easy way to identify which assets should have longer cache times, better handling of the SPOF rule & see the most used assets on the site.
---
<div class="page-header">
<h1>Sitespeed.io 1.6 release notes</h1>
</div>
<h3>Find the assets that should have longer cache time</h3>
<p>Wow, I really like this feature :) The idea was adopted from Steve Souders of functionality that's missing in the webperf tools that exist today.
It is actually three functions:
<ol>
<li>
On the detailed page level, you will for every asset see the time since it was last changed (now date minus last modification date) and
the actual cache time. This give you a nice view to see if assets are cached too short. Check this example: </p>
<p>Wow, I really like this feature :) The idea was adopted from Steve Souders: functionality that's missing in the
webperf tools that exist today.
It is actually three functions:
<ol>
<li>
On the detailed page level, you will for every asset see the time since it was last changed (now date minus last
modification date) and
the actual cache time. This gives you a nice view to see if assets are cached too short. Check this example: </p>
<img src="assets.jpg">
<p>The asset marked in red was
last changed 315 days ago, but the cache time is 0s. That's not good.</p>
<p>The asset marked in red was
last changed 315 days ago, but the cache time is 0s. That's not good.</p>
</li>
<li>
<p>You will also see a summary of all assets on an individual page, of the average time since last modification and the cache time.</p>
<img src="page.jpg">
<p>You will also see a summary of all assets on an individual page, of the average time since last modification and
the cache time.</p>
<img src="page.jpg">
</li>
<li>
<p>On the summary page, you will get the same information but for the whole site. In this example, the average cache time is 10 days (and the median is 0 seconds). The average time since last modification for all assets are 321 days (and the median 189). You will gain a lot if you change the cache headers for this site :) </p>
<p>On the summary page, you will get the same information but for the whole site. In this example, the average cache
time is 10 days (and the median is 0 seconds). The average time since last modification for all assets are 321 days
(and the median 189). You will gain a lot if you change the cache headers for this site :) </p>
<img src="summary.jpg">
<img src="summary.jpg">
</li>
</ol>
<h3>The most used assets on the site</h3>
<p>Introducing a new brand thing: see which assets are used the most across the analyzed pages. Thanks <a href="https://github.com/tobli" target="_blank">Tobias Lidskog</a> for the great idea! Here you will see the most (max 200) assets, and how many times they are used. This will give you a hint on which assets you will win the most to fine tune.</p>
<p>Introducing a brand new thing: see which assets are used the most across the analysed pages. Thanks <a
href="https://github.com/tobli" target="_blank">Tobias Lidskog</a> for the great idea! Here you will see the most used
(max 200) assets, and how many times they are used. This will give you a hint about which assets will give you the
biggest win if you fine tune them.</p>
<img src="newassets.jpg">
<h3>Better handling of the SPOF rule</h3>
<p>The SPOF rule now only report font face that are loaded from another top level domain (that seems more reasonable than report all font face files). Also the actual font file is reported (before only the css that included the font-face).
<p>The SPOF rule now only reports font faces that are loaded from another top level domain (that seems more reasonable
than reporting all font face files). Also the actual font file is reported (before, only the css that included the
font-face was reported).
<h3>Show requests per domain on individual page</h3>
<p>Summarize the requests by domain, so you easy can see how the requests are sharded between domains. Every webperf tool has it, now also sitespeed.</p>
<img src="bydomain.jpg">
<h3>Show requests per domain on individual page</h3>
<p>Summarize the requests by domain, so you can easily see how the requests are sharded between domains. Every webperf
tool has it, now also sitespeed.</p>
<img src="bydomain.jpg">
<h3>Configure Yslow backend & rules</h3>
<p>Now it is possible to add a parameter to the script to choose which ruleset or which Yslow file to use. This has two great wins:
If you clone your own version, you can test your own rules without changing the main script. It also makes it more flexible in the future,
opening up for multiple rulesets & rule implementations</p>
<h3>Configure Yslow backend & rules</h3>
<p>Now it is possible to add a parameter to the script to choose which ruleset or which Yslow file to use. This has
two great wins:
If you clone your own version, you can test your own rules without changing the main script. It also makes it more
flexible in the future,
opening up for multiple rulesets & rule implementations</p>
<h3>Time spent in backend vs frontend</h3>
<p>No rules attached to this yet, this is more if informational data. How much time do your site spend in backend & in frontend? You will see the information both on each individual page and on the summary page (the new blue informational color).
</p>
<img src="frontback.jpg">
<h3>Time spent in backend vs frontend</h3>
<p>No rules attached to this yet, this is more of informational data. How much time does your site spend in the backend & in
the frontend? You will see the information both on each individual page and on the summary page (the new blue
informational color).
</p>
<img src="frontback.jpg">
<h3>Adjusted rules on summary page</h3>
Adjusted the warning rules on the summary page, now a warning is up to the average number collected from <a href="http://www.httparchive.org" target="_blank">http://www.httparchive.org</a> (where applicable). For example: to get red (BAD!) on number of css files, you need to exceed the average number of css files, that the sites have that is collected by HTTP Archive.
<h3>Adjusted rules on summary page</h3>
Adjusted the warning rules on the summary page: a warning now goes up to the average number collected from <a
href="http://www.httparchive.org" target="_blank">http://www.httparchive.org</a> (where applicable). For example: to
get red (BAD!) on the number of css files, you need to exceed the average number of css files for the sites
collected by HTTP Archive.
<h3>New Yslow version</h3>
<p>Yslow has been upgraded with better error handling of faulty javascripts.</p>
<h3>New Yslow version</h3>
<p>Yslow has been upgraded with better error handling of faulty JavaScript.</p>
<h3>Use Java 1.6 or higher</h3>
<p>The Java code used by sitespeed is now compiled for Java 1.6, so 1.7 is no longer the requirement.</p>
<hr>
<p>
See the <a href="https://github.com/soulgalore/sitespeed.io/blob/master/CHANGELOG">changelog</a> for changes done in the past and the next <a href="https://github.com/soulgalore/sitespeed.io/issues?milestone=16&state=open">milestone</a> what will come in the next release.
</p>
<h3>Use Java 1.6 or higher</h3>
<p>The Java code used by sitespeed is now compiled for Java 1.6, so Java 1.7 is no longer a requirement.</p>
<hr>
<p>
See the <a href="https://github.com/soulgalore/sitespeed.io/blob/master/CHANGELOG">changelog</a> for changes done in
the past and the next <a
href="https://github.com/soulgalore/sitespeed.io/issues?milestone=16&state=open">milestone</a> for what will come in
the next release.
</p>

@ -55,7 +55,7 @@ twitterdescription: The 2.0 release is here! Test multiple sites with one go, co
<li>Simplified user agent by choosing between iphone, ipad or nexus and a real agent & viewport is set.</li>
<li>Output as CSV: Choose which column to output and always output ip, start url & date.</li>
<li>Fix for Windows-users that is having spaces in their path to Java.</li>
<li>Bug fix: URL:s that returns error (4XX-5XX and that sitespeed can't analyze) is now included in the JUnit xml.</li>
<li>Bug fix: URLs that return errors (4XX-5XX and that sitespeed can't analyse) are now included in the JUnit xml.</li>
<li>Bug fix: The JUnit script can now output files to a relative path.</li>
<li>Bug fix: User Agent is now correctly set.</li>
</ul>

@ -4,35 +4,49 @@ title: Sitespeed.io - Release notes 2.1
description: The 2.1 release makes it possible to break builds using Jenkins & Travis-CI if the browser timings hits the limits.
author: Peter Hedenskog
keywords: sitespeed.io, release, release-notes, 2.1
nav:
nav:
image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
twitterdescription: The 2.1 release makes it possible to break builds using Jenkins & Travis-CI if the Navigation Timing metrics hits the limits.
---
<div class="page-header">
<h1>Sitespeed.io 2.1 release notes</h1>
</div>
<p>This release focus on one thing: Break builds for <a href="http://jenkins-ci.org/" target="_blank">Jenkins</a> and <a href="https://travis-ci.org/" target="_blank">Travis-CI</a> if your browser
Navigation Timing API metrics doesn't meet your limits.</p>
<p>This release focuses on one thing: break builds for <a href="http://jenkins-ci.org/" target="_blank">Jenkins</a> and <a
href="https://travis-ci.org/" target="_blank">Travis-CI</a> if your browser
Navigation Timing API metrics don't meet your limits.</p>
<h2>Break builds if your pages are too slow</h2>
<p>Configure the limits for your pages and break builds if the timings are higher than your configuration. You can use the timings in the Navigation Timing API or create your own custom timings. Choose how many times you want to test the page and use the median or different percentile values. Read more about it <a href="{{site.baseurl}}/documentation/#junit">here</a>.</p>
<p>Configure the limits for your pages and break builds if the timings are higher than your configuration. You can use
the timings in the Navigation Timing API or create your own custom timings. Choose how many times you want to test the
page and use the median or different percentile values. Read more about it <a
href="{{site.baseurl}}/documentation/#junit">here</a>.</p>
<h2>Keep track of your timings</h2>
<p>Using Jenkins you can easily create graphs of your browser timings. They look like they are from the 1980's but it's quite nice info (btw nicer graphs will come in later releases)!</p>
<p class="text-center"><img class="img-responsive img-thumbnail" src="{{site.baseurl}}/documentation/domContentLoadedTime.png"/></p>
<p>Using Jenkins you can easily create graphs of your browser timings. They look like they are from the 1980's but it's
quite nice info (btw nicer graphs will come in later releases)!</p>
<p class="text-center"><img class="img-responsive img-thumbnail"
src="{{site.baseurl}}/documentation/domContentLoadedTime.png" /></p>
<h2>Use sitespeed.io in your projects with Travis-CI</h2>
<p>Drop the latest sitespeed.io release in your project, configure the Travis config file and break the build if it doesn't meet your limits. Check out the <a href="https://github.com/sitespeedio/travis-ci-example" target="_blank">example</a> project on GitHub. Now you can break builds if you break your web performance best practices rules or if you exceed your timing limits. Read <a href="{{site.baseurl}}/documentation/#travis">more</a> about the Travis-CI integration.</p>
<h2>Minor changes</h2>
<ul>
<li>Prepared for HTTP 2.0 rules & renamed the current rulesets. The new names: <em>sitespeed.io-desktop</em> & <em>sitespeed.io-mobile</em></li>
<li>Better error handling: the input parameters are written to the <em>error.log</em> file so it is easy to reproduce the error. Also centralized the error logging</li>
<li>Made it possible to analyse sites with non-signed certificates</li>
<li>Fine-tuned the logo</li>
<li>Bug fix: the crawler sometimes picked up URLs linking to content types other than HTML (e.g. links to an image)</li>
<li>Bug fix: the JUnit XSLT outputted timing metrics</li>
</ul>
<hr>
<p>
See the <a href="https://github.com/sitespeedio/sitespeed.io/blob/master/CHANGELOG">changelog</a> for changes done in the past.
</p>

@@ -16,7 +16,7 @@ twitterdescription: This release fixes a couple of small bugs.
<ul>
<li>Upgraded version of Browser Time (0.4) that again collects custom user measurements (will have better docs on this later)</li>
<li>Bug fix: User marks named with spaces broke the summary.xml</li>
<li>Bug fix: Sites with extremely (I mean extremely) far away last modification time on an asset could break an analysis</li>
</ul>
<hr>
<p>

@@ -24,7 +24,7 @@ twitterdescription: Sitespeed.io 2.2 release focus on packaging for Homebrew.
</ul>
And one bug fix:
<ul>
<li>The fix for removing invalid XML characters created by GA sometimes broke the analysis, now fixed (<a href="https://github.com/sitespeedio/sitespeed.io/issues/304" target="_blank">#304</a>)</li>
</ul>
<hr>
<p>

@@ -14,7 +14,7 @@ twitterdescription: Sitespeed.io 2.4 focus on adding extra value on the defaut s
<p>Here are the most important changes:</p>
<h2>Microsoft Windows</h2>
<p>With the new release sitespeed.io has been tested (and fixed) to work on Windows 8.1. You can run an analysis and fetch metrics, create the JUnit results and test multiple sites at once. Check out the timing metrics fetched using Internet Explorer 11:</p>
<p class="text-center">
<img src="time-metrics-ie-windows.jpg" class="img-thumbnail img-responsive" />
</p>

@@ -5,42 +5,59 @@ description: The new sitespeed.io has support for driving WebPageTest, send metr
author: Peter Hedenskog
keywords: sitespeed.io, release, release-notes, 3.0
nav:
image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
twitterdescription: The new sitespeed.io has support for driving WebPageTest, send metrics to Graphite and much more.
---
<div class="page-header">
<h1>Sitespeed.io 3.0 </h1>
</div>
<p>Yep, it took us almost half a year to migrate sitespeed.io to NodeJS and now it is ready for a release! The new version is faster, has a much richer feature list and is so much easier to maintain.</p>
<p>Here's a list of the new things:</p>
<ul>
<li><strong>NodeJS</strong> - We moved (almost) everything to NodeJS. We are still dependent on Java, because the (old) Java crawler is still used and we will try to remove it in the future. The new version is faster than the old one, much easier to change, and has better separation of concerns.</li>
<li><strong>Send metrics to Graphite</strong> - one of the top feature requests for sitespeed.io has been to collect data over time and graph it. Sitespeed.io now supports sending the metrics to Graphite. It can look something like this:
<img src="{{site.baseurl}}/documentation/grafana-timing-metrics.png" class="img-thumbnail" />
</li>
<li><strong>RUM SpeedIndex</strong> - We have rewritten Browsertime in NodeJS, and it now supports running whatever JavaScript snippet you want, meaning you can fetch whatever metric you want. It is not fully implemented yet in sitespeed.io, but with the current version it gives us Pat Meenan's <a href="https://github.com/WPO-Foundation/RUM-SpeedIndex">RUM SpeedIndex</a> and support for getting Resource Timing API metrics.</li>
<li><strong>PostTask</strong> - when sitespeed.io has analysed all pages and collected the metrics, you can run your own code to handle the result. This can be used to store the data in a database or create your own reports.</li>
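The "run whatever JavaScript snippet you want" idea can be sketched with a small custom metric. Everything below is illustrative (the function name and sample entries are made up); only performance.getEntriesByType('resource') is the real browser API a snippet like this would use:

```javascript
// Illustrative custom-metric snippet of the kind Browsertime can run in the
// browser. Given an array of Resource Timing entries (in a real run:
// window.performance.getEntriesByType('resource')), it returns the duration
// of the slowest resource on the page.
function slowestResourceDuration(entries) {
  // Each Resource Timing entry carries startTime and responseEnd in milliseconds.
  return entries.reduce(function(max, entry) {
    var duration = entry.responseEnd - entry.startTime;
    return duration > max ? duration : max;
  }, 0);
}

// In the browser the snippet would end with something like:
// return slowestResourceDuration(window.performance.getEntriesByType('resource'));
console.log(
  slowestResourceDuration([
    { startTime: 10, responseEnd: 120 },
    { startTime: 50, responseEnd: 90 }
  ])
); // 110
```

The returned number would then show up next to the built-in metrics in the report.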
<li><strong>Drive WebPageTest</strong> - you can now feed <a href="http://www.webpagetest.org/">WebPageTest</a> with URLs to test, collect the metrics and include them in your sitespeed.io report.
<img src="{{site.baseurl}}/documentation/wpt-summary.png" class="img-thumbnail" />
</li>
<li><strong>Google Page Speed Insights</strong> - collect the score and other data and show it in your sitespeed.io report.
<img src="{{site.baseurl}}/documentation/gpsi-detailed-page-info.png" class="img-thumbnail" />
</li>
<li><strong>Support for PhantomJS 2</strong> - yep we know, 2.0 is not released yet and we are really looking forward to it, so we added the functionality to use it already. What's extra cool with 2.0 (except that it is much faster) is that it supports the Navigation Timing API. You can use it by pointing out your binary with <code>--phantomjsPath</code></li>
<li><strong>Two new rules</strong> - check if the connection is closed and don't set private headers on static assets.</li>
<li><strong>Two new summary pages</strong> - slowest domains and hotlist (find troublesome assets fast)</li>
<li><strong>TAP</strong> - you can now generate <a href="http://testanything.org/">TAP</a> reports, making your CI tool break your build if you don't follow the web performance best practice rules</li>
<li><strong>Performance budget</strong> - we kind of had support for a performance budget before with the JUnit XML. With the new version it is much cleaner: <a href="{{site.baseurl}}/documentation/#perfBudget">specify</a> which values you will test against and run.</li>
<li><strong>Throttle the connection</strong> - You can throttle the connection when you are fetching timing metrics from the browser. Choose between:
<ul>
<li><strong>mobile3g</strong> - 1.6 Mbps/768 Kbps - 300 RTT</li>
<li><strong>mobile3gfast</strong> - 1.6 Mbps/768 Kbps - 150 RTT</li>
@@ -50,9 +67,14 @@ so much easier to maintain.</p>
</li>
</ul>
<p>Tobias and I (Peter) have worked really hard the last half year to make the 3.0 release happen. We hope that you will love it as much as we do.
There are still bugs out there; if you find them, please add an <a href="https://github.com/sitespeedio/sitespeed.io/issues">issue</a> at GitHub and solve it with a pull request :)</p>
<div class="note note-warning">The input parameters have changed since 2.5. Yep, we know it is bad practice; the reason is that the old version had really bad CLI handling where you could only fetch an input parameter with a character and we were running out of characters! To see the new parameters, run
<pre>sitespeed.io -h</pre>
</div>

@@ -12,7 +12,7 @@ twitterdescription: There is 3.2 making it easier to test multiple sites and run
# Sitespeed.io 3.2
We've been releasing many small 3.1.x releases and now it's time for 3.2. Here are three important new things:
* We have changed the way of fetching multiple sites. In the new version you can configure multiple sites by adding the parameter **sites** one time for each site. The main reason is that it is simpler and also makes it easier to run in our Docker container. Check the full [documentation]({{site.baseurl}}/documentation/configuration/#analyse-sites-and-benchmark).
* We have decreased the default size of the memory for the Java crawler. The old default (1024 MB) was good for crawling thousands of URLs, so if you are doing that today, add the parameter *--memory 1024* when you run the script. 1024 works badly on small machines so 256 is the new default.
* We upgraded to Browsertime 0.9.0 with support for configuring a *waitScript* and running custom JavaScript in the browser. What does it mean? You can now choose [when to end a run]({{site.baseurl}}/documentation/browsers/#choose-when-to-end-your-test) when fetching timings from the browser (catching events happening after loadEventEnd) and [collect custom metrics]({{site.baseurl}}/documentation/browsers/#custom-metrics). The custom metrics will automatically be presented in the result pages and sent to Graphite.
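Conceptually, a waitScript is a predicate that the browser polls until it returns true. A minimal sketch, assuming a hypothetical page-set "ready" marker (the function wrapper exists only to make the example self-contained):

```javascript
// Sketch of a page-complete predicate of the kind Browsertime polls until it
// returns true. The function wrapper and the customMarkerSet flag are made up
// for illustration; in the browser you would read window.performance.timing.
function isPageDone(timing, customMarkerSet) {
  // Done when the load event has fired and the page set its own "ready" marker.
  return timing.loadEventEnd > 0 && customMarkerSet === true;
}

// In the browser, the polled snippet would be something like:
// return window.performance.timing.loadEventEnd > 0 && window.myAppReady === true;
console.log(isPageDone({ loadEventEnd: 0 }, true)); // false: load event not fired yet
console.log(isPageDone({ loadEventEnd: 1573 }, true)); // true
```

Returning true late (after loadEventEnd) is what lets you catch timings that happen after the load event.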

docs/sponsor/index.md (new file)
@@ -0,0 +1,27 @@
---
layout: default
title: Sponsor sitespeed.io and related projects!
description: Did you know that you can help out making sitespeed.io better?
keywords: sponsor, help
nav: sponsor
image: https://www.sitespeed.io/img/sitespeed-2.0-twitter.png
twitterdescription: Sponsor sitespeed.io and related projects.
---
# Sponsor
{:.no_toc}
There are a couple of ways you can help us build sitespeed.io:
* If you are a user of sitespeed.io you can help us make the documentation better. Start by reading [how to contribute to the documentation](/documentation/sitespeed.io/developers/#contributing-to-the-documentation).
* If you are a developer, you can help us with bug fixes and pull requests. We have a [special page for you to start](/documentation/sitespeed.io/developers/).
* If you are a designer you can help us with the design of the result pages. Please [get in touch with us](https://github.com/sitespeedio/sitespeed.io/issues/new) and we can help you get started.
* If you are a company or organisation using one of the sitespeed.io tools, you can either set aside time for your employees to contribute back or you can [sponsor us with money](https://github.com/users/soulgalore/sponsorship). Money will secure that we can keep running sitespeed.io with the high quality we've been known for.
## Why should you sponsor?
Sponsorships will help us continue working on open source software, keep the documentation updated, and upgrade and add more test servers (we only run two at the moment) to make sure our releases are as bug free as possible.
At the moment we run two servers sponsored by Digital Ocean: one running Grafana/Graphite/InfluxDB and one instance that runs the latest commit in master for sitespeed.io and collects metrics using [https://github.com/sitespeedio/dashboard.sitespeed.io](https://github.com/sitespeedio/dashboard.sitespeed.io).
We want to run at least one more instance to be able to test changes directly in Browsertime. That would help us a lot in finding issues earlier. With sponsor money we could also choose to use other cloud providers, making it easier to deploy sitespeed.io.
Head over to [Peter's sponsor page](https://github.com/users/soulgalore/sponsorship) to sponsor sitespeed.io!

@@ -15,8 +15,8 @@ If you need help or support then:
1. Try `--help`! Most sitespeed.io tools support the `--help` parameter. Try `sitespeed.io --help` and see what you can do with the tool.
2. We try to document all the things, so read the [documentation](/documentation/).
3. We have a lot of documentation. Sometimes it's hard to find the thing you need, so try the [search](/search/).
4. If you don't find what you need, [search GitHub open/closed issues](https://github.com/search?q=org+sitespeedio&type=Issues). The project has been alive for many years and you can get a lot of help in closed GitHub issues.
5. If you still haven't found what you need, [join our Slack channel](https://sitespeedio.herokuapp.com/) and tell us about your problem.
If you have a bug, please read [How to do a reproducible bug report](https://www.sitespeed.io/documentation/sitespeed.io/bug-report/) and file your bug under the project you think matches. If you don't know where to report the bug, [create the issue on the main sitespeed.io repo](https://github.com/sitespeedio/sitespeed.io/issues/new) and we will move the issue to the right place!

@@ -132,7 +132,7 @@ module.exports.parseCommandLine = function parseCommandLine() {
alias: ['b', 'browser'],
default: browsertimeConfig.browser,
describe:
'Choose which browser to use when you test. Safari only works on Mac OS X and iOS 13 (or later). Chrome needs to be the same version as the currently installed ChromeDriver (check the changelog for what version that is currently used). Use --chrome.chromedriverPath to use another ChromeDriver version.',
choices: ['chrome', 'firefox', 'safari'],
group: 'Browser'
})
@@ -203,7 +203,7 @@ module.exports.parseCommandLine = function parseCommandLine() {
.option('browsertime.pageCompleteCheck', {
alias: 'pageCompleteCheck',
describe:
'Supply a JavaScript snippet that decides when the browser is finished loading the page and can start to collect metrics. The JavaScript snippet is repeatedly queried to see if the page has completed loading (indicated by the script returning true). Use it to fetch timings happening after the loadEventEnd.',
group: 'Browser'
})
.option('browsertime.pageCompleteWaitTime', {
@@ -253,19 +253,19 @@ module.exports.parseCommandLine = function parseCommandLine() {
.option('browsertime.preURL', {
alias: 'preURL',
describe:
'A URL that will be accessed first by the browser before the URL that you want to analyse. Use it to fill the cache.',
group: 'Browser'
})
.option('browsertime.preScript', {
alias: 'preScript',
describe:
'Selenium script(s) to run before you test your URL. They will run outside of the analysis phase. Note that --preScript can be passed multiple times.',
group: 'Browser'
})
.option('browsertime.postScript', {
alias: 'postScript',
describe:
'Selenium script(s) to run after you test your URL. They will run outside of the analysis phase. Note that --postScript can be passed multiple times.',
group: 'Browser'
})
.option('browsertime.delay', {
@@ -511,7 +511,7 @@ module.exports.parseCommandLine = function parseCommandLine() {
.option('browsertime.chrome.chromedriverPath', {
alias: 'chrome.chromedriverPath',
describe:
"Path to custom ChromeDriver binary. Make sure to use a ChromeDriver version that's compatible with " +
"the version of Chrome you're using",
group: 'Chrome'
})

@@ -167,8 +167,8 @@ class QueueHandler {
async run(sources) {
/*
setup - plugins chance to talk to each other or setup what they need.
url - URLs passed around to analyse
summarize - all analysis is finished and we can summarize all data
render - plugin store data to disk
final - is there anything you want to do before sitespeed.io exits? Upload files to S3?
*/
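The phase comment above can be illustrated with a rough plugin sketch. The plugin shape and message type names below are simplified assumptions for illustration, not the exact plugin API:

```javascript
// Rough sketch of a plugin following the phases described in the comment:
// it collects every URL it sees and logs a summary at the end.
const collectedUrls = [];

const myPlugin = {
  open(context, options) {
    // setup phase: talk to other plugins or prepare what you need
  },
  processMessage(message) {
    switch (message.type) {
      case 'url': // URLs passed around to analyse
        collectedUrls.push(message.url);
        break;
      case 'summarize': // all analysis is finished
        console.log(`Analysed ${collectedUrls.length} page(s)`);
        break;
    }
  },
  close(options, errors) {
    // final phase: e.g. upload result files to S3
  }
};

// Simulate the queue driving the plugin:
myPlugin.processMessage({ type: 'url', url: 'https://www.sitespeed.io' });
myPlugin.processMessage({ type: 'summarize' }); // logs: Analysed 1 page(s)
```

The real queue passes many more message types; the point is only that each plugin reacts to the phases in order.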

@@ -86,6 +86,7 @@ module.exports = {
'Content-Length': Buffer.byteLength(postData)
}
};
log.debug(postData);
// If Grafana is behind auth, use it!
if (options.grafana.auth) {
log.debug('Using auth for Grafana');
@@ -110,6 +111,9 @@
reject(e);
} else {
res.setEncoding('utf8');
res.on('data', function(chunk) {
  log.debug('Grafana response body: ' + chunk);
});
log.debug('Sent annotation to Grafana');
resolve();
}
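The request in this diff is assembled roughly like the sketch below: serialize the payload, then compute Content-Length from the byte length (not the string length). The payload field names and the /api/annotations path are assumptions for illustration, not the exact code from this file:

```javascript
// Sketch: build the JSON payload and http(s) request options for posting an
// annotation to Grafana, without actually sending anything. Field names are
// illustrative; check the Grafana HTTP API docs for the real schema.
function buildAnnotationRequest(host, port, text, tags) {
  const postData = JSON.stringify({ text: text, tags: tags, time: Date.now() });
  return {
    postData,
    options: {
      hostname: host,
      port: port,
      path: '/api/annotations',
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        // Byte length, not string length, so multi-byte characters are counted correctly
        'Content-Length': Buffer.byteLength(postData)
      }
    }
  };
}

const req = buildAnnotationRequest('localhost', 3000, 'Deploy finished', ['sitespeed.io']);
console.log(req.options.headers['Content-Length'] === Buffer.byteLength(req.postData)); // true
```

Separating "build the request" from "send the request" also makes this easy to unit test.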

@@ -119,18 +119,18 @@ class HTMLBuilder {
this.summary.assets = {
pageTitle: `Most used assets for ${name} tested at ${timestamp}`,
pageDescription: 'A list of the most used assets for the analysed pages.'
};
this.summary.toplist = {
pageTitle: `Largest assets by type for ${name} tested at ${timestamp}`,
pageDescription: 'A list of the largest assets for the analysed pages.'
};
if (options.multi && options.html.showScript) {
const scripts = await getScripts(options);
this.summary.scripts = {
pageTitle: `Scripts used to run the analysis`,
pageDescription: '',
scripts
};

@@ -108,28 +108,28 @@ block content
p This new metric is developed by Bas Schouten at Mozilla which uses edge detection to calculate the amount of "content" that is visible on each frame. It was primarily designed for two main purposes: Have a good metric to measure the amount of text that is visible. Design a metric that is not easily fooled by the pop up splash/login screens that commonly occur at the end of a page load. These can often disturb the speed index numbers since the last frame that is being used as reference is not accurate.
h5(id='FirstVisualChange') First Visual Change
p The time when something for the first time is painted within the viewport. Calculated by analysing a video.
h5(id='VisualComplete85') Visual Complete 85%
p When the page is visually complete to 85% (or more). Calculated by analysing a video.
h5(id='VisualComplete95') Visual Complete 95%
p When the page is visually complete to 95% (or more). Calculated by analysing a video.
h5(id='VisualComplete99') Visual Complete 99%
p When the page is visually complete to 99% (or more). Calculated by analysing a video.
h5(id='LastVisualChange') Last Visual Change
p The time when something for the last time changes within the viewport. Calculated by analysing a video.
h5(id='LargestImage') Largest Image
p The time when the largest image within the viewport has finished painting at its final position on the screen. Calculated by analysing a video.
h5(id='Heading') Heading
p The time when the largest H1 heading within the viewport has finished painting at its final position on the screen. Calculated by analysing a video.
h5(id='Logo') Logo
p The time when the logo (configured with --scriptInput.visualElements) within the viewport has finished painting at its final position on the screen. Calculated by analysing a video.
h5(id='rumSpeedIndex') RUM-SpeedIndex
p A browser version also created by Pat Meenan that calculates the SpeedIndex measurements using Resource Timings. It is not as perfect as Speed Index but a good start.
@@ -140,8 +140,8 @@ block content
h5(id='cssSizePerPage') CSS transfer size per page
p The transfer size of CSS per page, meaning if the CSS is sent compressed the unpacked size is larger.
h5(id='jsSizePerPage') JavaScript transfer size per page
p The transfer size of JavaScript per page.
h5(id='fontSizePerPage') Font transfer size per page
p The transfer size of fonts per page.
@@ -158,8 +158,8 @@ block content
h5(id='cssRequestsPerPage') CSS requests per page
p The number of CSS requests on a page.
h5(id='jsRequestsPerPage') JavaScript requests per page
p The number of JavaScript requests on a page.
h5(id='fontRequestsPerPage') Font requests per page
p The number of font requests on a page.

@@ -1,7 +1,7 @@
- const profile = options.mobile ? 'mobile' : 'desktop'
- const connectivity = options.browsertime.connectivity.alias || options.browsertime.connectivity.profile
h2.url #{h.plural(noPages,'page')} analysed for #{options.name ? options.name : h.short(context.name, 30)}
p.small Tested #{timestamp} using #{h.cap(options.browsertime.browser)} for
| #{ h.get(options, 'browsertime.chrome.android.package') ? h.get(options, 'browsertime.chrome.android.package') + ' ': ''}
| #{options.preURL ? 'preURL ' + h.short(options.preURL, 60) + ' ' : ''}

@@ -58,7 +58,7 @@ module.exports = function(dataCollector, errors, resultUrls, name, options) {
};
let summaryText =
`${h.plural(dataCollector.getURLs().length, 'page')} analysed for ${h.short(
name,
30
)} ` +

@@ -26,7 +26,7 @@ function getHeader(context, options) {
const noPages = options.urls.length;
return drab(
[
`${h.plural(noPages, 'page')} analysed for ${h.short(context.name, 30)} `,
`(${h.plural(options.browsertime.iterations, 'run')}, `,
`${h.cap(options.browsertime.browser)}/${
options.mobile ? 'mobile' : 'desktop'

@@ -31,4 +31,6 @@ bin/sitespeed.js --version | tr -d '\n' > docs/_includes/version/sitespeed.io.tx
# Generate the help for the docs
bin/sitespeed.js --help > docs/documentation/sitespeed.io/configuration/config.md
# Generate friendly names from code
node release/friendlyNames.js > docs/documentation/sitespeed.io/configure-html/friendlynames.md
node release/friendlyNamesBudget.js > docs/documentation/sitespeed.io/performance-budget/friendlynames.md

release/friendlyNames.js (new file)
@@ -0,0 +1,11 @@
'use strict';
const friendly = require('../lib/support/friendlynames');
for (let key of Object.keys(friendly)) {
for (let tool of Object.keys(friendly[key])) {
for (let metric of Object.keys(friendly[key][tool])) {
console.log(tool + '.' + metric);
}
}
}

release/friendlyNamesBudget.js (new file)
@@ -0,0 +1,17 @@
'use strict';
const friendly = require('../lib/support/friendlynames');
console.log('{');
console.log(' "budget": {');
for (let key of Object.keys(friendly)) {
for (let tool of Object.keys(friendly[key])) {
console.log(' "' + tool + '": {');
for (let metric of Object.keys(friendly[key][tool])) {
console.log(' "' + metric + '": limit,');
}
console.log(' },');
}
}
console.log(' }');
console.log('}');