Using a substring match when you are actually only trying to match one
specific item is a smell: it's unclear what's intended, and it forces the
reader to stop and think.
This is for parsing diffs like:
# via importlib-metadata
+
+ # The following packages were excluded from the output:
+ # setuptools
Otherwise further parsing breaks because it is looking for a package
name on the line.
see 433074c681, adf39e9f72, db83a82fe1 & 78a74a2e2a
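A minimal sketch of the stricter check (the comment string is taken from
the diff above; the surrounding loop is illustrative):

    EXCLUDED_HEADER = "# The following packages were excluded from the output:"

    for line in diff_lines:
        if line.lstrip("+ ").rstrip() == EXCLUDED_HEADER:
            continue  # a known comment header, no package name to parse here
        ...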
Now that we are using pip-compile to build requirements lock files,
instead of pip freeze, these things shouldn't be showing up in the
results.
Looks like `typeguard` was dropped in a192a3d9c7d0e87 "initial compile based bump"
I guess that was being pulled in by pip freeze too.
I find these comments useful to show why packages are included in the
final compiled requirements files.
Required a small change to `recompile_requirements` to ignore these new
comment lines (it was saying there was a new requirement with an empty
name that needed a changelog entry).
The addition of the `os.path.realpath()` call is to clean up the paths of
the requirements files in the annotations, e.g.:
check-manifest==0.49
- # via -r scripts/dev/../../misc/requirements/requirements-check-manifest.txt-raw
+ # via -r misc/requirements/requirements-check-manifest.txt-raw
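For reference, a tiny sketch of what realpath() does to those paths (the
variable names here are made up):

    import os.path

    raw = "scripts/dev/../../misc/requirements/requirements-check-manifest.txt-raw"
    # realpath() collapses the "../.." detour (and makes the path absolute),
    # so the "# via -r" annotations stop carrying the script-relative prefix.
    clean = os.path.realpath(raw)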
For the pip freeze backend, pip is being passed `--all` when the tox
requirements file is being processed so that pip and setuptools are
included in the requirements file. This was added in 922dca039b
for reasons I haven't fully grokked.
This commit adds similar behaviour for the pip compile backend via:
1. don't add the --no-emit-package args for the tox requirements file
2. add pip and setuptools to the tox requirements file
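Roughly, that special case might look like this (a sketch; the variable
names are made up, but `--no-emit-package` is the real uv flag):

    cmd = ["uv", "pip", "compile", raw_filename, "--output-file", out_filename]
    if name != "tox":
        # Keep pip and setuptools out of every compiled file except tox's,
        # mirroring the `--all` special case of the pip freeze backend.
        for pkg in ("pip", "setuptools"):
            cmd += ["--no-emit-package", pkg]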
It seems that pip and setuptools aren't even requirements of tox, but
they are being included in the compiled requirements file anyway. Why
aren't they included in the raw requirements file? I don't know, but from
what I can figure it's not going to harm anything to have them in there.
This is to match the `pip freeze` requirements compilation method. It's
not clear to me if we actually want this behaviour or not.
It seems `pip freeze` will exclude dependencies of itself: https://pip.pypa.io/en/stable/cli/pip_freeze/#cmdoption-all
Even if there are other packages installed that depend on those
dependencies.
`uv pip compile`, and now the original `pip-compile`, have both decided
to include setuptools in generated requirements files:
https://github.com/astral-sh/uv/issues/1353
https://github.com/jazzband/pip-tools/issues/989#issuecomment-1134985118
So I'm not sure if we have a reason for going against this here or if
they were just being excluded because that's what pip freeze does.
Hopefully we can drop this commit and use the default behaviour in the
future. For now, while I'm trying to prove out the new backend, it's here
to make the diff of generated files more precise.
The message prefix used to identify a pip compile comment was taken from
these examples:
# The following packages were excluded from the output:
# setuptools
# The following packages are considered to be unsafe in a requirements file:
# setuptools==41.4.0 # via protobuf
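So the check can be a straightforward prefix match (sketch; the constant
name is made up, the prefix itself comes from the examples above):

    PIP_COMPILE_COMMENT_PREFIX = "# The following packages"

    def is_pip_compile_comment(line):
        """Check for the block-header comments that pip compile emits."""
        return line.strip().startswith(PIP_COMPILE_COMMENT_PREFIX)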
Since I was looking at how hard it would be to support using pip-compile
to recompute requirements, I was worried that I would break the markers
we support in the raw requirements files.
This adds two tests:
* disabled_test_markers_real_pip_and_venv
A test that sets up a local Python package index and runs the real
pip/uv binaries. It works (and it was fun to set up a package index
factory) but has a couple of downsides:
1. it hits the real PyPI index, which is not great in a test. This can
be prevented by removing the EXTRA bit from the INDEX_URL env vars
and pre-downloading pip, wheel, setuptools and uv to the test repo
(and adding index.htmls for them). But because of the next item I'm
not sure it's worth the effort of keeping this test around
2. it's slow because of having to download packages from the internet
(even if we pre-cached them it would still have to download them, I
guess we could include a zip of fixed/vendored versions, but that
will probably require maintenance over time) and because it calls
venv to make new virtual environments, which isn't the quickest
operation (maybe `uv venv` is quicker?)
* test_markers_in_comments
Tests just the comment reading and line manipulation logic. Could be
made a bit more pure by just calling read_comments() and
convert_line(), but that wouldn't test the "add" marker.
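For reference, the purer variant might have looked roughly like this (a
sketch; the real signatures of read_comments() and convert_line() may
differ, and the expected result here is only illustrative):

    def test_ignore_marker(tmp_path):
        raw = tmp_path / "requirements.txt-raw"
        raw.write_text("#@ ignore: PyQt5\n")
        with raw.open() as fobj:
            comments = read_comments(fobj)
        # An ignored package should come back commented out, not as a
        # bare requirement line.
        assert convert_line("PyQt5==5.15.2", comments).startswith("#")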
There was a fair bit of duplicate code, so I've pulled the "take a
list of requirements, give me a new one" logic out into separate methods.
The stuff around parsing the output can stay common, thankfully!
Even if we drop one of the methods this level of abstraction is probably
fine to keep.
The CHANGELOG_URLS variable is imported by the
`./scripts/dev/misc_checks.py changelog-urls` check. Since there is a
bit of churn in package name case in this change (and a while back when
a bunch of packages switched between underscores and hyphens), update
this variable at import time so that the checker will be looking at
normalized names too.
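A minimal sketch of that import-time normalization (assuming the usual
PEP 503 rules; the helper name is made up):

    import re

    def _norm(name):
        # PEP 503 normalization: runs of "-", "_" and "." become a single
        # "-", and everything is lowercased.
        return re.sub(r"[-_.]+", "-", name).lower()

    CHANGELOG_URLS = {_norm(name): url for name, url in CHANGELOG_URLS.items()}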
In #8269 we saw some packages leaking into the pip freeze output that we
don't want in the requirements files (if setuptools isn't supposed to be
in there, why should its dependencies be?).
I've also greatly missed the comments that pip-compile puts in
requirements.txt files explaining where indirect dependencies come from.
So I took the opportunity to switch our tooling for updating and parsing
new dependencies and their versions to use pip-compile instead of
`pip install -U && pip freeze`.
It turned out to not be a big change because the pip freeze output is
largely compatible with requirements files (we are writing directly to
one after all). So we just need to switch what commands we are running
and triage any compatibility issues.
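In terms of commands, it's roughly this swap (simplified; the real
invocations pass a few more arguments):

    # Before: install everything into a venv, then snapshot it.
    old_cmd = ["pip", "install", "-U", "-r", "requirements.txt-raw"]  # then: pip freeze
    # After: resolve straight from the raw file into the compiled file.
    new_cmd = ["uv", "pip", "compile", "requirements.txt-raw",
               "--output-file", "requirements.txt"]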
I chose `uv pip compile` instead of `pip-compile` because I like what uv
is doing (fast, aiming for compatibility, consolidating a confusing
ecosystem into a single tool). But pip-compile/pip-tools should do the same
job if we want to go that route.
The biggest differences are:
* outputs normalized names: this generally results in a larger diff than
otherwise (I propose we go through and regenerate all the requirements
files in one go, and maybe add that commit to a blame ignore file) and
requires our comparison logic to deal with normalized package names
everywhere
* setuptools and pip not included in the tox requirements file - not sure
  what to do about that yet, should they be in the .txt-raw file?
TODO:
* remove support for pip_args?
* change markers in raw files to lower case? Ideally don't require that:
  if a human can write them in any case and a robot can normalize, we
  should do that. But if there are patterns with `._` in them as part of
  names, how do we handle that?
* pull out similar bits of `build_requirements*` methods
* maybe make it so you can pass `requirements=None` to `init_venv` to
make it not install stuff, install uv, do the uv invocation, gate
all that behind a `--method="freeze|compile"` arg?
* add pip and setuptools to tox requirements file?
* basename requirements file names so they don't have
  `script_path/../../` in them in the annotated version
* add tests for the markers (with inputs of differing cases) to make
sure they all still work
* update changelog check used in CI to normalize names too
The normal PDF.js build only officially supports the latest Chromium, so things
might break every once in a while with QtWebEngine (e.g. #8199, #7335).
Let's instead bundle and recommend the legacy build.
Closes #8332
Closes #7721 (reworded)
Also see #7135
This re-enables the pylint too-many-positional-arguments check for the main
application code. It's still disabled for tests because that's how you pull in
pytest fixtures, and I don't think we want to push people into being creative
with fixtures just to get around that.
When functions are called with many positional arguments the reader has to do
a bit of heavy lifting to figure out in what position a value is being passed,
and it's easier to make mistakes. So I would like to encourage using keyword
arguments for long argument lists.
I've set the `max-positional-arguments` limit to a completely arbitrary 7, up
from a completely arbitrary 5, because there were many more violations with
the lower limit. If something like 99% of our functions fit under 7 it's
probably fine.
Regarding the exceptions:
* objreg.register: I grepped it and it looks like everything is only passing
the first two args as positional already, lucky!
* `_get_color_percentage`: only one usage of it, but I was in "add directive
comment" mode
* update_3rdparty.py: only one usage, already using kwargs
* pyqtProperty: idk
* commands.py: "it's complicated". Many methods in this file map to commands
  used in qutebrowser's command mode. In that case it's usual for them to be
  called as flags, rather than positionally. But it could be complicated to
  wade into that, and having one file excluded isn't so bad.
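The exceptions mostly use pylint's normal inline directive, e.g. (the
signature here is schematic, not the real one):

    def _get_color_percentage(  # pylint: disable=too-many-positional-arguments
            x1, y1, z1, x2, y2, z2, percent):
        ...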
I just searched for qt5 and deleted stuff. EZ.
Will leave on a branch for a bit and see if I feel like testing this at
all, otherwise maybe leave this stuff in here and just not call it.
Not 100% sure that we need to remove all this stuff when we just want
the CI to go green. But tbh if we don't need to make Qt5 releases then
we don't need it. Better to be bold and pull it out than have to work
around it in the future. And we can always revert the commit.
This commit takes a screenshot of the active browser window when an
end2end test fails. When running on CI a zip file of screenshots will be
attached to the run summary as an artifact. When run locally screenshots
will be left in $TMPDIR/pytest-screenshots/.
The screenshot is of the Xvfb screen that the tests are running under.
If there are multiple windows open it will likely only show the active
window because a) we aren't running with a window manager b) the Xvfb
display is, by default, the same size as the browser window.
I'm not sure if Xvfb is used on the Windows runs in CI. We could fall
back to trying to take screenshots if not running under Xvfb but I'm a
bit wary of an automatic feature that takes screenshots of people's
desktops when running locally. Even if they just go to /tmp/ it might be
surprising. We can change it later if it turns out we need to try to
take screenshots in more cases.
I'm using Pillow's ImageGrab, the same as pyvirtualdisplay.smartdisplay. I'm
getting the display number from the pytest-xvfb plugin and formatting it
appropriately (pyvirtualdisplay has an already formatted one which is used by
the smartdisplay, but that's not accessible).
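The grab boils down to something like this (a sketch; exactly how the
display number comes out of pytest-xvfb is paraphrased here):

    from PIL import ImageGrab

    display_num = xvfb.display  # from the pytest-xvfb plugin
    img = ImageGrab.grab(xdisplay=f":{display_num}")
    img.save(screenshot_dir / "failing-test-name.png")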
Pillow is now a requirement for running the tests. I thought about making
it gracefully optional but I'm not sure how to inform the user with
a warning from pytest, or if they would even want one. Maybe we could
add a config option to allow not taking screenshots?
I had to bump the colordepth config for pytest-xvfb, otherwise Pillow
complained that the default 16-bit color depth wasn't supported, as it
only supports 24-bit, see
https://github.com/python-pillow/Pillow/blob/1138ea5370cbda5eb328ec9498c314d376c81265/src/display.c#L898
I'm saving screenshots to a temp dir because I don't want to put them in
my workdir when running locally. I want to clear the directory for each
run so that you don't get confused by looking at old images. I'm not
100% sure about the lifecycle of the process classes though. Eg if we
have two processes they might both try to create the output directory.
I'm using pytest.session.stash to save the directory so perhaps the
lifecycle of the stash will handle that? Not sure.
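The stash usage is roughly (a sketch; the key name is made up):

    import pathlib
    import shutil
    import tempfile

    import pytest

    screenshot_dir_key = pytest.StashKey[pathlib.Path]()

    def pytest_sessionstart(session):
        path = pathlib.Path(tempfile.gettempdir()) / "pytest-screenshots"
        shutil.rmtree(path, ignore_errors=True)  # clear images from old runs
        path.mkdir()
        session.stash[screenshot_dir_key] = path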
Ideally the images would be uploaded somewhere where we could click
through and open them in the browser without having to download a zip
file, but I'm not sure how to achieve that.
It would be nice to print in the test logs that a screenshot was saved
and where to. Just so you could copy paste the filename instead of
having to match the sanitized filename against failing test names. But I
don't know how to log stuff from this stage in the pytest lifecycle.
TODO:
* I'm not sure that the screenshot captures the whole browser window?
  Maybe the browser window is bigger than the X11 display?
Closes: #7625