This reduces the surface area of our `Kernel` monkeypatch and removes
the need to `include Kernel` in a bunch of modules.
While we're here, also move `Kernel#require?` to `Homebrew` and fully
scope the calls to it.
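For reference, a minimal sketch of the rough shape this takes, assuming the existing `require?` semantics (try to load a file, return a boolean instead of raising); the body here is condensed and illustrative, not the actual implementation:

```ruby
module Homebrew
  # Tries to `require` the given path, returning true on success and
  # false (instead of raising LoadError) when the file isn't available.
  def self.require?(path)
    require path
    true
  rescue LoadError
    false
  end
end

# Callers are then fully scoped rather than relying on a Kernel monkeypatch:
Homebrew.require? "some/optional_file" # hypothetical path, for illustration only
```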
We need to `source utils/helpers.sh` before calling `odie`. We also
don't need to `source utils/wrapper.sh` again here, because we are
already in `utils/wrapper.sh`.
`HOMEBREW_FORCE_BREW_WRAPPER` can be used as a security/compliance
feature, but allowing it to be disabled by setting
`HOMEBREW_NO_FORCE_BREW_WRAPPER` leaves a pretty large hole in it that
allows it to be sidestepped.
Let's fix that by actually checking the path of the process that called
`brew` and then verifying that that path matches the configured value of
`HOMEBREW_FORCE_BREW_WRAPPER`.
`ensure_formula_installed!` requires the `Formula` class to be loaded
before it is called in order to work properly.
Let's guarantee that instead by implementing it as an instance method of
the `Formula` class.
See discussion at #20358.
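A minimal sketch of the shape of this change; the method name, signature, and body are illustrative rather than the actual implementation, and it assumes Homebrew's `ohai` and `Formula#latest_version_installed?` helpers:

```ruby
class Formula
  # As an instance method, the Formula class is necessarily loaded by the
  # time this can be called, so no separate require is needed.
  def ensure_installed!(reason: "")
    return self if latest_version_installed?

    ohai "Installing `#{full_name}`#{" for #{reason}" unless reason.empty?}..."
    # ... delegate to the existing installation logic here ...
    self
  end
end
```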
- move some things out of `extend` that don't really fit there e.g.
`Module`s that are included but not doing any
overriding/monkeypatching
- move some code into `extend/os` to fix all remaining
`rubocop:todo Homebrew/MoveToExtendOS`s
- remove some unneeded `bundle` skipper code that doesn't really make
sense given our current bottling strategy
- extract some `Pathname` extensions into separate files under
  `extend/pathname`
- move an `ENV` `Kernel` extension into `kernel.rb`
- `odeprecate` a seemingly unused backwards compatibility method
- move `readline_nonblock` from a monkeypatch to a
  `ReadlineNonblock.read` method as it's only used in one place (see the
  sketch after this list)
- fix up a link in documentation
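As referenced above, a rough sketch of what a dedicated `ReadlineNonblock.read` helper might look like (illustrative; the real method body may differ):

```ruby
module ReadlineNonblock
  # Reads a line from an IO without blocking, returning whatever was
  # available if the pipe has no complete line yet.
  def self.read(pipe)
    line = +""
    begin
      line << pipe.read_nonblock(1) while line[-1] != "\n"
    rescue IO::WaitReadable, EOFError
      # Return whatever was read so far rather than blocking or raising.
    end
    line.freeze
  end
end
```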
- Remove a bunch of non-actionable/unnecessary noise in GitHub Actions
CI.
- Limit number of threads used to generate analytics API data to avoid
reproducible failures producing errors and requiring retries.
- Move to Debian Old Stable for testing non-system `glibc`.
- Remove unneeded core taps/updates.
- Improve naming of CI jobs to clarify purpose, i.e. we're testing that
  things work on Linux, not Ubuntu specifically.
- Reduce the number of dedicated non-online/non-generic Linux
  `brew tests` jobs from 3 to 1.
Co-authored-by: Rylan Polster <rslpolster@gmail.com>
This was more painful than I expected but will allow `brew bundle sh`
and `brew sh` to use the user's configuration while still using our
custom prompt for Bash and ZSH.
The `eol_data` method uses `@eol_data["#{product}/#{cycle}"] ||=`,
which can unnecessarily allow a duplicate API call if the same
product/cycle combination was previously tried but returned a 404
(Not Found) response. In this scenario, the value would be `nil` but
the existing logic doesn't check whether this is a missing key or a
`nil` value. If the key is present, we shouldn't make the same
request again.
This updates the method to return the existing value if the key
exists, which effectively prevents duplicate fetches. This new logic
only modifies `@eol_data` if `curl` is successful, so it does allow
the request to be made again if it failed before.
That said, this shouldn't normally be an issue and this is mostly
about refactoring the method to allow for nicer code organization.
This approach reduces the `begin` block to only the `JSON.parse` call,
which allows us to use `return unless result.status.success?` (this
previously led to a RuboCop offense because it was called within a
`begin` block).
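Putting the two points above together, a condensed sketch of the resulting shape (the URL and exact helper calls are illustrative; it assumes `Utils::Curl.curl_output` and an `@eol_data` hash initialized elsewhere):

```ruby
def self.eol_data(product, cycle)
  key = "#{product}/#{cycle}"
  # Bail out if we've already looked this combination up, even when the
  # previous lookup stored nil (a bare `||=` can't tell those apart).
  return @eol_data[key] if @eol_data.key?(key)

  result = Utils::Curl.curl_output(
    "--location", "https://endoflife.date/api/v1/products/#{product}/releases/#{cycle}"
  )
  return unless result.status.success?

  begin
    @eol_data[key] = JSON.parse(result.stdout)
  rescue JSON::ParserError
    # 404s may come back as HTML, which fails to parse; leave the cache
    # alone so the lookup can be retried later.
    nil
  end
end
```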
The endoflife.date API has been updated, so this modifies the URL in
`SharedAudits.eol_data` to use the new endpoint and updates the
related logic in `FormulaAuditor.audit_eol` to work with the new
response format. Specifically, there is now an `isEol` boolean value
and the EOL date is found in `eolFrom`.
One wrinkle of the new setup is that 404 responses now return HTML
content even if the request includes an `Accept: application/json`
header. This handles such responses by catching
`JSON::ParserError`, but ideally we would parse the response headers
and use `Utils::Curl.http_status_ok?` to check for a good response
status before trying to parse the response body as JSON.
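A rough sketch of how the audit side might read the new fields (hypothetical problem wording; assumes `SharedAudits.eol_data` and the auditor's `problem` helper):

```ruby
metadata = SharedAudits.eol_data(product, cycle)
return if metadata.nil?

# The updated API exposes an explicit boolean plus the EOL date.
if metadata["isEol"] == true
  problem "Product #{product} #{cycle} is EOL since #{metadata["eolFrom"]}"
end
```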