We use this on the main core repository, so it makes sense to use it here as well. It should reduce the overall flakiness of the discourse_docker build.
- Remove manual database creation, and instead promote the discourse user to a postgres SUPERUSER (see the sketch after this list). This means that `db:drop` and `db:create` can be run in the dev image, just like in other local development environments. Besides simplifying things, it fixes turbo_rspec, which was previously impossible in the docker dev environment because `discourse` lacked permission to create the parallel databases.
- Stop pre-migrating the test database in the dev image. It adds build time and image size, and doesn't actually help, because core's `bin/docker/boot_dev` script overwrites the container's postgres directory with a volume mount.
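The promotion amounts to roughly the following (a sketch; the actual template runs this as a pups exec step):
```
# Give the discourse role superuser rights so rails db tasks can
# drop/create databases, including turbo_rspec's parallel ones.
su postgres -c "psql -c 'ALTER USER discourse WITH SUPERUSER;'"

# Afterwards the standard tasks work unmodified inside the dev image:
bundle exec rake db:drop db:create
```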
* FIX: Variable isn't being set
* DEV: Goodbye tabs, they're probably feeling lonely anyway
* FIX: Variable is never used. No need for command substitution
* FIX: The likelihood of knowing the PID for the script before execution is exceptionally low
Add tab completion for launcher and discourse-setup.
For launcher, it offers commands (e.g., rebuild, start) and then yml files from the containers directory. After that, switches (e.g., --run-image) are offered. (Switches are only offered in the final position, sorry.)
discourse-setup offers switches (e.g., --two-container).
discourse-doctor has no command line arguments.
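A minimal sketch of the launcher completion idea (command and switch lists are abbreviated; the shipped script is more thorough):
```
_launcher() {
  local cur=${COMP_WORDS[COMP_CWORD]}
  case $COMP_CWORD in
    1) # first word: a launcher command
       COMPREPLY=($(compgen -W "start stop restart destroy enter logs bootstrap rebuild cleanup" -- "$cur")) ;;
    2) # second word: a yml file from the containers directory
       COMPREPLY=($(compgen -W "$(ls containers/*.yml 2>/dev/null | xargs -n1 basename | sed 's/\.yml$//')" -- "$cur")) ;;
    *) # afterwards: switches, offered in the final position only
       COMPREPLY=($(compgen -W "--skip-prereqs --docker-args --run-image" -- "$cur")) ;;
  esac
}
complete -F _launcher launcher
```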
When an initial setup has already been run, fall back to just checking for
the postgres socket, rather than failing outright because the init script has
already been run.
This allows 'configure' steps to be re-run in standalone cases,
e.g. `launcher2 configure app && launcher2 configure app`.
Current version: fails because it's missing the install_postgres file.
With this PR: checks for the psql socket, and builds.
Running something like `launcher2 start app && launcher2 configure app` also
prints a more accurate error message, "postgres already running stop container".
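Roughly, the fallback behaviour looks like this (paths and messages are illustrative, not the exact launcher2 code):
```
if [ ! -e /root/install_postgres ]; then
  # the init script has already run: probe the socket instead of failing
  if [ -S /shared/postgres_run/.s.PGSQL.5432 ]; then
    echo "postgres already running stop container"
    exit 1
  fi
  # no live socket: postgres was installed previously but isn't running,
  # so the configure/build steps can proceed
fi
```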
We started installing Chromium because there is no Linux ARM support
for Chrome yet. However, running tests on Chromium has proven to be
extra challenging. For example, upgrading to Debian 12 causes our
JavaScript tests to fail on Chromium but not on Chrome.
Chrome for Testing was built specifically for web app testing so let's
follow Google's recommendation.
We need to upgrade to bookworm because bullseye is EOL. This commit, once merged, will push the following images to Docker Hub:
1. `discourse/base:slim-bookworm`
2. `discourse/base:release-bookworm`
3. `discourse/discourse_test:slim-bookworm`
4. `discourse/discourse_test:slim-browsers-bookworm`
5. `discourse/discourse_test:release-bookworm`
This fixes a regression introduced in
bbefa1e5f3: we cannot configure
the default bundler job count when building the image, because the number of
cores used to build the image can differ from the number of cores
on the machine running the image.
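One way to picture the fix (a sketch, not the exact commit): derive the job count from the cores available at runtime rather than freezing it into the image.
```
# Size bundler's parallelism to the machine actually doing the work.
bundle install --jobs "$(nproc)"
```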
A number of people have reported hitting yarn timeouts on low-spec DO droplets, which cause the build to fail. This should provide a little more leeway.
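The kind of change involved (the timeout value here is illustrative):
```
# Give yarn more time before declaring a registry request dead (ms).
yarn install --network-timeout 600000
```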
This commit adds `sharedscripts`, which ensures that our `postrotate`
script is only run once even if multiple log files in `/shared/log/rails/`
are rotated. If `sharedscripts` is not specified, we send `sv 1
unicorn` once per rotated log file, which has resulted in weird
behaviours like our Sidekiq process hanging indefinitely.
Note the following from the manpage for logrotate:
```
sharedscripts
Normally, prerotate and postrotate scripts are run for each log which is rotated and the absolute path to the log file is passed as first argument to the script. That means a single script may be run multiple times for log file entries which match multiple files (such as the /var/log/news/* example). If sharedscripts is specified, the scripts are only run once, no matter how many logs match the wildcarded pattern, and whole pattern is passed to them.
```
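For illustration, a stanza along these lines (simplified, not the shipped config) fires `postrotate` once for the whole glob:
```
/shared/log/rails/*.log {
  daily
  rotate 7
  missingok
  sharedscripts
  postrotate
    sv 1 unicorn
  endscript
}
```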
* DEV: Updated vanilla.template.yml
* updated vanilla.template.yml to make the migration process more straightforward
* removed branch pull
* implemented suggested changes
* added suggested changes
* added before_code hook to set remote fork
* updated with suggested changes
Bumping Ruby to 3.3.1 to pull in the latest performance and memory
improvements made to YJIT. On Discourse hosting services with Ruby 3.3.1
+ YJIT, we saw an estimated 10-20% improvement in time spent executing
Ruby code over Ruby 3.2.3 + YJIT.
In order to download the free MaxMind GeoLite2 databases, an account ID
and license key are required going forward. This commit updates
`discourse-setup` to start prompting the user to provide the MaxMind
Account ID first before asking for the MaxMind license key. If the user
does not provide the Account ID, the script will not prompt for the
license key as we assume the user has opted out.
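The prompt flow is roughly (variable names are illustrative, not the exact script):
```
read -p "MaxMind Account ID? (leave blank to opt out): " maxmind_account_id
if [ -n "$maxmind_account_id" ]; then
  read -p "MaxMind License key?: " maxmind_license_key
else
  echo "No account ID provided; assuming you have opted out of MaxMind."
fi
```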
We are aware that we don't have a reliable way to test changes to
the `discourse-setup` script, but that is where things stand at this
point in time. We intend to invest resources in improving this in the
future, but now is not the time.
This commit adds a `ruby_3_3` job to our GitHub workflow which releases
a `discourse/base:release-ruby-3.3.1` Docker image to allow us to test
Ruby 3.3.1 before eventually changing to that version as the default.
This commit does 2 things:
1. Adds a new yarn hook to replace the npm mirror before `yarn install`.
2. Modifies `web.china.template.yml` to add more mirror sources.
Below is an explanation of these modifications:
- The GitHub proxy added in `web.china.template.yml` has existed in China for many years, and its repository https://github.com/hunshcn/gh-proxy has 6k+ stars, which gives some assurance of its security and stability.
- The NPM mirror site added in `web.china.template.yml` is maintained by Alibaba Group, one of the largest Internet companies in China.
- Modified the Gem mirror in `web.china.template.yml` to the mirror provided by Tsinghua University, one of the top universities in China.
- sed is used to rewrite the `yarn.lock` file because `yarn install --frozen-lockfile` is used for installation below; if the URLs in the lockfile are not replaced, the NPM mirror will not take effect (see the sketch below).
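The lockfile rewrite is essentially the following (mirror URL shown as an example):
```
# Point every resolution in yarn.lock at the mirror so the subsequent
# `yarn install --frozen-lockfile` actually uses it.
sed -i 's|https://registry.yarnpkg.com|https://registry.npmmirror.com|g' yarn.lock
```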
After applying these modifications, I successfully installed Discourse on a Tencent Cloud server in China. No more network problems.
This commit updates Ruby to 3.2.4 which includes security fixes for the
following CVEs:
* CVE-2024-27282: Arbitrary memory address read vulnerability with Regex search
* CVE-2024-27281: RCE vulnerability with .rdoc_options in RDoc
* CVE-2024-27280: Buffer overread vulnerability in StringIO
* Add tags to pups templates
The purpose here is to allow greater flexibility in how and where
docker images are built and run. It achieves this by breaking up
build steps into distinct run steps which can be saved along the way.
Customizable base images may then be prebuilt with as many batteries
included as possible, with zero environment setup so those images
can then be configured at a later stage.
Add the ability to run partial pups configuration via tags:
- `build`: build base image with no db - ember build.
- `precompile`: precompile stage that requires postgres and redis.
- `migrate`: run migration tasks.
- `db`: start bundled postgres/redis, if included.
Adds a create_db script in the postgres template for creating the db on the fly,
called from the unicorn run command below.
Updates the unicorn run command with 3 env flags:
- `CREATE_DB_ON_BOOT`: if 1, creates the base db schema, allowing db creation to be deferred.
- `MIGRATE_ON_BOOT`: if 1, runs db:migrate, allowing db migration to be deferred.
- `PRECOMPILE_ON_BOOT`: if 1, precompiles assets (without the ember build).
`PRECOMPILE_ON_BOOT` initially defaults to 1 in base builds (no tags);
during the `precompile` build step, the default is updated to 0.
All other new flags default to 0 (off). With these three flags, we're now able
to ship and start a container from a base image, and it'll be able to bootstrap
a blank database (see the sketch below).
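A sketch of how the run command consumes the flags (the script shape is an assumption; the rake tasks are the standard ones):
```
if [ "$CREATE_DB_ON_BOOT" = "1" ]; then
  create_db                            # script added by the postgres template
fi
if [ "$MIGRATE_ON_BOOT" = "1" ]; then
  bundle exec rake db:migrate
fi
if [ "$PRECOMPILE_ON_BOOT" = "1" ]; then
  bundle exec rake assets:precompile   # without the ember build
fi
exec bundle exec unicorn -c config/unicorn.conf.rb
```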
Updates the hook to start redis in before_db_migrate, as the before_code hook
is not guaranteed to fire before migrate tasks when pups is filtered by tags.
Removing the -p from the "nc" command.
Reason:
```
# nc -w 4 -l -p 80
nc: cannot use -p and -l
```
Without -p it works just fine.
From the manpage:
> `-l` Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host. It is an error to use this option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored.
Chrome isn’t available for aarch64 yet, but Chromium (which is basically
the same browser without the proprietary bits from Google) is shipped by
Debian. They also ship a Chrome driver compiled for aarch64.
This patch adds Chromium to our images without removing Chrome on
x86_64, allowing a smooth transition to using Chromium only.
Chrome isn’t available yet for aarch64, but Chromium (which is basically
the same browser without the proprietary bits from Google) is shipped by
Debian. They also ship a Chrome driver compiled for aarch64.
By using Chromium instead of Chrome, we unify how we do things
regardless of the architecture used in the generated image.
Why this change?
Now that we can efficiently build Docker images targeted at `linux/arm64`,
we will start to release images for `linux/arm64` in the same way we do
for `linux/amd64` images.
Images released for `linux/amd64` are tagged as follows:
1. discourse/base:2.0.\<datetime\>-slim
2. discourse/base:slim
3. discourse/base:2.0.\<datetime\>
4. discourse/base:release
For `linux/arm64`, the images are tagged as follows:
1. discourse/base:2.0.\<datetime\>-slim-arm64
2. discourse/base:slim-arm64
3. discourse/base:2.0.\<datetime\>-arm64
4. discourse/base:release-arm64
5. discourse/base:aarch64 (For backwards compatibility)
For `linux/arm64`, we unfortunately cannot install Chrome because Chrome
does not currently release binaries for that arch. Therefore, we install
Chromium, which Chrome is based on, and also install the chromedriver
binary for `linux/arm64` released by the Electron project.
Why this change?
We have been given access to GitHub's private beta of ARM hosted
runners. Switching to ARM runners should drastically speed up the time
required for us to build our ARM image.
What does this change do?
1. Switch to using GitHub's ARM hosted runners.
2. Build the release image for arm64 as well. We previously only built the
slim image because building the release image through emulation was
far too slow.
3. Update `bundle` in `release.Dockerfile` to install gems in parallel
based on the number of cores automatically, instead of hardcoding it to
4 jobs.
While x64 is still on jemalloc 3.6, arm64 uses the latest jemalloc.
They have different names for the library file, so we now use the
symlink to automatically load whichever one is available.
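Illustratively (Debian multiarch paths), preloading via the unversioned symlink sidesteps the soname difference between jemalloc 3.6 (`libjemalloc.so.1`) and newer releases (`libjemalloc.so.2`):
```
export LD_PRELOAD=/usr/lib/$(uname -m)-linux-gnu/libjemalloc.so
```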