Compare commits


54 commits

Author SHA1 Message Date
nexy7574
6aae5ca525
Reset README back to HEAD 2025-04-19 20:26:59 +01:00
Jade Ellis
b9525ae91f ci: Run builtin registry whenever secret is available 2025-04-19 12:13:47 -07:00
Jade Ellis
da7bee5305 ci: Try invert condition for branch prefix 2025-04-19 12:13:47 -07:00
Jade Ellis
f96ce20427 ci: Enable buildx caching 2025-04-19 12:13:47 -07:00
Jade Ellis
77a62215ea chore: Update git links 2025-04-19 12:13:47 -07:00
Jade Ellis
b6420e7def ci: Use dind label 2025-04-19 12:13:47 -07:00
Jade Ellis
e508c9f9cf ci: Remove non-functional cache steps 2025-04-19 12:13:47 -07:00
Jade Ellis
ee71cf2008 fix: Disable buildkit caching
This is for Tom's runners, whilst they're having network issues
2025-04-19 12:13:47 -07:00
Jade Ellis
8269a2fd1c ci: Only prefix non-default branches
AKA, tag image:main as the latest commit
2025-04-19 12:13:47 -07:00
Jade Ellis
3d27cce047 ci: Limit concurrency
Mainly to prevent runners from getting bogged down
2025-04-19 12:13:47 -07:00
Jade Ellis
7763b2479b fix: Replace rust cache with direct cache use, as Rust is not installed on CI image 2025-04-19 12:13:47 -07:00
Jade Ellis
cd24a72078 ci: Prefix branch builds with branch- 2025-04-19 12:13:47 -07:00
Jade Ellis
9298c53a40 fix: Hardcode matrix 2025-04-19 12:13:47 -07:00
Jade Ellis
25378a4668 fix: Use forgejo patched artifact actions 2025-04-19 12:13:47 -07:00
Jade Ellis
fdef36c47f fix: Allow specifying user & password for builtin registry 2025-04-19 12:13:47 -07:00
Jade Ellis
406f689301 build: Use hacks for a cached actions build
- Use cache dance for github actions caching
- Use timelord hack to avoid bad cache invalidation
2025-04-19 12:13:47 -07:00
Jade Ellis
7185d71827 feat: Docker images built with Forgejo Actions 2025-04-19 12:13:47 -07:00
Jade Ellis
ff83e0c5b2 chore: Change branding string to continuwuity 2025-04-19 12:13:47 -07:00
Jade Ellis
b26247e31e fix: Disambiguate appservices in lazy loading context
In the previous commit, app services would all appear to be the same
device when accessing the same user. This sets the device ID to be the
appservice ID when available to avoid possible clobbering.
2025-04-19 12:13:47 -07:00
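The gist of that change, as a minimal Rust sketch (names and signature are assumptions, not conduwuit's actual code):

fn lazy_load_key(appservice_id: Option<&str>, sender_device: Option<&str>) -> Option<String> {
    // Prefer the appservice's registration ID so two appservices accessing the
    // same user keep separate lazy-loading state; otherwise use the device ID.
    appservice_id.or(sender_device).map(str::to_owned)
}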
nexy7574
814f321cab fix: Do not panic when sender_device is None in /messages route
The device ID is not always present when the appservice is the client.
This was causing 500 errors for some users, as appservices can lazy
load from `/messages`.

Fixes #738

Co-authored-by: Jade Ellis <jade@ellis.link>
2025-04-19 12:13:47 -07:00
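The shape of that fix, sketched in Rust (assumed names; the real /messages handler differs):

fn messages_lazy_load(sender_device: Option<String>) {
    // Before (sketch): sender_device.expect(...) panicked when an appservice
    // was the client, surfacing as a 500 error. Keep the Option and branch instead.
    if let Some(device_id) = sender_device.as_deref() {
        let _ = device_id; // per-device lazy-loading bookkeeping would go here
    }
}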
Tom Foster
904fa3c869 Add Forgejo CI workflow for Cloudflare Pages 2025-04-19 12:13:47 -07:00
Tom Foster
b04a9469ae Add Matrix .well-known files 2025-04-19 12:13:47 -07:00
Tom Foster
6fbff4af6f Update mdBook config for continuwuity 2025-04-19 12:13:47 -07:00
Jade Ellis
dede3323f6 chore: Add words to cspell dictionary 2025-04-19 12:13:47 -07:00
Jade Ellis
a21d96d336 chore: Update Olivia Lee in mailmap 2025-04-19 12:13:47 -07:00
Jade Ellis
f5622881b3 chore: Add Timo Kösters to the mailmap 2025-04-19 12:13:47 -07:00
Jade Ellis
a869f06239 chore: Add mailmap 2025-04-19 12:13:47 -07:00
Jade Ellis
20c2091e5c ci: Delete all old CI files
Part of #753
2025-04-19 12:13:47 -07:00
Jade Ellis
04f7e26927 docs: Phrasing 2025-04-19 12:12:24 -07:00
Jade Ellis
a9eba0e117 docs: New readme
It's a continuwuation!
2025-04-19 12:12:08 -07:00
Jacob Taylor
eb2949d6d7 Fix spaces rooms list load error.
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-16 20:58:48 -07:00
Jacob Taylor
de7842b470 Fix spaces limit/max_depth bug in response.
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-16 20:58:48 -07:00
Peter Gervai
937c5fc86a config: rocksdb_compaction help was inverted
probably an old remnant of an inverted option.
2025-04-15 08:09:21 -07:00
Jason Volk
79268bda1e Remove the default sentry endpoint.
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 07:28:14 +00:00
Jason Volk
edb245a2ba Remove the updates service.
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 07:19:16 +00:00
Jason Volk
ae2abab4c9 Remove some workflows.
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 07:18:26 +00:00
Jason Volk
b9fd88b65a Update README [ci skip]
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 06:56:16 +00:00
Jason Volk
4094cd52ee reduce large stack frames 2025-04-13 05:13:00 +00:00
Jason Volk
aa80e952d1 mitigate additional debuginfo expansions
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:13:00 +00:00
Jason Volk
b0203818db add missing feature-projections between intra-workspace crates
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:13:00 +00:00
Jason Volk
1fd881bda5 eliminate Arc impl for trait Event
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:13:00 +00:00
Jason Volk
5b322561ce simplify database backup interface related
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:13:00 +00:00
Jason Volk
54fb48a983 replace admin command branches returning RoomMessageEventContent
rename admin Command back to Context

Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:13:00 +00:00
Jason Volk
d82f00c31c misc async optimizations; macro reformatting
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:13:00 +00:00
Jason Volk
cd4e6b61a9 improve appservice service async interfaces
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:13:00 +00:00
Jason Volk
04d7f7f626 remove box ids from admin room command arguments
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:13:00 +00:00
Jason Volk
d9616c625d propagate better message from RustlsConfig load error. (#734)
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:13:00 +00:00
Jason Volk
75aadd5c6a slightly optimize user directory search loop
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:13:00 +00:00
Jason Volk
e0508958b7 increase snake sync asynchronicity
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-13 05:12:52 +00:00
Jason Volk
ccf10c6b47 modest cleanup of snake sync service related
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-09 03:40:44 +00:00
Jason Volk
fd33f9aa79 modernize state_res w/ stream extensions
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-09 03:40:44 +00:00
Jason Volk
7c9d3f7e07 add ReadyEq future extension
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-09 03:40:44 +00:00
Jason Volk
7cf61b5b7b add ready_find() stream extension
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-09 03:40:44 +00:00
Jason Volk
ce6e5e48de relax Send requirement on some drier stream extensions
Signed-off-by: Jason Volk <jason@zemos.net>
2025-04-09 03:40:44 +00:00
104 changed files with 2738 additions and 3755 deletions


@@ -1,9 +1,9 @@
 # Local build and dev artifacts
-target
-tests
+target/
 
 # Docker files
 Dockerfile*
+docker/
 
 # IDE files
 .vscode


@@ -0,0 +1,68 @@
name: Documentation
on:
pull_request:
push:
branches:
- main
tags:
- "v*"
workflow_dispatch:
concurrency:
group: "pages-${{ github.ref }}"
cancel-in-progress: true
jobs:
docs:
name: Build and Deploy Documentation
runs-on: not-nexy
steps:
- name: Sync repository
uses: https://github.com/actions/checkout@v4
with:
persist-credentials: false
fetch-depth: 0
- name: Setup mdBook
uses: https://github.com/peaceiris/actions-mdbook@v2
with:
mdbook-version: "latest"
- name: Build mdbook
run: mdbook build
- name: Prepare static files for deployment
run: |
mkdir -p ./public/.well-known/matrix
# Copy the Matrix .well-known files
cp ./docs/static/server ./public/.well-known/matrix/server
cp ./docs/static/client ./public/.well-known/matrix/client
# Copy the custom headers file
cp ./docs/static/_headers ./public/_headers
echo "Copied .well-known files and _headers to ./public"
- name: Setup Node.js
uses: https://github.com/actions/setup-node@v4
with:
node-version: 20
- name: Install dependencies
run: npm install --save-dev wrangler@latest
- name: Deploy to Cloudflare Pages (Production)
if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/main' }}
uses: https://github.com/cloudflare/wrangler-action@v3
with:
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
command: pages deploy ./public --branch=main --commit-dirty=true --project-name=${{ vars.CLOUDFLARE_PROJECT_NAME }}
- name: Deploy to Cloudflare Pages (Preview)
if: ${{ github.event_name != 'push' || github.ref != 'refs/heads/main' }}
uses: https://github.com/cloudflare/wrangler-action@v3
with:
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
command: pages deploy ./public --branch=${{ github.head_ref }} --commit-dirty=true --project-name=${{ vars.CLOUDFLARE_PROJECT_NAME }}


@@ -0,0 +1,222 @@
name: Release Docker Image
concurrency:
group: "release-image-${{ github.ref }}"
on:
pull_request:
push:
paths-ignore:
- '.gitlab-ci.yml'
- '.gitignore'
- 'renovate.json'
- 'debian/**'
- 'docker/**'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
env:
BUILTIN_REGISTRY: forgejo.ellis.link
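# Enabled when registry credentials are configured, or when the event does not come from a fork pull request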
BUILTIN_REGISTRY_ENABLED: "${{ ((vars.BUILTIN_REGISTRY_USER && secrets.BUILTIN_REGISTRY_PASSWORD) || (github.event_name != 'pull_request' || github.event.pull_request.head.repo.fork == false)) && 'true' || 'false' }}"
jobs:
define-variables:
runs-on: ubuntu-latest
outputs:
images: ${{ steps.var.outputs.images }}
images_list: ${{ steps.var.outputs.images_list }}
build_matrix: ${{ steps.var.outputs.build_matrix }}
steps:
- name: Setting variables
uses: https://github.com/actions/github-script@v7
id: var
with:
script: |
const githubRepo = '${{ github.repository }}'.toLowerCase()
const repoId = githubRepo.split('/')[1]
core.setOutput('github_repository', githubRepo)
const builtinImage = '${{ env.BUILTIN_REGISTRY }}/' + githubRepo
let images = []
if (process.env.BUILTIN_REGISTRY_ENABLED === "true") {
images.push(builtinImage)
}
core.setOutput('images', images.join("\n"))
core.setOutput('images_list', images.join(","))
const platforms = ['linux/amd64', 'linux/arm64']
core.setOutput('build_matrix', JSON.stringify({
platform: platforms,
include: platforms.map(platform => { return {
platform,
slug: platform.replace('/', '-')
}})
}))
build-image:
runs-on: dind
container: ghcr.io/catthehacker/ubuntu:act-latest
needs: define-variables
permissions:
contents: read
packages: write
attestations: write
id-token: write
strategy:
matrix: {
"include": [
{
"platform": "linux/amd64",
"slug": "linux-amd64"
},
{
"platform": "linux/arm64",
"slug": "linux-arm64"
}
],
"platform": [
"linux/amd64",
"linux/arm64"
]
}
steps:
- name: Echo strategy
run: echo '${{ toJSON(fromJSON(needs.define-variables.outputs.build_matrix)) }}'
- name: Echo matrix
run: echo '${{ toJSON(matrix) }}'
- name: Checkout repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
# Uses the `docker/login-action` action to log in to the Container registry using the account and password that will publish the packages. Once published, the packages are scoped to the account defined here.
- name: Login to builtin registry
uses: docker/login-action@v3
with:
registry: ${{ env.BUILTIN_REGISTRY }}
username: ${{ vars.BUILTIN_REGISTRY_USER || github.actor }}
password: ${{ secrets.BUILTIN_REGISTRY_PASSWORD || secrets.GITHUB_TOKEN }}
# This step uses [docker/metadata-action](https://github.com/docker/metadata-action#about) to extract tags and labels that will be applied to the specified image. The `id` "meta" allows the output of this step to be referenced in a subsequent step. The `images` value provides the base name for the tags and labels.
- name: Extract metadata (labels, annotations) for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ${{needs.define-variables.outputs.images}}
# default labels & annotations: https://github.com/docker/metadata-action/blob/master/src/meta.ts#L509
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: manifest,index
# This step uses the `docker/build-push-action` action to build the image, based on your repository's `Dockerfile`. If the build succeeds, it pushes the image to GitHub Packages.
# It uses the `context` parameter to define the build's context as the set of files located in the specified path. For more information, see "[Usage](https://github.com/docker/build-push-action#usage)" in the README of the `docker/build-push-action` repository.
# It uses the `tags` and `labels` parameters to tag and label the image with the output from the "meta" step.
# It will not push images generated from a pull request
- name: Get short git commit SHA
id: sha
run: |
calculatedSha=$(git rev-parse --short ${{ github.sha }})
echo "COMMIT_SHORT_SHA=$calculatedSha" >> $GITHUB_ENV
- name: Get Git commit timestamps
run: echo "TIMESTAMP=$(git log -1 --pretty=%ct)" >> $GITHUB_ENV
- name: Build and push Docker image by digest
id: build
uses: docker/build-push-action@v6
with:
context: .
file: "docker/Dockerfile"
build-args: |
CONDUWUIT_VERSION_EXTRA=${{ env.COMMIT_SHORT_SHA }}
platforms: ${{ matrix.platform }}
labels: ${{ steps.meta.outputs.labels }}
annotations: ${{ steps.meta.outputs.annotations }}
cache-from: type=gha
cache-to: type=gha,mode=max
sbom: true
outputs: type=image,"name=${{ needs.define-variables.outputs.images_list }}",push-by-digest=true,name-canonical=true,push=true
env:
SOURCE_DATE_EPOCH: ${{ env.TIMESTAMP }}
# For publishing multi-platform manifests
- name: Export digest
run: |
mkdir -p /tmp/digests
digest="${{ steps.build.outputs.digest }}"
touch "/tmp/digests/${digest#sha256:}"
- name: Upload digest
uses: forgejo/upload-artifact@v4
with:
name: digests-${{ matrix.slug }}
path: /tmp/digests/*
if-no-files-found: error
retention-days: 1
merge:
runs-on: dind
container: ghcr.io/catthehacker/ubuntu:act-latest
needs: [define-variables, build-image]
steps:
- name: Download digests
uses: forgejo/download-artifact@v4
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
# Uses the `docker/login-action` action to log in to the Container registry using the account and password that will publish the packages. Once published, the packages are scoped to the account defined here.
- name: Login to builtin registry
uses: docker/login-action@v3
with:
registry: ${{ env.BUILTIN_REGISTRY }}
username: ${{ vars.BUILTIN_REGISTRY_USER || github.actor }}
password: ${{ secrets.BUILTIN_REGISTRY_PASSWORD || secrets.GITHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Extract metadata (tags) for Docker
id: meta
uses: docker/metadata-action@v5
with:
tags: |
type=semver,pattern=v{{version}}
type=semver,pattern=v{{major}}.{{minor}},enable=${{ !startsWith(github.ref, 'refs/tags/v0.0.') }}
type=semver,pattern=v{{major}},enable=${{ !startsWith(github.ref, 'refs/tags/v0.') }}
type=ref,event=branch,prefix=${{ format('refs/heads/{0}', github.event.repository.default_branch) != github.ref && 'branch-' || '' }}
type=ref,event=pr
type=sha,format=long
images: ${{needs.define-variables.outputs.images}}
# default labels & annotations: https://github.com/docker/metadata-action/blob/master/src/meta.ts#L509
env:
DOCKER_METADATA_ANNOTATIONS_LEVELS: index
- name: Create manifest list and push
working-directory: /tmp/digests
env:
IMAGES: ${{needs.define-variables.outputs.images}}
shell: bash
run: |
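# The build jobs pushed per-platform images by digest only; stitch those
# digests into one multi-arch manifest list per image, tagged and annotated.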
IFS=$'\n'
IMAGES_LIST=($IMAGES)
ANNOTATIONS_LIST=($DOCKER_METADATA_OUTPUT_ANNOTATIONS)
TAGS_LIST=($DOCKER_METADATA_OUTPUT_TAGS)
for REPO in "${IMAGES_LIST[@]}"; do
docker buildx imagetools create \
$(for tag in "${TAGS_LIST[@]}"; do echo "--tag"; echo "$tag"; done) \
$(for annotation in "${ANNOTATIONS_LIST[@]}"; do echo "--annotation"; echo "$annotation"; done) \
$(for reference in *; do printf "$REPO@sha256:%s\n" $reference; done)
done
- name: Inspect image
env:
IMAGES: ${{needs.define-variables.outputs.images}}
shell: bash
run: |
IMAGES_LIST=($IMAGES)
for REPO in "${IMAGES_LIST[@]}"; do
docker buildx imagetools inspect $REPO:${{ steps.meta.outputs.version }}
done


@@ -1,717 +0,0 @@
name: CI and Artifacts
on:
pull_request:
push:
paths-ignore:
- '.gitlab-ci.yml'
- '.gitignore'
- 'renovate.json'
- 'debian/**'
- 'docker/**'
branches:
- main
tags:
- '*'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
concurrency:
group: ${{ github.head_ref || github.ref_name }}
cancel-in-progress: true
env:
# Required to make some things output color
TERM: ansi
# Publishing to my nix binary cache
ATTIC_TOKEN: ${{ secrets.ATTIC_TOKEN }}
# conduwuit.cachix.org
CACHIX_AUTH_TOKEN: ${{ secrets.CACHIX_AUTH_TOKEN }}
# Just in case incremental is still being set to true, speeds up CI
CARGO_INCREMENTAL: 0
# Custom nix binary cache if fork is being used
ATTIC_ENDPOINT: ${{ vars.ATTIC_ENDPOINT }}
ATTIC_PUBLIC_KEY: ${{ vars.ATTIC_PUBLIC_KEY }}
# Get error output from nix that we can actually use, and use our binary caches for the earlier CI steps
NIX_CONFIG: |
show-trace = true
extra-substituters = https://attic.kennel.juneis.dog/conduwuit https://attic.kennel.juneis.dog/conduit https://conduwuit.cachix.org https://aseipp-nix-cache.freetls.fastly.net https://nix-community.cachix.org https://crane.cachix.org
extra-trusted-public-keys = conduit:eEKoUwlQGDdYmAI/Q/0slVlegqh/QmAvQd7HBSm21Wk= conduwuit:BbycGUgTISsltcmH0qNjFR9dbrQNYgdIAcmViSGoVTE= conduwuit.cachix.org-1:MFRm6jcnfTf0jSAbmvLfhO3KBMt4px+1xaereWXp8Xg= nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs= crane.cachix.org-1:8Scfpmn9w+hGdXH/Q9tTLiYAE/2dnJYRJP7kl80GuRk=
experimental-features = nix-command flakes
extra-experimental-features = nix-command flakes
accept-flake-config = true
WEB_UPLOAD_SSH_USERNAME: ${{ secrets.WEB_UPLOAD_SSH_USERNAME }}
GH_REF_NAME: ${{ github.ref_name }}
WEBSERVER_DIR_NAME: ${{ (github.head_ref != '' && format('merge-{0}-{1}', github.event.number, github.event.pull_request.user.login)) || github.ref_name }}-${{ github.sha }}
permissions: {}
jobs:
tests:
name: Test
runs-on: self-hosted
steps:
- name: Setup SSH web publish
env:
web_upload_ssh_private_key: ${{ secrets.WEB_UPLOAD_SSH_PRIVATE_KEY }}
if: (startsWith(github.ref, 'refs/tags/v') || github.ref == 'refs/heads/main' || (github.event.pull_request.draft != true)) && (env.web_upload_ssh_private_key != '') && github.event.pull_request.user.login != 'renovate[bot]'
run: |
mkdir -p -v ~/.ssh
echo "${{ secrets.WEB_UPLOAD_SSH_KNOWN_HOSTS }}" >> ~/.ssh/known_hosts
echo "${{ secrets.WEB_UPLOAD_SSH_PRIVATE_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
cat >>~/.ssh/config <<END
Host website
HostName ${{ secrets.WEB_UPLOAD_SSH_HOSTNAME }}
User ${{ secrets.WEB_UPLOAD_SSH_USERNAME }}
IdentityFile ~/.ssh/id_ed25519
StrictHostKeyChecking yes
AddKeysToAgent no
ForwardX11 no
BatchMode yes
END
echo "Checking connection"
ssh -q website "echo test" || ssh -q website "echo test"
echo "Creating commit rev directory on web server"
ssh -q website "rm -rf /var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/" || ssh -q website "rm -rf /var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/"
ssh -q website "mkdir -v /var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/" || ssh -q website "mkdir -v /var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/"
echo "SSH_WEBSITE=1" >> "$GITHUB_ENV"
- name: Sync repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Tag comparison check
if: ${{ startsWith(github.ref, 'refs/tags/v') && !endsWith(github.ref, '-rc') }}
run: |
# Tag mismatch with latest repo tag check to prevent potential downgrades
LATEST_TAG=$(git describe --tags `git rev-list --tags --max-count=1`)
if [ ${LATEST_TAG} != ${GH_REF_NAME} ]; then
echo '# WARNING: Attempting to run this workflow for a tag that is not the latest repo tag. Aborting.'
echo '# WARNING: Attempting to run this workflow for a tag that is not the latest repo tag. Aborting.' >> $GITHUB_STEP_SUMMARY
exit 1
fi
- name: Prepare build environment
run: |
echo 'source $HOME/.nix-profile/share/nix-direnv/direnvrc' > "$HOME/.direnvrc"
direnv allow
nix develop .#all-features --command true
- name: Cache CI dependencies
run: |
bin/nix-build-and-cache ci
bin/nix-build-and-cache just '.#devShells.x86_64-linux.default'
bin/nix-build-and-cache just '.#devShells.x86_64-linux.all-features'
bin/nix-build-and-cache just '.#devShells.x86_64-linux.dynamic'
# use rust-cache
- uses: Swatinem/rust-cache@v2
# we want a fresh-state when we do releases/tags to avoid potential cache poisoning attacks impacting
# releases and tags
#if: ${{ !startsWith(github.ref, 'refs/tags/') }}
with:
cache-all-crates: "true"
cache-on-failure: "true"
cache-targets: "true"
- name: Run CI tests
env:
CARGO_PROFILE: "test"
run: |
direnv exec . engage > >(tee -a test_output.log)
- name: Run Complement tests
env:
CARGO_PROFILE: "test"
run: |
# the nix devshell sets $COMPLEMENT_SRC, so "/dev/null" is no-op
direnv exec . bin/complement "/dev/null" complement_test_logs.jsonl complement_test_results.jsonl > >(tee -a test_output.log)
cp -v -f result complement_oci_image.tar.gz
- name: Upload Complement OCI image
uses: actions/upload-artifact@v4
with:
name: complement_oci_image.tar.gz
path: complement_oci_image.tar.gz
if-no-files-found: error
compression-level: 0
- name: Upload Complement logs
uses: actions/upload-artifact@v4
with:
name: complement_test_logs.jsonl
path: complement_test_logs.jsonl
if-no-files-found: error
- name: Upload Complement results
uses: actions/upload-artifact@v4
with:
name: complement_test_results.jsonl
path: complement_test_results.jsonl
if-no-files-found: error
- name: Diff Complement results with checked-in repo results
run: |
diff -u --color=always tests/test_results/complement/test_results.jsonl complement_test_results.jsonl > >(tee -a complement_diff_output.log)
- name: Update Job Summary
env:
GH_JOB_STATUS: ${{ job.status }}
if: success() || failure()
run: |
if [ ${GH_JOB_STATUS} == 'success' ]; then
echo '# ✅ CI completed suwuccessfully' >> $GITHUB_STEP_SUMMARY
else
echo '# ❌ CI failed (last 100 lines of output)' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
tail -n 100 test_output.log | sed 's/\x1b\[[0-9;]*m//g' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
echo '# Complement diff results (last 100 lines)' >> $GITHUB_STEP_SUMMARY
echo '```diff' >> $GITHUB_STEP_SUMMARY
tail -n 100 complement_diff_output.log | sed 's/\x1b\[[0-9;]*m//g' >> $GITHUB_STEP_SUMMARY
echo '```' >> $GITHUB_STEP_SUMMARY
fi
build:
name: Build
runs-on: self-hosted
strategy:
matrix:
include:
- target: aarch64-linux-musl
- target: x86_64-linux-musl
steps:
- name: Sync repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Setup SSH web publish
env:
web_upload_ssh_private_key: ${{ secrets.WEB_UPLOAD_SSH_PRIVATE_KEY }}
if: (startsWith(github.ref, 'refs/tags/v') || github.ref == 'refs/heads/main' || (github.event.pull_request.draft != true)) && (env.web_upload_ssh_private_key != '') && github.event.pull_request.user.login != 'renovate[bot]'
run: |
mkdir -p -v ~/.ssh
echo "${{ secrets.WEB_UPLOAD_SSH_KNOWN_HOSTS }}" >> ~/.ssh/known_hosts
echo "${{ secrets.WEB_UPLOAD_SSH_PRIVATE_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
cat >>~/.ssh/config <<END
Host website
HostName ${{ secrets.WEB_UPLOAD_SSH_HOSTNAME }}
User ${{ secrets.WEB_UPLOAD_SSH_USERNAME }}
IdentityFile ~/.ssh/id_ed25519
StrictHostKeyChecking yes
AddKeysToAgent no
ForwardX11 no
BatchMode yes
END
echo "Checking connection"
ssh -q website "echo test" || ssh -q website "echo test"
echo "SSH_WEBSITE=1" >> "$GITHUB_ENV"
- name: Prepare build environment
run: |
echo 'source $HOME/.nix-profile/share/nix-direnv/direnvrc' > "$HOME/.direnvrc"
direnv allow
nix develop .#all-features --command true --impure
# use rust-cache
- uses: Swatinem/rust-cache@v2
# we want a fresh-state when we do releases/tags to avoid potential cache poisoning attacks impacting
# releases and tags
#if: ${{ !startsWith(github.ref, 'refs/tags/') }}
with:
cache-all-crates: "true"
cache-on-failure: "true"
cache-targets: "true"
- name: Build static ${{ matrix.target }}-all-features
run: |
if [[ ${{ matrix.target }} == "x86_64-linux-musl" ]]
then
CARGO_DEB_TARGET_TUPLE="x86_64-unknown-linux-musl"
elif [[ ${{ matrix.target }} == "aarch64-linux-musl" ]]
then
CARGO_DEB_TARGET_TUPLE="aarch64-unknown-linux-musl"
fi
SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
bin/nix-build-and-cache just .#static-${{ matrix.target }}-all-features
mkdir -v -p target/release/
mkdir -v -p target/$CARGO_DEB_TARGET_TUPLE/release/
cp -v -f result/bin/conduwuit target/release/conduwuit
cp -v -f result/bin/conduwuit target/$CARGO_DEB_TARGET_TUPLE/release/conduwuit
direnv exec . cargo deb --verbose --no-build --no-strip -p conduwuit --target=$CARGO_DEB_TARGET_TUPLE --output target/release/${{ matrix.target }}.deb
mv -v target/release/conduwuit static-${{ matrix.target }}
mv -v target/release/${{ matrix.target }}.deb ${{ matrix.target }}.deb
- name: Build static x86_64-linux-musl-all-features-x86_64-haswell-optimised
if: ${{ matrix.target == 'x86_64-linux-musl' }}
run: |
CARGO_DEB_TARGET_TUPLE="x86_64-unknown-linux-musl"
SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
bin/nix-build-and-cache just .#static-x86_64-linux-musl-all-features-x86_64-haswell-optimised
mkdir -v -p target/release/
mkdir -v -p target/$CARGO_DEB_TARGET_TUPLE/release/
cp -v -f result/bin/conduwuit target/release/conduwuit
cp -v -f result/bin/conduwuit target/$CARGO_DEB_TARGET_TUPLE/release/conduwuit
direnv exec . cargo deb --verbose --no-build --no-strip -p conduwuit --target=$CARGO_DEB_TARGET_TUPLE --output target/release/x86_64-linux-musl-x86_64-haswell-optimised.deb
mv -v target/release/conduwuit static-x86_64-linux-musl-x86_64-haswell-optimised
mv -v target/release/x86_64-linux-musl-x86_64-haswell-optimised.deb x86_64-linux-musl-x86_64-haswell-optimised.deb
# quick smoke test of the x86_64 static release binary
- name: Quick smoke test the x86_64 static release binary
if: ${{ matrix.target == 'x86_64-linux-musl' }}
run: |
# GH actions default runners are x86_64 only
if file result/bin/conduwuit | grep x86-64; then
result/bin/conduwuit --version
result/bin/conduwuit --help
result/bin/conduwuit -Oserver_name="'$(date -u +%s).local'" -Odatabase_path="'/tmp/$(date -u +%s)'" --execute "server admin-notice awawawawawawawawawawa" --execute "server memory-usage" --execute "server shutdown"
fi
- name: Build static debug ${{ matrix.target }}-all-features
run: |
if [[ ${{ matrix.target }} == "x86_64-linux-musl" ]]
then
CARGO_DEB_TARGET_TUPLE="x86_64-unknown-linux-musl"
elif [[ ${{ matrix.target }} == "aarch64-linux-musl" ]]
then
CARGO_DEB_TARGET_TUPLE="aarch64-unknown-linux-musl"
fi
SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)
bin/nix-build-and-cache just .#static-${{ matrix.target }}-all-features-debug
# > warning: dev profile is not supported and will be a hard error in the future. cargo-deb is for making releases, and it doesn't make sense to use it with dev profiles.
# so we need to coerce cargo-deb into thinking this is a release binary
mkdir -v -p target/release/
mkdir -v -p target/$CARGO_DEB_TARGET_TUPLE/release/
cp -v -f result/bin/conduwuit target/release/conduwuit
cp -v -f result/bin/conduwuit target/$CARGO_DEB_TARGET_TUPLE/release/conduwuit
direnv exec . cargo deb --verbose --no-build --no-strip -p conduwuit --target=$CARGO_DEB_TARGET_TUPLE --output target/release/${{ matrix.target }}-debug.deb
mv -v target/release/conduwuit static-${{ matrix.target }}-debug
mv -v target/release/${{ matrix.target }}-debug.deb ${{ matrix.target }}-debug.deb
# quick smoke test of the x86_64 static debug binary
- name: Run x86_64 static debug binary
run: |
# GH actions default runners are x86_64 only
if file result/bin/conduwuit | grep x86-64; then
result/bin/conduwuit --version
fi
# check validity of produced deb package, invalid debs will error on these commands
- name: Validate produced deb package
run: |
# List contents
dpkg-deb --contents ${{ matrix.target }}.deb
dpkg-deb --contents ${{ matrix.target }}-debug.deb
# List info
dpkg-deb --info ${{ matrix.target }}.deb
dpkg-deb --info ${{ matrix.target }}-debug.deb
- name: Upload static-x86_64-linux-musl-all-features-x86_64-haswell-optimised to GitHub
uses: actions/upload-artifact@v4
if: ${{ matrix.target == 'x86_64-linux-musl' }}
with:
name: static-x86_64-linux-musl-x86_64-haswell-optimised
path: static-x86_64-linux-musl-x86_64-haswell-optimised
if-no-files-found: error
- name: Upload static-${{ matrix.target }}-all-features to GitHub
uses: actions/upload-artifact@v4
with:
name: static-${{ matrix.target }}
path: static-${{ matrix.target }}
if-no-files-found: error
- name: Upload static deb ${{ matrix.target }}-all-features to GitHub
uses: actions/upload-artifact@v4
with:
name: deb-${{ matrix.target }}
path: ${{ matrix.target }}.deb
if-no-files-found: error
compression-level: 0
- name: Upload static-x86_64-linux-musl-all-features-x86_64-haswell-optimised to webserver
if: ${{ matrix.target == 'x86_64-linux-musl' }}
run: |
if [ ! -z $SSH_WEBSITE ]; then
chmod +x static-x86_64-linux-musl-x86_64-haswell-optimised
scp static-x86_64-linux-musl-x86_64-haswell-optimised website:/var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/static-x86_64-linux-musl-x86_64-haswell-optimised
fi
- name: Upload static-${{ matrix.target }}-all-features to webserver
run: |
if [ ! -z $SSH_WEBSITE ]; then
chmod +x static-${{ matrix.target }}
scp static-${{ matrix.target }} website:/var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/static-${{ matrix.target }}
fi
- name: Upload static deb x86_64-linux-musl-all-features-x86_64-haswell-optimised to webserver
if: ${{ matrix.target == 'x86_64-linux-musl' }}
run: |
if [ ! -z $SSH_WEBSITE ]; then
scp x86_64-linux-musl-x86_64-haswell-optimised.deb website:/var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/x86_64-linux-musl-x86_64-haswell-optimised.deb
fi
- name: Upload static deb ${{ matrix.target }}-all-features to webserver
run: |
if [ ! -z $SSH_WEBSITE ]; then
scp ${{ matrix.target }}.deb website:/var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/${{ matrix.target }}.deb
fi
- name: Upload static-${{ matrix.target }}-debug-all-features to GitHub
uses: actions/upload-artifact@v4
with:
name: static-${{ matrix.target }}-debug
path: static-${{ matrix.target }}-debug
if-no-files-found: error
- name: Upload static deb ${{ matrix.target }}-debug-all-features to GitHub
uses: actions/upload-artifact@v4
with:
name: deb-${{ matrix.target }}-debug
path: ${{ matrix.target }}-debug.deb
if-no-files-found: error
compression-level: 0
- name: Upload static-${{ matrix.target }}-debug-all-features to webserver
run: |
if [ ! -z $SSH_WEBSITE ]; then
scp static-${{ matrix.target }}-debug website:/var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/static-${{ matrix.target }}-debug
fi
- name: Upload static deb ${{ matrix.target }}-debug-all-features to webserver
run: |
if [ ! -z $SSH_WEBSITE ]; then
scp ${{ matrix.target }}-debug.deb website:/var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/${{ matrix.target }}-debug.deb
fi
- name: Build OCI image ${{ matrix.target }}-all-features
run: |
bin/nix-build-and-cache just .#oci-image-${{ matrix.target }}-all-features
cp -v -f result oci-image-${{ matrix.target }}.tar.gz
- name: Build OCI image x86_64-linux-musl-all-features-x86_64-haswell-optimised
if: ${{ matrix.target == 'x86_64-linux-musl' }}
run: |
bin/nix-build-and-cache just .#oci-image-x86_64-linux-musl-all-features-x86_64-haswell-optimised
cp -v -f result oci-image-x86_64-linux-musl-all-features-x86_64-haswell-optimised.tar.gz
- name: Build debug OCI image ${{ matrix.target }}-all-features
run: |
bin/nix-build-and-cache just .#oci-image-${{ matrix.target }}-all-features-debug
cp -v -f result oci-image-${{ matrix.target }}-debug.tar.gz
- name: Upload OCI image x86_64-linux-musl-all-features-x86_64-haswell-optimised to GitHub
if: ${{ matrix.target == 'x86_64-linux-musl' }}
uses: actions/upload-artifact@v4
with:
name: oci-image-x86_64-linux-musl-all-features-x86_64-haswell-optimised
path: oci-image-x86_64-linux-musl-all-features-x86_64-haswell-optimised.tar.gz
if-no-files-found: error
compression-level: 0
- name: Upload OCI image ${{ matrix.target }}-all-features to GitHub
uses: actions/upload-artifact@v4
with:
name: oci-image-${{ matrix.target }}
path: oci-image-${{ matrix.target }}.tar.gz
if-no-files-found: error
compression-level: 0
- name: Upload OCI image ${{ matrix.target }}-debug-all-features to GitHub
uses: actions/upload-artifact@v4
with:
name: oci-image-${{ matrix.target }}-debug
path: oci-image-${{ matrix.target }}-debug.tar.gz
if-no-files-found: error
compression-level: 0
- name: Upload OCI image x86_64-linux-musl-all-features-x86_64-haswell-optimised.tar.gz to webserver
if: ${{ matrix.target == 'x86_64-linux-musl' }}
run: |
if [ ! -z $SSH_WEBSITE ]; then
scp oci-image-x86_64-linux-musl-all-features-x86_64-haswell-optimised.tar.gz website:/var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/oci-image-x86_64-linux-musl-all-features-x86_64-haswell-optimised.tar.gz
fi
- name: Upload OCI image ${{ matrix.target }}-all-features to webserver
run: |
if [ ! -z $SSH_WEBSITE ]; then
scp oci-image-${{ matrix.target }}.tar.gz website:/var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/oci-image-${{ matrix.target }}.tar.gz
fi
- name: Upload OCI image ${{ matrix.target }}-debug-all-features to webserver
run: |
if [ ! -z $SSH_WEBSITE ]; then
scp oci-image-${{ matrix.target }}-debug.tar.gz website:/var/www/girlboss.ceo/~strawberry/conduwuit/ci-bins/${WEBSERVER_DIR_NAME}/oci-image-${{ matrix.target }}-debug.tar.gz
fi
variables:
outputs:
github_repository: ${{ steps.var.outputs.github_repository }}
runs-on: self-hosted
steps:
- name: Setting global variables
uses: actions/github-script@v7
id: var
with:
script: |
core.setOutput('github_repository', '${{ github.repository }}'.toLowerCase())
docker:
name: Docker publish
runs-on: self-hosted
needs: [build, variables, tests]
permissions:
packages: write
contents: read
if: (startsWith(github.ref, 'refs/tags/v') || github.ref == 'refs/heads/main' || (github.event.pull_request.draft != true)) && github.event.pull_request.user.login != 'renovate[bot]'
env:
DOCKER_HUB_REPO: docker.io/${{ needs.variables.outputs.github_repository }}
GHCR_REPO: ghcr.io/${{ needs.variables.outputs.github_repository }}
GLCR_REPO: registry.gitlab.com/conduwuit/conduwuit
UNIQUE_TAG: ${{ (github.head_ref != '' && format('merge-{0}-{1}', github.event.number, github.event.pull_request.user.login)) || github.ref_name }}-${{ github.sha }}
BRANCH_TAG: ${{ (startsWith(github.ref, 'refs/tags/v') && !endsWith(github.ref, '-rc') && 'latest') || (github.head_ref != '' && format('merge-{0}-{1}', github.event.number, github.event.pull_request.user.login)) || github.ref_name }}
DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
GITLAB_TOKEN: ${{ secrets.GITLAB_TOKEN }}
GHCR_ENABLED: "${{ (github.event_name != 'pull_request' || github.event.pull_request.head.repo.fork == false) && 'true' || 'false' }}"
steps:
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
if: ${{ (vars.DOCKER_USERNAME != '') && (env.DOCKERHUB_TOKEN != '') }}
uses: docker/login-action@v3
with:
registry: docker.io
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitLab Container Registry
if: ${{ (vars.GITLAB_USERNAME != '') && (env.GITLAB_TOKEN != '') }}
uses: docker/login-action@v3
with:
registry: registry.gitlab.com
username: ${{ vars.GITLAB_USERNAME }}
password: ${{ secrets.GITLAB_TOKEN }}
- name: Download artifacts
uses: actions/download-artifact@v4
with:
pattern: "oci*"
- name: Move OCI images into position
run: |
mv -v oci-image-x86_64-linux-musl-all-features-x86_64-haswell-optimised/*.tar.gz oci-image-amd64-haswell-optimised.tar.gz
mv -v oci-image-x86_64-linux-musl/*.tar.gz oci-image-amd64.tar.gz
mv -v oci-image-aarch64-linux-musl/*.tar.gz oci-image-arm64v8.tar.gz
mv -v oci-image-x86_64-linux-musl-debug/*.tar.gz oci-image-amd64-debug.tar.gz
mv -v oci-image-aarch64-linux-musl-debug/*.tar.gz oci-image-arm64v8-debug.tar.gz
- name: Load and push amd64 haswell image
run: |
docker load -i oci-image-amd64-haswell-optimised.tar.gz
if [ ! -z $DOCKERHUB_TOKEN ]; then
docker tag $(docker images -q conduwuit:main) ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-haswell
docker push ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-haswell
fi
if [ $GHCR_ENABLED = "true" ]; then
docker tag $(docker images -q conduwuit:main) ${GHCR_REPO}:${UNIQUE_TAG}-haswell
docker push ${GHCR_REPO}:${UNIQUE_TAG}-haswell
fi
if [ ! -z $GITLAB_TOKEN ]; then
docker tag $(docker images -q conduwuit:main) ${GLCR_REPO}:${UNIQUE_TAG}-haswell
docker push ${GLCR_REPO}:${UNIQUE_TAG}-haswell
fi
- name: Load and push amd64 image
run: |
docker load -i oci-image-amd64.tar.gz
if [ ! -z $DOCKERHUB_TOKEN ]; then
docker tag $(docker images -q conduwuit:main) ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-amd64
docker push ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-amd64
fi
if [ $GHCR_ENABLED = "true" ]; then
docker tag $(docker images -q conduwuit:main) ${GHCR_REPO}:${UNIQUE_TAG}-amd64
docker push ${GHCR_REPO}:${UNIQUE_TAG}-amd64
fi
if [ ! -z $GITLAB_TOKEN ]; then
docker tag $(docker images -q conduwuit:main) ${GLCR_REPO}:${UNIQUE_TAG}-amd64
docker push ${GLCR_REPO}:${UNIQUE_TAG}-amd64
fi
- name: Load and push arm64 image
run: |
docker load -i oci-image-arm64v8.tar.gz
if [ ! -z $DOCKERHUB_TOKEN ]; then
docker tag $(docker images -q conduwuit:main) ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-arm64v8
docker push ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-arm64v8
fi
if [ $GHCR_ENABLED = "true" ]; then
docker tag $(docker images -q conduwuit:main) ${GHCR_REPO}:${UNIQUE_TAG}-arm64v8
docker push ${GHCR_REPO}:${UNIQUE_TAG}-arm64v8
fi
if [ ! -z $GITLAB_TOKEN ]; then
docker tag $(docker images -q conduwuit:main) ${GLCR_REPO}:${UNIQUE_TAG}-arm64v8
docker push ${GLCR_REPO}:${UNIQUE_TAG}-arm64v8
fi
- name: Load and push amd64 debug image
run: |
docker load -i oci-image-amd64-debug.tar.gz
if [ ! -z $DOCKERHUB_TOKEN ]; then
docker tag $(docker images -q conduwuit:main) ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-amd64-debug
docker push ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-amd64-debug
fi
if [ $GHCR_ENABLED = "true" ]; then
docker tag $(docker images -q conduwuit:main) ${GHCR_REPO}:${UNIQUE_TAG}-amd64-debug
docker push ${GHCR_REPO}:${UNIQUE_TAG}-amd64-debug
fi
if [ ! -z $GITLAB_TOKEN ]; then
docker tag $(docker images -q conduwuit:main) ${GLCR_REPO}:${UNIQUE_TAG}-amd64-debug
docker push ${GLCR_REPO}:${UNIQUE_TAG}-amd64-debug
fi
- name: Load and push arm64 debug image
run: |
docker load -i oci-image-arm64v8-debug.tar.gz
if [ ! -z $DOCKERHUB_TOKEN ]; then
docker tag $(docker images -q conduwuit:main) ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-arm64v8-debug
docker push ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-arm64v8-debug
fi
if [ $GHCR_ENABLED = "true" ]; then
docker tag $(docker images -q conduwuit:main) ${GHCR_REPO}:${UNIQUE_TAG}-arm64v8-debug
docker push ${GHCR_REPO}:${UNIQUE_TAG}-arm64v8-debug
fi
if [ ! -z $GITLAB_TOKEN ]; then
docker tag $(docker images -q conduwuit:main) ${GLCR_REPO}:${UNIQUE_TAG}-arm64v8-debug
docker push ${GLCR_REPO}:${UNIQUE_TAG}-arm64v8-debug
fi
- name: Create Docker haswell manifests
run: |
# Dockerhub Container Registry
if [ ! -z $DOCKERHUB_TOKEN ]; then
docker manifest create ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-haswell --amend ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-haswell
docker manifest create ${DOCKER_HUB_REPO}:${BRANCH_TAG}-haswell --amend ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-haswell
fi
# GitHub Container Registry
if [ $GHCR_ENABLED = "true" ]; then
docker manifest create ${GHCR_REPO}:${UNIQUE_TAG}-haswell --amend ${GHCR_REPO}:${UNIQUE_TAG}-haswell
docker manifest create ${GHCR_REPO}:${BRANCH_TAG}-haswell --amend ${GHCR_REPO}:${UNIQUE_TAG}-haswell
fi
# GitLab Container Registry
if [ ! -z $GITLAB_TOKEN ]; then
docker manifest create ${GLCR_REPO}:${UNIQUE_TAG}-haswell --amend ${GLCR_REPO}:${UNIQUE_TAG}-haswell
docker manifest create ${GLCR_REPO}:${BRANCH_TAG}-haswell --amend ${GLCR_REPO}:${UNIQUE_TAG}-haswell
fi
- name: Create Docker combined manifests
run: |
# Dockerhub Container Registry
if [ ! -z $DOCKERHUB_TOKEN ]; then
docker manifest create ${DOCKER_HUB_REPO}:${UNIQUE_TAG} --amend ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-arm64v8 --amend ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-amd64
docker manifest create ${DOCKER_HUB_REPO}:${BRANCH_TAG} --amend ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-arm64v8 --amend ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-amd64
fi
# GitHub Container Registry
if [ $GHCR_ENABLED = "true" ]; then
docker manifest create ${GHCR_REPO}:${UNIQUE_TAG} --amend ${GHCR_REPO}:${UNIQUE_TAG}-arm64v8 --amend ${GHCR_REPO}:${UNIQUE_TAG}-amd64
docker manifest create ${GHCR_REPO}:${BRANCH_TAG} --amend ${GHCR_REPO}:${UNIQUE_TAG}-arm64v8 --amend ${GHCR_REPO}:${UNIQUE_TAG}-amd64
fi
# GitLab Container Registry
if [ ! -z $GITLAB_TOKEN ]; then
docker manifest create ${GLCR_REPO}:${UNIQUE_TAG} --amend ${GLCR_REPO}:${UNIQUE_TAG}-arm64v8 --amend ${GLCR_REPO}:${UNIQUE_TAG}-amd64
docker manifest create ${GLCR_REPO}:${BRANCH_TAG} --amend ${GLCR_REPO}:${UNIQUE_TAG}-arm64v8 --amend ${GLCR_REPO}:${UNIQUE_TAG}-amd64
fi
- name: Create Docker combined debug manifests
run: |
# Dockerhub Container Registry
if [ ! -z $DOCKERHUB_TOKEN ]; then
docker manifest create ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-debug --amend ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-arm64v8-debug --amend ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-amd64-debug
docker manifest create ${DOCKER_HUB_REPO}:${BRANCH_TAG}-debug --amend ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-arm64v8-debug --amend ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-amd64-debug
fi
# GitHub Container Registry
if [ $GHCR_ENABLED = "true" ]; then
docker manifest create ${GHCR_REPO}:${UNIQUE_TAG}-debug --amend ${GHCR_REPO}:${UNIQUE_TAG}-arm64v8-debug --amend ${GHCR_REPO}:${UNIQUE_TAG}-amd64-debug
docker manifest create ${GHCR_REPO}:${BRANCH_TAG}-debug --amend ${GHCR_REPO}:${UNIQUE_TAG}-arm64v8-debug --amend ${GHCR_REPO}:${UNIQUE_TAG}-amd64-debug
fi
# GitLab Container Registry
if [ ! -z $GITLAB_TOKEN ]; then
docker manifest create ${GLCR_REPO}:${UNIQUE_TAG}-debug --amend ${GLCR_REPO}:${UNIQUE_TAG}-arm64v8-debug --amend ${GLCR_REPO}:${UNIQUE_TAG}-amd64-debug
docker manifest create ${GLCR_REPO}:${BRANCH_TAG}-debug --amend ${GLCR_REPO}:${UNIQUE_TAG}-arm64v8-debug --amend ${GLCR_REPO}:${UNIQUE_TAG}-amd64-debug
fi
- name: Push manifests to Docker registries
run: |
if [ ! -z $DOCKERHUB_TOKEN ]; then
docker manifest push ${DOCKER_HUB_REPO}:${UNIQUE_TAG}
docker manifest push ${DOCKER_HUB_REPO}:${BRANCH_TAG}
docker manifest push ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-debug
docker manifest push ${DOCKER_HUB_REPO}:${BRANCH_TAG}-debug
docker manifest push ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-haswell
docker manifest push ${DOCKER_HUB_REPO}:${BRANCH_TAG}-haswell
fi
if [ $GHCR_ENABLED = "true" ]; then
docker manifest push ${GHCR_REPO}:${UNIQUE_TAG}
docker manifest push ${GHCR_REPO}:${BRANCH_TAG}
docker manifest push ${GHCR_REPO}:${UNIQUE_TAG}-debug
docker manifest push ${GHCR_REPO}:${BRANCH_TAG}-debug
docker manifest push ${GHCR_REPO}:${UNIQUE_TAG}-haswell
docker manifest push ${GHCR_REPO}:${BRANCH_TAG}-haswell
fi
if [ ! -z $GITLAB_TOKEN ]; then
docker manifest push ${GLCR_REPO}:${UNIQUE_TAG}
docker manifest push ${GLCR_REPO}:${BRANCH_TAG}
docker manifest push ${GLCR_REPO}:${UNIQUE_TAG}-debug
docker manifest push ${GLCR_REPO}:${BRANCH_TAG}-debug
docker manifest push ${GLCR_REPO}:${UNIQUE_TAG}-haswell
docker manifest push ${GLCR_REPO}:${BRANCH_TAG}-haswell
fi
- name: Add Image Links to Job Summary
run: |
if [ ! -z $DOCKERHUB_TOKEN ]; then
echo "- \`docker pull ${DOCKER_HUB_REPO}:${UNIQUE_TAG}\`" >> $GITHUB_STEP_SUMMARY
echo "- \`docker pull ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-debug\`" >> $GITHUB_STEP_SUMMARY
echo "- \`docker pull ${DOCKER_HUB_REPO}:${UNIQUE_TAG}-haswell\`" >> $GITHUB_STEP_SUMMARY
fi
if [ $GHCR_ENABLED = "true" ]; then
echo "- \`docker pull ${GHCR_REPO}:${UNIQUE_TAG}\`" >> $GITHUB_STEP_SUMMARY
echo "- \`docker pull ${GHCR_REPO}:${UNIQUE_TAG}-debug\`" >> $GITHUB_STEP_SUMMARY
echo "- \`docker pull ${GHCR_REPO}:${UNIQUE_TAG}-haswell\`" >> $GITHUB_STEP_SUMMARY
fi
if [ ! -z $GITLAB_TOKEN ]; then
echo "- \`docker pull ${GLCR_REPO}:${UNIQUE_TAG}\`" >> $GITHUB_STEP_SUMMARY
echo "- \`docker pull ${GLCR_REPO}:${UNIQUE_TAG}-debug\`" >> $GITHUB_STEP_SUMMARY
echo "- \`docker pull ${GLCR_REPO}:${UNIQUE_TAG}-haswell\`" >> $GITHUB_STEP_SUMMARY
fi


@@ -1,41 +0,0 @@
name: Update Docker Hub Description
on:
push:
branches:
- main
paths:
- README.md
- .github/workflows/docker-hub-description.yml
workflow_dispatch:
jobs:
dockerHubDescription:
runs-on: ubuntu-latest
if: ${{ (startsWith(github.ref, 'refs/tags/v') || github.ref == 'refs/heads/main' || (github.event.pull_request.draft != true)) && github.event.pull_request.user.login != 'renovate[bot]' && (vars.DOCKER_USERNAME != '') }}
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- name: Setting variables
uses: actions/github-script@v7
id: var
with:
script: |
const githubRepo = '${{ github.repository }}'.toLowerCase()
const repoId = githubRepo.split('/')[1]
core.setOutput('github_repository', githubRepo)
const dockerRepo = '${{ vars.DOCKER_USERNAME }}'.toLowerCase() + '/' + repoId
core.setOutput('docker_repo', dockerRepo)
- name: Docker Hub Description
uses: peter-evans/dockerhub-description@v4
with:
username: ${{ vars.DOCKER_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
repository: ${{ steps.var.outputs.docker_repo }}
short-description: ${{ github.event.repository.description }}
enable-url-completion: true


@@ -1,104 +0,0 @@
name: Documentation and GitHub Pages
on:
pull_request:
push:
branches:
- main
tags:
- '*'
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
env:
# Required to make some things output color
TERM: ansi
# Publishing to my nix binary cache
ATTIC_TOKEN: ${{ secrets.ATTIC_TOKEN }}
# conduwuit.cachix.org
CACHIX_AUTH_TOKEN: ${{ secrets.CACHIX_AUTH_TOKEN }}
# Custom nix binary cache if fork is being used
ATTIC_ENDPOINT: ${{ vars.ATTIC_ENDPOINT }}
ATTIC_PUBLIC_KEY: ${{ vars.ATTIC_PUBLIC_KEY }}
# Get error output from nix that we can actually use, and use our binary caches for the earlier CI steps
NIX_CONFIG: |
show-trace = true
extra-substituters = https://attic.kennel.juneis.dog/conduwuit https://attic.kennel.juneis.dog/conduit https://conduwuit.cachix.org https://aseipp-nix-cache.freetls.fastly.net https://nix-community.cachix.org https://crane.cachix.org
extra-trusted-public-keys = conduit:eEKoUwlQGDdYmAI/Q/0slVlegqh/QmAvQd7HBSm21Wk= conduwuit:BbycGUgTISsltcmH0qNjFR9dbrQNYgdIAcmViSGoVTE= cache.lix.systems:aBnZUw8zA7H35Cz2RyKFVs3H4PlGTLawyY5KRbvJR8o= conduwuit.cachix.org-1:MFRm6jcnfTf0jSAbmvLfhO3KBMt4px+1xaereWXp8Xg= nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs= crane.cachix.org-1:8Scfpmn9w+hGdXH/Q9tTLiYAE/2dnJYRJP7kl80GuRk=
experimental-features = nix-command flakes
extra-experimental-features = nix-command flakes
accept-flake-config = true
# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
group: "pages"
cancel-in-progress: false
permissions: {}
jobs:
docs:
name: Documentation and GitHub Pages
runs-on: self-hosted
permissions:
pages: write
id-token: write
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
steps:
- name: Sync repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Setup GitHub Pages
if: (startsWith(github.ref, 'refs/tags/v') || github.ref == 'refs/heads/main') && (github.event_name != 'pull_request')
uses: actions/configure-pages@v5
- name: Prepare build environment
run: |
echo 'source $HOME/.nix-profile/share/nix-direnv/direnvrc' > "$HOME/.direnvrc"
direnv allow
nix develop --command true
- name: Cache CI dependencies
run: |
bin/nix-build-and-cache ci
- name: Run lychee and markdownlint
run: |
direnv exec . engage just lints lychee
direnv exec . engage just lints markdownlint
- name: Build documentation (book)
run: |
bin/nix-build-and-cache just .#book
cp -r --dereference result public
chmod u+w -R public
- name: Upload generated documentation (book) as normal artifact
uses: actions/upload-artifact@v4
with:
name: public
path: public
if-no-files-found: error
# don't compress again
compression-level: 0
- name: Upload generated documentation (book) as GitHub Pages artifact
if: (startsWith(github.ref, 'refs/tags/v') || github.ref == 'refs/heads/main') && (github.event_name != 'pull_request')
uses: actions/upload-pages-artifact@v3
with:
path: public
- name: Deploy to GitHub Pages
if: (startsWith(github.ref, 'refs/tags/v') || github.ref == 'refs/heads/main') && (github.event_name != 'pull_request')
id: deployment
uses: actions/deploy-pages@v4


@@ -1,118 +0,0 @@
name: Upload Release Assets
on:
release:
types: [published]
workflow_dispatch:
inputs:
tag:
description: 'Tag to release'
required: true
type: string
action_id:
description: 'Action ID of the CI run'
required: true
type: string
permissions: {}
jobs:
publish:
runs-on: ubuntu-latest
permissions:
contents: write
env:
GH_EVENT_NAME: ${{ github.event_name }}
GH_EVENT_INPUTS_ACTION_ID: ${{ github.event.inputs.action_id }}
GH_EVENT_INPUTS_TAG: ${{ github.event.inputs.tag }}
GH_REPOSITORY: ${{ github.repository }}
GH_SHA: ${{ github.sha }}
GH_TAG: ${{ github.event.release.tag_name }}
steps:
- name: get latest ci id
id: get_ci_id
env:
GH_TOKEN: ${{ github.token }}
run: |
if [ "${GH_EVENT_NAME}" == "workflow_dispatch" ]; then
id="${GH_EVENT_INPUTS_ACTION_ID}"
tag="${GH_EVENT_INPUTS_TAG}"
else
# get all runs of the ci workflow
json=$(gh api "repos/${GH_REPOSITORY}/actions/workflows/ci.yml/runs")
# find first run that is github sha and status is completed
id=$(echo "$json" | jq ".workflow_runs[] | select(.head_sha == \"${GH_SHA}\" and .status == \"completed\") | .id" | head -n 1)
if [ ! "$id" ]; then
echo "No completed runs found"
echo "ci_id=0" >> "$GITHUB_OUTPUT"
exit 0
fi
tag="${GH_TAG}"
fi
echo "ci_id=$id" >> "$GITHUB_OUTPUT"
echo "tag=$tag" >> "$GITHUB_OUTPUT"
- name: get latest ci artifacts
if: steps.get_ci_id.outputs.ci_id != 0
uses: actions/download-artifact@v4
env:
GH_TOKEN: ${{ github.token }}
with:
merge-multiple: true
run-id: ${{ steps.get_ci_id.outputs.ci_id }}
github-token: ${{ github.token }}
- run: |
ls
- name: upload release assets
if: steps.get_ci_id.outputs.ci_id != 0
env:
GH_TOKEN: ${{ github.token }}
TAG: ${{ steps.get_ci_id.outputs.tag }}
run: |
for file in $(find . -type f); do
case "$file" in
*json*) echo "Skipping $file...";;
*) echo "Uploading $file..."; gh release upload $TAG "$file" --clobber --repo="${GH_REPOSITORY}" || echo "Something went wrong, skipping.";;
esac
done
- name: upload release assets to website
if: steps.get_ci_id.outputs.ci_id != 0
env:
TAG: ${{ steps.get_ci_id.outputs.tag }}
run: |
mkdir -p -v ~/.ssh
echo "${{ secrets.WEB_UPLOAD_SSH_KNOWN_HOSTS }}" >> ~/.ssh/known_hosts
echo "${{ secrets.WEB_UPLOAD_SSH_PRIVATE_KEY }}" >> ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
cat >>~/.ssh/config <<END
Host website
HostName ${{ secrets.WEB_UPLOAD_SSH_HOSTNAME }}
User ${{ secrets.WEB_UPLOAD_SSH_USERNAME }}
IdentityFile ~/.ssh/id_ed25519
StrictHostKeyChecking yes
AddKeysToAgent no
ForwardX11 no
BatchMode yes
END
echo "Creating tag directory on web server"
ssh -q website "rm -rf /var/www/girlboss.ceo/~strawberry/conduwuit/releases/$TAG/"
ssh -q website "mkdir -v /var/www/girlboss.ceo/~strawberry/conduwuit/releases/$TAG/"
for file in $(find . -type f); do
case "$file" in
*json*) echo "Skipping $file...";;
*) echo "Uploading $file to website"; scp $file website:/var/www/girlboss.ceo/~strawberry/conduwuit/releases/$TAG/$file;;
esac
done


@@ -1,152 +0,0 @@
stages:
- ci
- artifacts
- publish
variables:
# Makes some things print in color
TERM: ansi
# Faster cache and artifact compression / decompression
FF_USE_FASTZIP: true
# Print progress reports for cache and artifact transfers
TRANSFER_METER_FREQUENCY: 5s
NIX_CONFIG: |
show-trace = true
extra-substituters = https://attic.kennel.juneis.dog/conduit https://attic.kennel.juneis.dog/conduwuit https://conduwuit.cachix.org
extra-trusted-public-keys = conduit:eEKoUwlQGDdYmAI/Q/0slVlegqh/QmAvQd7HBSm21Wk= conduwuit:BbycGUgTISsltcmH0qNjFR9dbrQNYgdIAcmViSGoVTE= conduwuit.cachix.org-1:MFRm6jcnfTf0jSAbmvLfhO3KBMt4px+1xaereWXp8Xg=
experimental-features = nix-command flakes
extra-experimental-features = nix-command flakes
accept-flake-config = true
# Avoid duplicate pipelines
# See: https://docs.gitlab.com/ee/ci/yaml/workflow.html#switch-between-branch-pipelines-and-merge-request-pipelines
workflow:
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS
when: never
- if: $CI
before_script:
# Enable nix-command and flakes
  - if command -v nix > /dev/null; then echo "experimental-features = nix-command flakes" >> /etc/nix/nix.conf; fi
  - if command -v nix > /dev/null; then echo "extra-experimental-features = nix-command flakes" >> /etc/nix/nix.conf; fi
  # Accept flake config from "untrusted" users
  - if command -v nix > /dev/null; then echo "accept-flake-config = true" >> /etc/nix/nix.conf; fi
  # Add conduwuit binary cache
  - if command -v nix > /dev/null; then echo "extra-substituters = https://attic.kennel.juneis.dog/conduwuit" >> /etc/nix/nix.conf; fi
  - if command -v nix > /dev/null; then echo "extra-trusted-public-keys = conduwuit:BbycGUgTISsltcmH0qNjFR9dbrQNYgdIAcmViSGoVTE=" >> /etc/nix/nix.conf; fi
  - if command -v nix > /dev/null; then echo "extra-substituters = https://attic.kennel.juneis.dog/conduit" >> /etc/nix/nix.conf; fi
  - if command -v nix > /dev/null; then echo "extra-trusted-public-keys = conduit:eEKoUwlQGDdYmAI/Q/0slVlegqh/QmAvQd7HBSm21Wk=" >> /etc/nix/nix.conf; fi
  # Add alternate binary cache
  - if command -v nix > /dev/null && [ -n "$ATTIC_ENDPOINT" ]; then echo "extra-substituters = $ATTIC_ENDPOINT" >> /etc/nix/nix.conf; fi
  - if command -v nix > /dev/null && [ -n "$ATTIC_PUBLIC_KEY" ]; then echo "extra-trusted-public-keys = $ATTIC_PUBLIC_KEY" >> /etc/nix/nix.conf; fi
  # Add crane binary cache
  - if command -v nix > /dev/null; then echo "extra-substituters = https://crane.cachix.org" >> /etc/nix/nix.conf; fi
  - if command -v nix > /dev/null; then echo "extra-trusted-public-keys = crane.cachix.org-1:8Scfpmn9w+hGdXH/Q9tTLiYAE/2dnJYRJP7kl80GuRk=" >> /etc/nix/nix.conf; fi
  # Add nix-community binary cache
  - if command -v nix > /dev/null; then echo "extra-substituters = https://nix-community.cachix.org" >> /etc/nix/nix.conf; fi
  - if command -v nix > /dev/null; then echo "extra-trusted-public-keys = nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs=" >> /etc/nix/nix.conf; fi
  - if command -v nix > /dev/null; then echo "extra-substituters = https://aseipp-nix-cache.freetls.fastly.net" >> /etc/nix/nix.conf; fi
  # Install direnv and nix-direnv
  - if command -v nix > /dev/null; then nix-env -iA nixpkgs.direnv nixpkgs.nix-direnv; fi
  # Allow .envrc
  - if command -v nix > /dev/null; then direnv allow; fi
  # Set CARGO_HOME to a cacheable path
  - export CARGO_HOME="$(git rev-parse --show-toplevel)/.gitlab-ci.d/cargo"

ci:
  stage: ci
  image: nixos/nix:2.24.9
  script:
    # Cache CI dependencies
    - ./bin/nix-build-and-cache ci
    - direnv exec . engage
  cache:
    key: nix
    paths:
      - target
      - .gitlab-ci.d
  rules:
    # CI on upstream runners (only available for maintainers)
    - if: $CI_PIPELINE_SOURCE == "merge_request_event" && $IS_UPSTREAM_CI == "true"
    # Manual CI on unprotected branches that are not MRs
    - if: $CI_PIPELINE_SOURCE != "merge_request_event" && $CI_COMMIT_REF_PROTECTED == "false"
      when: manual
    # Manual CI on forks
    - if: $IS_UPSTREAM_CI != "true"
      when: manual
    - if: $CI
  interruptible: true

artifacts:
  stage: artifacts
  image: nixos/nix:2.24.9
  script:
    - ./bin/nix-build-and-cache just .#static-x86_64-linux-musl
    - cp result/bin/conduit x86_64-linux-musl
    - mkdir -p target/release
    - cp result/bin/conduit target/release
    - direnv exec . cargo deb --no-build --no-strip
    - mv target/debian/*.deb x86_64-linux-musl.deb
    # Since the OCI image package is based on the binary package, this has the
    # fun side effect of uploading the normal binary too. Conduit users who are
    # deploying with Nix can leverage this fact by adding our binary cache to
    # their systems.
    #
    # Note that although we have an `oci-image-x86_64-linux-musl`
    # output, we don't build it because it would be largely redundant to this
    # one since it's all containerized anyway.
    - ./bin/nix-build-and-cache just .#oci-image
    - cp result oci-image-amd64.tar.gz
    - ./bin/nix-build-and-cache just .#static-aarch64-linux-musl
    - cp result/bin/conduit aarch64-linux-musl
    - ./bin/nix-build-and-cache just .#oci-image-aarch64-linux-musl
    - cp result oci-image-arm64v8.tar.gz
    - ./bin/nix-build-and-cache just .#book
    # We can't just copy the symlink, we need to dereference it https://gitlab.com/gitlab-org/gitlab/-/issues/19746
    - cp -r --dereference result public
  artifacts:
    paths:
      - x86_64-linux-musl
      - aarch64-linux-musl
      - x86_64-linux-musl.deb
      - oci-image-amd64.tar.gz
      - oci-image-arm64v8.tar.gz
      - public
  rules:
    # CI required for all MRs
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # Optional CI on forks
    - if: $IS_UPSTREAM_CI != "true"
      when: manual
      allow_failure: true
    - if: $CI
  interruptible: true

pages:
  stage: publish
  dependencies:
    - artifacts
  only:
    - next
  script:
    - "true"
  artifacts:
    paths:
      - public
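For reference, the `before_script` echoes above boil down to one small Nix configuration. A sketch of the equivalent one-shot setup, assuming `nix` is installed and the optional `$ATTIC_ENDPOINT`/`$ATTIC_PUBLIC_KEY` variables are unset (nix.conf expects space-separated lists, so the repeated keys are consolidated here):

```sh
# Sketch: the nix.conf these before_script lines assemble, consolidated
# into the space-separated list form nix.conf expects.
cat >> /etc/nix/nix.conf <<'EOF'
experimental-features = nix-command flakes
extra-experimental-features = nix-command flakes
accept-flake-config = true
extra-substituters = https://attic.kennel.juneis.dog/conduwuit https://attic.kennel.juneis.dog/conduit https://crane.cachix.org https://nix-community.cachix.org https://aseipp-nix-cache.freetls.fastly.net
extra-trusted-public-keys = conduwuit:BbycGUgTISsltcmH0qNjFR9dbrQNYgdIAcmViSGoVTE= conduit:eEKoUwlQGDdYmAI/Q/0slVlegqh/QmAvQd7HBSm21Wk= crane.cachix.org-1:8Scfpmn9w+hGdXH/Q9tTLiYAE/2dnJYRJP7kl80GuRk= nix-community.cachix.org-1:mB9FSh9qf2dCimDSUo8Zy7bkq5CX+/rkCWyvRCYg3Fs=
EOF
```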


@@ -1,8 +0,0 @@
<!-- Please describe your changes here -->
-----------------------------------------------------------------------------
- [ ] I ran `cargo fmt`, `cargo clippy`, and `cargo test`
- [ ] I agree to release my code and all other changes of this MR under the Apache-2.0 license


@@ -1,3 +0,0 @@
# Docs: Map markdown to html files
- source: /docs/(.+)\.md/
  public: '\1.html'

.mailmap Normal file

@@ -0,0 +1,15 @@
AlexPewMaster <git@alex.unbox.at> <68469103+AlexPewMaster@users.noreply.github.com>
Daniel Wiesenberg <weasy@hotmail.de> <weasy666@gmail.com>
Devin Ragotzy <devin.ragotzy@gmail.com> <d6ragotzy@wmich.edu>
Devin Ragotzy <devin.ragotzy@gmail.com> <dragotzy7460@mail.kvcc.edu>
Jonas Platte <jplatte+git@posteo.de> <jplatte+gitlab@posteo.de>
Jonas Zohren <git-pbkyr@jzohren.de> <gitlab-jfowl-0ux98@sh14.de>
Jonathan de Jong <jonathan@automatia.nl> <jonathandejong02@gmail.com>
June Clementine Strawberry <june@3.dog> <june@girlboss.ceo>
June Clementine Strawberry <june@3.dog> <strawberry@pupbrain.dev>
June Clementine Strawberry <june@3.dog> <strawberry@puppygock.gay>
Olivia Lee <olivia@computer.surgery> <benjamin@computer.surgery>
Rudi Floren <rudi.floren@gmail.com> <rudi.floren@googlemail.com>
Tamara Schmitz <tamara.zoe.schmitz@posteo.de> <15906939+tamara-schmitz@users.noreply.github.com>
Timo Kösters <timo@koesters.xyz>
x4u <xi.zhu@protonmail.ch> <14617923-x4u@users.noreply.gitlab.com>
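This `.mailmap` folds each contributor's alternate commit emails into one canonical identity. A quick way to see the effect, using plain git:

```sh
# With the .mailmap in place, shortlog groups all of an author's
# historical emails under the canonical name and address:
git shortlog --summary --numbered --email --all
# e.g. commits made as <strawberry@puppygock.gay> or <june@girlboss.ceo>
# are now counted under "June Clementine Strawberry <june@3.dog>"
```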

.vscode/settings.json vendored Normal file

@@ -0,0 +1,11 @@
{
"cSpell.words": [
"Forgejo",
"appservice",
"appservices",
"conduwuit",
"continuwuity",
"homeserver",
"homeservers"
]
}

Cargo.lock generated

@@ -118,7 +118,7 @@ checksum = "5f093eed78becd229346bf859eec0aa4dd7ddde0757287b2b4107a1f09c80002"
 [[package]]
 name = "async-channel"
 version = "2.3.1"
-source = "git+https://github.com/girlbossceo/async-channel?rev=92e5e74063bf2a3b10414bcc8a0d68b235644280#92e5e74063bf2a3b10414bcc8a0d68b235644280"
+source = "git+https://forgejo.ellis.link/continuwuation/async-channel?rev=92e5e74063bf2a3b10414bcc8a0d68b235644280#92e5e74063bf2a3b10414bcc8a0d68b235644280"
 dependencies = [
  "concurrent-queue",
  "event-listener-strategy",
@@ -784,7 +784,6 @@ dependencies = [
  "base64 0.22.1",
  "bytes",
  "conduwuit_core",
- "conduwuit_database",
  "conduwuit_service",
  "const-str",
  "futures",
@@ -1047,7 +1046,7 @@ checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
 [[package]]
 name = "core_affinity"
 version = "0.8.1"
-source = "git+https://github.com/girlbossceo/core_affinity_rs?rev=9c8e51510c35077df888ee72a36b4b05637147da#9c8e51510c35077df888ee72a36b4b05637147da"
+source = "git+https://forgejo.ellis.link/continuwuation/core_affinity_rs?rev=9c8e51510c35077df888ee72a36b4b05637147da#9c8e51510c35077df888ee72a36b4b05637147da"
 dependencies = [
  "libc",
  "num_cpus",
@@ -1379,7 +1378,7 @@ dependencies = [
 [[package]]
 name = "event-listener"
 version = "5.3.1"
-source = "git+https://github.com/girlbossceo/event-listener?rev=fe4aebeeaae435af60087ddd56b573a2e0be671d#fe4aebeeaae435af60087ddd56b573a2e0be671d"
+source = "git+https://forgejo.ellis.link/continuwuation/event-listener?rev=fe4aebeeaae435af60087ddd56b573a2e0be671d#fe4aebeeaae435af60087ddd56b573a2e0be671d"
 dependencies = [
  "concurrent-queue",
  "parking",
@@ -2030,7 +2029,7 @@ dependencies = [
 [[package]]
 name = "hyper-util"
 version = "0.1.11"
-source = "git+https://github.com/girlbossceo/hyper-util?rev=e4ae7628fe4fcdacef9788c4c8415317a4489941#e4ae7628fe4fcdacef9788c4c8415317a4489941"
+source = "git+https://forgejo.ellis.link/continuwuation/hyper-util?rev=e4ae7628fe4fcdacef9788c4c8415317a4489941#e4ae7628fe4fcdacef9788c4c8415317a4489941"
 dependencies = [
  "bytes",
  "futures-channel",
@@ -3625,7 +3624,7 @@ dependencies = [
 [[package]]
 name = "resolv-conf"
 version = "0.7.1"
-source = "git+https://github.com/girlbossceo/resolv-conf?rev=200e958941d522a70c5877e3d846f55b5586c68d#200e958941d522a70c5877e3d846f55b5586c68d"
+source = "git+https://forgejo.ellis.link/continuwuation/resolv-conf?rev=200e958941d522a70c5877e3d846f55b5586c68d#200e958941d522a70c5877e3d846f55b5586c68d"
 dependencies = [
  "hostname",
 ]
@@ -3653,7 +3652,7 @@ dependencies = [
 [[package]]
 name = "ruma"
 version = "0.10.1"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "assign",
  "js_int",
@@ -3673,7 +3672,7 @@ dependencies = [
 [[package]]
 name = "ruma-appservice-api"
 version = "0.10.0"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "js_int",
  "ruma-common",
@@ -3685,7 +3684,7 @@ dependencies = [
 [[package]]
 name = "ruma-client-api"
 version = "0.18.0"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "as_variant",
  "assign",
@@ -3708,7 +3707,7 @@ dependencies = [
 [[package]]
 name = "ruma-common"
 version = "0.13.0"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "as_variant",
  "base64 0.22.1",
@@ -3740,7 +3739,7 @@ dependencies = [
 [[package]]
 name = "ruma-events"
 version = "0.28.1"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "as_variant",
  "indexmap 2.8.0",
@@ -3765,7 +3764,7 @@ dependencies = [
 [[package]]
 name = "ruma-federation-api"
 version = "0.9.0"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "bytes",
  "headers",
@@ -3787,7 +3786,7 @@ dependencies = [
 [[package]]
 name = "ruma-identifiers-validation"
 version = "0.9.5"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "js_int",
  "thiserror 2.0.12",
@@ -3796,7 +3795,7 @@ dependencies = [
 [[package]]
 name = "ruma-identity-service-api"
 version = "0.9.0"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "js_int",
  "ruma-common",
@@ -3806,7 +3805,7 @@ dependencies = [
 [[package]]
 name = "ruma-macros"
 version = "0.13.0"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "cfg-if",
  "proc-macro-crate",
@@ -3821,7 +3820,7 @@ dependencies = [
 [[package]]
 name = "ruma-push-gateway-api"
 version = "0.9.0"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "js_int",
  "ruma-common",
@@ -3833,7 +3832,7 @@ dependencies = [
 [[package]]
 name = "ruma-signatures"
 version = "0.15.0"
-source = "git+https://github.com/girlbossceo/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
+source = "git+https://forgejo.ellis.link/continuwuation/ruwuma?rev=920148dca1076454ca0ca5d43b5ce1aa708381d4#920148dca1076454ca0ca5d43b5ce1aa708381d4"
 dependencies = [
  "base64 0.22.1",
  "ed25519-dalek",
@@ -3849,7 +3848,7 @@ dependencies = [
 [[package]]
 name = "rust-librocksdb-sys"
 version = "0.33.0+9.11.1"
-source = "git+https://github.com/girlbossceo/rust-rocksdb-zaidoon1?rev=1c267e0bf0cc7b7702e9a329deccd89de79ef4c3#1c267e0bf0cc7b7702e9a329deccd89de79ef4c3"
+source = "git+https://forgejo.ellis.link/continuwuation/rust-rocksdb-zaidoon1?rev=fc9a99ac54a54208f90fdcba33ae6ee8bc3531dd#fc9a99ac54a54208f90fdcba33ae6ee8bc3531dd"
 dependencies = [
  "bindgen 0.71.1",
  "bzip2-sys",
@@ -3866,7 +3865,7 @@ dependencies = [
 [[package]]
 name = "rust-rocksdb"
 version = "0.37.0"
-source = "git+https://github.com/girlbossceo/rust-rocksdb-zaidoon1?rev=1c267e0bf0cc7b7702e9a329deccd89de79ef4c3#1c267e0bf0cc7b7702e9a329deccd89de79ef4c3"
+source = "git+https://forgejo.ellis.link/continuwuation/rust-rocksdb-zaidoon1?rev=fc9a99ac54a54208f90fdcba33ae6ee8bc3531dd#fc9a99ac54a54208f90fdcba33ae6ee8bc3531dd"
 dependencies = [
  "libc",
  "rust-librocksdb-sys",
@@ -3979,7 +3978,7 @@ checksum = "eded382c5f5f786b989652c49544c4877d9f015cc22e145a5ea8ea66c2921cd2"
 [[package]]
 name = "rustyline-async"
 version = "0.4.3"
-source = "git+https://github.com/girlbossceo/rustyline-async?rev=deaeb0694e2083f53d363b648da06e10fc13900c#deaeb0694e2083f53d363b648da06e10fc13900c"
+source = "git+https://forgejo.ellis.link/continuwuation/rustyline-async?rev=deaeb0694e2083f53d363b648da06e10fc13900c#deaeb0694e2083f53d363b648da06e10fc13900c"
 dependencies = [
  "crossterm",
  "futures-channel",
@@ -4675,7 +4674,7 @@ dependencies = [
 [[package]]
 name = "tikv-jemalloc-ctl"
 version = "0.6.0"
-source = "git+https://github.com/girlbossceo/jemallocator?rev=82af58d6a13ddd5dcdc7d4e91eae3b63292995b8#82af58d6a13ddd5dcdc7d4e91eae3b63292995b8"
+source = "git+https://forgejo.ellis.link/continuwuation/jemallocator?rev=82af58d6a13ddd5dcdc7d4e91eae3b63292995b8#82af58d6a13ddd5dcdc7d4e91eae3b63292995b8"
 dependencies = [
  "libc",
  "paste",
@@ -4685,7 +4684,7 @@ dependencies = [
 [[package]]
 name = "tikv-jemalloc-sys"
 version = "0.6.0+5.3.0-1-ge13ca993e8ccb9ba9847cc330696e02839f328f7"
-source = "git+https://github.com/girlbossceo/jemallocator?rev=82af58d6a13ddd5dcdc7d4e91eae3b63292995b8#82af58d6a13ddd5dcdc7d4e91eae3b63292995b8"
+source = "git+https://forgejo.ellis.link/continuwuation/jemallocator?rev=82af58d6a13ddd5dcdc7d4e91eae3b63292995b8#82af58d6a13ddd5dcdc7d4e91eae3b63292995b8"
 dependencies = [
  "cc",
  "libc",
@@ -4694,7 +4693,7 @@ dependencies = [
 [[package]]
 name = "tikv-jemallocator"
 version = "0.6.0"
-source = "git+https://github.com/girlbossceo/jemallocator?rev=82af58d6a13ddd5dcdc7d4e91eae3b63292995b8#82af58d6a13ddd5dcdc7d4e91eae3b63292995b8"
+source = "git+https://forgejo.ellis.link/continuwuation/jemallocator?rev=82af58d6a13ddd5dcdc7d4e91eae3b63292995b8#82af58d6a13ddd5dcdc7d4e91eae3b63292995b8"
 dependencies = [
  "libc",
  "tikv-jemalloc-sys",
@@ -4980,7 +4979,7 @@ checksum = "8df9b6e13f2d32c91b9bd719c00d1958837bc7dec474d94952798cc8e69eeec3"
 [[package]]
 name = "tracing"
 version = "0.1.41"
-source = "git+https://github.com/girlbossceo/tracing?rev=1e64095a8051a1adf0d1faa307f9f030889ec2aa#1e64095a8051a1adf0d1faa307f9f030889ec2aa"
+source = "git+https://forgejo.ellis.link/continuwuation/tracing?rev=1e64095a8051a1adf0d1faa307f9f030889ec2aa#1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 dependencies = [
  "pin-project-lite",
  "tracing-attributes",
@@ -4990,7 +4989,7 @@ dependencies = [
 [[package]]
 name = "tracing-attributes"
 version = "0.1.28"
-source = "git+https://github.com/girlbossceo/tracing?rev=1e64095a8051a1adf0d1faa307f9f030889ec2aa#1e64095a8051a1adf0d1faa307f9f030889ec2aa"
+source = "git+https://forgejo.ellis.link/continuwuation/tracing?rev=1e64095a8051a1adf0d1faa307f9f030889ec2aa#1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -5000,7 +4999,7 @@ dependencies = [
 [[package]]
 name = "tracing-core"
 version = "0.1.33"
-source = "git+https://github.com/girlbossceo/tracing?rev=1e64095a8051a1adf0d1faa307f9f030889ec2aa#1e64095a8051a1adf0d1faa307f9f030889ec2aa"
+source = "git+https://forgejo.ellis.link/continuwuation/tracing?rev=1e64095a8051a1adf0d1faa307f9f030889ec2aa#1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 dependencies = [
  "once_cell",
  "valuable",
@@ -5020,7 +5019,7 @@ dependencies = [
 [[package]]
 name = "tracing-log"
 version = "0.2.0"
-source = "git+https://github.com/girlbossceo/tracing?rev=1e64095a8051a1adf0d1faa307f9f030889ec2aa#1e64095a8051a1adf0d1faa307f9f030889ec2aa"
+source = "git+https://forgejo.ellis.link/continuwuation/tracing?rev=1e64095a8051a1adf0d1faa307f9f030889ec2aa#1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 dependencies = [
  "log",
  "once_cell",
@@ -5048,7 +5047,7 @@ dependencies = [
 [[package]]
 name = "tracing-subscriber"
 version = "0.3.19"
-source = "git+https://github.com/girlbossceo/tracing?rev=1e64095a8051a1adf0d1faa307f9f030889ec2aa#1e64095a8051a1adf0d1faa307f9f030889ec2aa"
+source = "git+https://forgejo.ellis.link/continuwuation/tracing?rev=1e64095a8051a1adf0d1faa307f9f030889ec2aa#1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 dependencies = [
  "matchers",
  "nu-ansi-term",


@@ -14,12 +14,12 @@ authors = [
 categories = ["network-programming"]
 description = "a very cool Matrix chat homeserver written in Rust"
 edition = "2024"
-homepage = "https://conduwuit.puppyirl.gay/"
+homepage = "https://continuwuity.org/"
 keywords = ["chat", "matrix", "networking", "server", "uwu"]
 license = "Apache-2.0"
 # See also `rust-toolchain.toml`
 readme = "README.md"
-repository = "https://github.com/girlbossceo/conduwuit"
+repository = "https://forgejo.ellis.link/continuwuation/continuwuity"
 rust-version = "1.86.0"
 version = "0.5.0"
@@ -348,7 +348,7 @@ version = "0.1.2"
 # Used for matrix spec type definitions and helpers
 [workspace.dependencies.ruma]
-git = "https://github.com/girlbossceo/ruwuma"
+git = "https://forgejo.ellis.link/continuwuation/ruwuma"
 #branch = "conduwuit-changes"
 rev = "920148dca1076454ca0ca5d43b5ce1aa708381d4"
 features = [
@@ -388,8 +388,8 @@ features = [
 ]
 [workspace.dependencies.rust-rocksdb]
-git = "https://github.com/girlbossceo/rust-rocksdb-zaidoon1"
-rev = "1c267e0bf0cc7b7702e9a329deccd89de79ef4c3"
+git = "https://forgejo.ellis.link/continuwuation/rust-rocksdb-zaidoon1"
+rev = "fc9a99ac54a54208f90fdcba33ae6ee8bc3531dd"
 default-features = false
 features = [
 "multi-threaded-cf",
@@ -449,7 +449,7 @@ version = "0.37.0"
 # jemalloc usage
 [workspace.dependencies.tikv-jemalloc-sys]
-git = "https://github.com/girlbossceo/jemallocator"
+git = "https://forgejo.ellis.link/continuwuation/jemallocator"
 rev = "82af58d6a13ddd5dcdc7d4e91eae3b63292995b8"
 default-features = false
 features = [
@@ -457,7 +457,7 @@ features = [
 "unprefixed_malloc_on_supported_platforms",
 ]
 [workspace.dependencies.tikv-jemallocator]
-git = "https://github.com/girlbossceo/jemallocator"
+git = "https://forgejo.ellis.link/continuwuation/jemallocator"
 rev = "82af58d6a13ddd5dcdc7d4e91eae3b63292995b8"
 default-features = false
 features = [
@@ -465,7 +465,7 @@ features = [
 "unprefixed_malloc_on_supported_platforms",
 ]
 [workspace.dependencies.tikv-jemalloc-ctl]
-git = "https://github.com/girlbossceo/jemallocator"
+git = "https://forgejo.ellis.link/continuwuation/jemallocator"
 rev = "82af58d6a13ddd5dcdc7d4e91eae3b63292995b8"
 default-features = false
 features = ["use_std"]
@@ -542,49 +542,49 @@ version = "1.0.2"
 # backport of [https://github.com/tokio-rs/tracing/pull/2956] to the 0.1.x branch of tracing.
 # we can switch back to upstream if #2956 is merged and backported in the upstream repo.
-# https://github.com/girlbossceo/tracing/commit/b348dca742af641c47bc390261f60711c2af573c
+# https://forgejo.ellis.link/continuwuation/tracing/commit/b348dca742af641c47bc390261f60711c2af573c
 [patch.crates-io.tracing-subscriber]
-git = "https://github.com/girlbossceo/tracing"
+git = "https://forgejo.ellis.link/continuwuation/tracing"
 rev = "1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 [patch.crates-io.tracing]
-git = "https://github.com/girlbossceo/tracing"
+git = "https://forgejo.ellis.link/continuwuation/tracing"
 rev = "1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 [patch.crates-io.tracing-core]
-git = "https://github.com/girlbossceo/tracing"
+git = "https://forgejo.ellis.link/continuwuation/tracing"
 rev = "1e64095a8051a1adf0d1faa307f9f030889ec2aa"
 [patch.crates-io.tracing-log]
-git = "https://github.com/girlbossceo/tracing"
+git = "https://forgejo.ellis.link/continuwuation/tracing"
 rev = "1e64095a8051a1adf0d1faa307f9f030889ec2aa"
-# adds a tab completion callback: https://github.com/girlbossceo/rustyline-async/commit/de26100b0db03e419a3d8e1dd26895d170d1fe50
-# adds event for CTRL+\: https://github.com/girlbossceo/rustyline-async/commit/67d8c49aeac03a5ef4e818f663eaa94dd7bf339b
+# adds a tab completion callback: https://forgejo.ellis.link/continuwuation/rustyline-async/commit/de26100b0db03e419a3d8e1dd26895d170d1fe50
+# adds event for CTRL+\: https://forgejo.ellis.link/continuwuation/rustyline-async/commit/67d8c49aeac03a5ef4e818f663eaa94dd7bf339b
 [patch.crates-io.rustyline-async]
-git = "https://github.com/girlbossceo/rustyline-async"
+git = "https://forgejo.ellis.link/continuwuation/rustyline-async"
 rev = "deaeb0694e2083f53d363b648da06e10fc13900c"
 # adds LIFO queue scheduling; this should be updated with PR progress.
 [patch.crates-io.event-listener]
-git = "https://github.com/girlbossceo/event-listener"
+git = "https://forgejo.ellis.link/continuwuation/event-listener"
 rev = "fe4aebeeaae435af60087ddd56b573a2e0be671d"
 [patch.crates-io.async-channel]
-git = "https://github.com/girlbossceo/async-channel"
+git = "https://forgejo.ellis.link/continuwuation/async-channel"
 rev = "92e5e74063bf2a3b10414bcc8a0d68b235644280"
 # adds affinity masks for selecting more than one core at a time
 [patch.crates-io.core_affinity]
-git = "https://github.com/girlbossceo/core_affinity_rs"
+git = "https://forgejo.ellis.link/continuwuation/core_affinity_rs"
 rev = "9c8e51510c35077df888ee72a36b4b05637147da"
 # reverts hyperium#148 conflicting with our delicate federation resolver hooks
 [patch.crates-io.hyper-util]
-git = "https://github.com/girlbossceo/hyper-util"
+git = "https://forgejo.ellis.link/continuwuation/hyper-util"
 rev = "e4ae7628fe4fcdacef9788c4c8415317a4489941"
 # allows no-aaaa option in resolv.conf
 # bumps rust edition and toolchain to 1.86.0 and 2024
 # use sat_add on line number errors
 [patch.crates-io.resolv-conf]
-git = "https://github.com/girlbossceo/resolv-conf"
+git = "https://forgejo.ellis.link/continuwuation/resolv-conf"
 rev = "200e958941d522a70c5877e3d846f55b5586c68d"
 #
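A hedged way to check that these `[patch.crates-io]` overrides actually took effect after a build: every patched crate should resolve to a `forgejo.ellis.link` source in the lockfile.

```sh
# A zero count here would mean the patches were not applied.
grep -c 'forgejo.ellis.link' Cargo.lock
# Inspect a single patched dependency's resolution in the tree:
cargo tree --invert tracing
```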

README.md

@@ -1,178 +1,114 @@
-# conduwuit
-[![conduwuit main room](https://img.shields.io/matrix/conduwuit%3Apuppygock.gay?server_fqdn=matrix.transfem.dev&style=flat&logo=matrix&logoColor=%23f5b3ff&label=%23conduwuit%3Apuppygock.gay&color=%23f652ff)](https://matrix.to/#/#conduwuit:puppygock.gay) [![conduwuit space](https://img.shields.io/matrix/conduwuit-space%3Apuppygock.gay?server_fqdn=matrix.transfem.dev&style=flat&logo=matrix&logoColor=%23f5b3ff&label=%23conduwuit-space%3Apuppygock.gay&color=%23f652ff)](https://matrix.to/#/#conduwuit-space:puppygock.gay)
-[![CI and Artifacts](https://github.com/girlbossceo/conduwuit/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/girlbossceo/conduwuit/actions/workflows/ci.yml)
-![GitHub Repo stars](https://img.shields.io/github/stars/girlbossceo/conduwuit?style=flat&color=%23fcba03&link=https%3A%2F%2Fgithub.com%2Fgirlbossceo%2Fconduwuit) ![GitHub commit activity](https://img.shields.io/github/commit-activity/m/girlbossceo/conduwuit?style=flat&color=%2303fcb1&link=https%3A%2F%2Fgithub.com%2Fgirlbossceo%2Fconduwuit%2Fpulse%2Fmonthly) ![GitHub Created At](https://img.shields.io/github/created-at/girlbossceo/conduwuit) ![GitHub Sponsors](https://img.shields.io/github/sponsors/girlbossceo?color=%23fc03ba&link=https%3A%2F%2Fgithub.com%2Fsponsors%2Fgirlbossceo) ![GitHub License](https://img.shields.io/github/license/girlbossceo/conduwuit)
-![Docker Image Size (tag)](https://img.shields.io/docker/image-size/girlbossceo/conduwuit/latest?label=image%20size%20(latest)&link=https%3A%2F%2Fhub.docker.com%2Frepository%2Fdocker%2Fgirlbossceo%2Fconduwuit%2Ftags%3Fname%3Dlatest) ![Docker Image Size (tag)](https://img.shields.io/docker/image-size/girlbossceo/conduwuit/main?label=image%20size%20(main)&link=https%3A%2F%2Fhub.docker.com%2Frepository%2Fdocker%2Fgirlbossceo%2Fconduwuit%2Ftags%3Fname%3Dmain)
+# continuwuity
 <!-- ANCHOR: catchphrase -->
-### a very cool [Matrix](https://matrix.org/) chat homeserver written in Rust
+## A community-driven [Matrix](https://matrix.org/) homeserver in Rust
 <!-- ANCHOR_END: catchphrase -->
-Visit the [conduwuit documentation](https://conduwuit.puppyirl.gay/) for more
-information and how to deploy/setup conduwuit.
+[continuwuity] is a Matrix homeserver written in Rust.
+It's a community continuation of the [conduwuit](https://github.com/girlbossceo/conduwuit) homeserver.
 <!-- ANCHOR: body -->
-#### What is Matrix?
+### Why does this exist?
+The original conduwuit project has been archived and is no longer maintained. Rather than letting this Rust-based Matrix homeserver disappear, a group of community contributors have forked the project to continue its development, fix outstanding issues, and add new features.
+We aim to provide a stable, well-maintained alternative for current Conduit users and welcome newcomers seeking a lightweight, efficient Matrix homeserver.
+### Who are we?
+We are a group of Matrix enthusiasts, developers and system administrators who have used conduwuit and believe in its potential. Our team includes both previous
+contributors to the original project and new developers who want to help maintain and improve this important piece of Matrix infrastructure.
+We operate as an open community project, welcoming contributions from anyone interested in improving continuwuity.
+### What is Matrix?
 [Matrix](https://matrix.org) is an open, federated, and extensible network for
-decentralised communication. Users from any Matrix homeserver can chat with users from all
+decentralized communication. Users from any Matrix homeserver can chat with users from all
 other homeservers over federation. Matrix is designed to be extensible and built on top of.
 You can even use bridges such as Matrix Appservices to communicate with users outside of Matrix, like a community on Discord.
-#### What is the goal?
-A high-performance, efficient, low-cost, and featureful Matrix homeserver that's
-easy to set up and just works with minimal configuration needed.
-#### Can I try it out?
-An official conduwuit server ran by me is available at transfem.dev
-([element.transfem.dev](https://element.transfem.dev) /
-[cinny.transfem.dev](https://cinny.transfem.dev))
-transfem.dev is a public homeserver that can be used, it is not a "test only
-homeserver". This means there are rules, so please read the rules:
-[https://transfem.dev/homeserver_rules.txt](https://transfem.dev/homeserver_rules.txt)
-transfem.dev is also listed at
-[servers.joinmatrix.org](https://servers.joinmatrix.org/), which is a list of
-popular public Matrix homeservers, including some others that run conduwuit.
+### What are the project's goals?
+Continuwuity aims to:
+- Maintain a stable, reliable Matrix homeserver implementation in Rust
+- Improve compatibility and specification compliance with the Matrix protocol
+- Fix bugs and performance issues from the original conduwuit
+- Add missing features needed by homeserver administrators
+- Provide comprehensive documentation and easy deployment options
+- Create a sustainable development model for long-term maintenance
+- Keep a lightweight, efficient codebase that can run on modest hardware
+### Can I try it out?
+Not right now. We've still got work to do!
-#### What is the current status?
-conduwuit is technically a hard fork of [Conduit](https://conduit.rs/), which is in beta.
-The beta status initially was inherited from Conduit, however the huge amount of
-codebase divergance, changes, fixes, and improvements have effectively made this
-beta status not entirely applicable to us anymore.
-conduwuit is very stable based on our rapidly growing userbase, has lots of features that users
-expect, and very usable as a daily driver for small, medium, and upper-end medium sized homeservers.
-A lot of critical stability and performance issues have been fixed, and a lot of
-necessary groundwork has finished; making this project way better than it was
-back in the start at ~early 2024.
-#### Where is the differences page?
-conduwuit historically had a "differences" page that listed each and every single
-different thing about conduwuit from Conduit, as a way to promote and advertise
-conduwuit by showing significant amounts of work done. While this was feasible to
-maintain back when the project was new in early-2024, this became impossible
-very quickly and has unfortunately became heavily outdated, missing tons of things, etc.
-It's difficult to list out what we do differently, what are our notable features, etc
-when there's so many things and features and bug fixes and performance optimisations,
-the list goes on. We simply recommend folks to just try out conduwuit, or ask us
-what features you are looking for and if they're implemented in conduwuit.
-#### How is conduwuit funded? Is conduwuit sustainable?
-conduwuit has no external funding. This is made possible purely in my freetime with
-contributors, also in their free time, and only by user-curated donations.
-conduwuit has existed since around November 2023, but [only became more publicly known
-in March/April 2024](https://matrix.org/blog/2024/04/26/this-week-in-matrix-2024-04-26/#conduwuit-website)
-and we have no plans in stopping or slowing down any time soon!
-#### Can I migrate or switch from Conduit?
-conduwuit had drop-in migration/replacement support for Conduit for about 12 months before
-bugs somewhere along the line broke it. Maintaining this has been difficult and
-the majority of Conduit users have already migrated, additionally debugging Conduit
-is not one of our interests, and so Conduit migration no longer works. We also
-feel that 12 months has been plenty of time for people to seamlessly migrate.
-If you are a Conduit user looking to migrate, you will have to wipe and reset
-your database. We may fix seamless migration support at some point, but it's not an interest
-from us.
-#### Can I migrate from Synapse or Dendrite?
-Currently there is no known way to seamlessly migrate all user data from the old
-homeserver to conduwuit. However it is perfectly acceptable to replace the old
-homeserver software with conduwuit using the same server name and there will not
-be any issues with federation.
-There is an interest in developing a built-in seamless user data migration
-method into conduwuit, however there is no concrete ETA or timeline for this.
+### What are we working on?
+We're working our way through all of the issues in the [Forgejo project](https://forgejo.ellis.link/continuwuation/continuwuity/issues).
+- [Replacing old conduwuit links with working continuwuity links](https://forgejo.ellis.link/continuwuation/continuwuity/issues/742)
+- [Getting CI and docs deployment working on the new Forgejo project](https://forgejo.ellis.link/continuwuation/continuwuity/issues/740)
+- [Packaging & availability in more places](https://forgejo.ellis.link/continuwuation/continuwuity/issues/747)
+- [Appservices bugs & features](https://forgejo.ellis.link/continuwuation/continuwuity/issues?q=&type=all&state=open&labels=178&milestone=0&assignee=0&poster=0)
+- [Improving compatibility and spec compliance](https://forgejo.ellis.link/continuwuation/continuwuity/issues?labels=119)
+- Automated testing
+- [Admin API](https://forgejo.ellis.link/continuwuation/continuwuity/issues/748)
+- [Policy-list controlled moderation](https://forgejo.ellis.link/continuwuation/continuwuity/issues/750)
+### Can I migrate my data from x?
+- Conduwuit: Yes
+- Conduit: No, database is now incompatible
+- Grapevine: No, database is now incompatible
+- Dendrite: No
+- Synapse: No
+We haven't written up a guide on migrating from incompatible homeservers yet. Reach out to us if you need to do this!
 <!-- ANCHOR_END: body -->
+## Contribution
+### Development flow
+- Features / changes must be developed in a separate branch
+- For each change, create a descriptive PR
+- Your code will be reviewed by one or more of the continuwuity developers
+- The branch will be deployed live on multiple testers' matrix servers to shake out bugs
+- Once all testers and reviewers have agreed, the PR will be merged to the main branch
+- The main branch will have nightly builds deployed to users on the cutting edge
+- Every week or two, a new release is cut.
+The main branch is always green!
+### Policy on pulling from other forks
+We welcome contributions from other forks of conduwuit, subject to our review process.
+When incorporating code from other forks:
+- All external contributions must go through our standard PR process
+- Code must meet our quality standards and pass tests
+- Code changes will require testing on multiple test servers before merging
+- Attribution will be given to original authors and forks
+- We prioritize stability and compatibility when evaluating external contributions
+- Features that align with our project goals will be given priority consideration
 <!-- ANCHOR: footer -->
 #### Contact
-[`#conduwuit:puppygock.gay`](https://matrix.to/#/#conduwuit:puppygock.gay)
-is the official project Matrix room. You can get support here, ask questions or
-concerns, get assistance setting up conduwuit, etc.
-This room should stay relevant and focused on conduwuit. An offtopic general
-chatter room can be found in the room topic there as well.
-Please keep the issue trackers focused on *actual* bug reports and enhancement requests.
-General support is extremely difficult to be offered over an issue tracker, and
-simple questions should be asked directly in an interactive platform like our
-Matrix room above as they can turn into a relevant discussion and/or may not be
-simple to answer. If you're not sure, just ask in the Matrix room.
-If you have a bug or feature to request: [Open an issue on GitHub](https://github.com/girlbossceo/conduwuit/issues/new)
-If you need to contact the primary maintainer, my contact methods are on my website: https://girlboss.ceo
-#### Donate
-conduwuit development is purely made possible by myself and contributors. I do
-not get paid to work on this, and I work on it in my free time. Donations are
-heavily appreciated! 💜🥺
-- Liberapay: <https://liberapay.com/girlbossceo>
-- GitHub Sponsors: <https://github.com/sponsors/girlbossceo>
-- Ko-fi: <https://ko-fi.com/puppygock>
-I do not and will not accept cryptocurrency donations, including things related.
-Note that donations will NOT guarantee you or give you any kind of tangible product,
-feature prioritisation, etc. By donating, you are agreeing that conduwuit is NOT
-going to provide you any goods or services as part of your donation, and this
-donation is purely a generous donation. We will not provide things like paid
-personal/direct support, feature request priority, merchandise, etc.
-#### Logo
-Original repo and Matrix room picture was from bran (<3). Current banner image
-and logo is directly from [this cohost
-post](https://web.archive.org/web/20241126004041/https://cohost.org/RatBaby/post/1028290-finally-a-flag-for).
-An SVG logo made by [@nktnet1](https://github.com/nktnet1) is available here: <https://github.com/girlbossceo/conduwuit/blob/main/docs/assets/>
-#### Is it conduwuit or Conduwuit?
-Both, but I prefer conduwuit.
-#### Mirrors of conduwuit
-If GitHub is unavailable in your country, or has poor connectivity, conduwuit's
-source code is mirrored onto the following additional platforms I maintain:
-- GitHub: <https://github.com/girlbossceo/conduwuit>
-- GitLab: <https://gitlab.com/conduwuit/conduwuit>
-- git.girlcock.ceo: <https://git.girlcock.ceo/strawberry/conduwuit>
-- git.gay: <https://git.gay/june/conduwuit>
-- mau.dev: <https://mau.dev/june/conduwuit>
-- Codeberg: <https://codeberg.org/arf/conduwuit>
-- sourcehut: <https://git.sr.ht/~girlbossceo/conduwuit>
+<!-- TODO: contact details -->
 <!-- ANCHOR_END: footer -->
+[continuwuity]: https://forgejo.ellis.link/continuwuation/continuwuity


@@ -1,8 +1,8 @@
 [book]
-title = "conduwuit 🏳️‍⚧️ 💜 🦴"
-description = "conduwuit, which is a well-maintained fork of Conduit, is a simple, fast and reliable chat server for the Matrix protocol"
+title = "continuwuity"
+description = "continuwuity is a community continuation of the conduwuit Matrix homeserver, written in Rust."
 language = "en"
-authors = ["strawberry (June)"]
+authors = ["The continuwuity Community"]
 text-direction = "ltr"
 multilingual = false
 src = "docs"
@@ -16,12 +16,9 @@ extra-watch-dirs = ["debian", "docs"]
 edition = "2024"
 [output.html]
-git-repository-url = "https://github.com/girlbossceo/conduwuit"
-edit-url-template = "https://github.com/girlbossceo/conduwuit/edit/main/{path}"
-git-repository-icon = "fa-github-square"
+edit-url-template = "https://forgejo.ellis.link/continuwuation/continuwuity/src/branch/main/{path}"
+git-repository-url = "https://forgejo.ellis.link/continuwuation/continuwuity"
+git-repository-icon = "fa-git-alt"
-[output.html.redirect]
-"/differences.html" = "https://conduwuit.puppyirl.gay/#where-is-the-differences-page"
 [output.html.search]
 limit-results = 15


@@ -112,16 +112,6 @@
 #
 #new_user_displayname_suffix = "🏳️‍⚧️"
-# If enabled, conduwuit will send a simple GET request periodically to
-# `https://pupbrain.dev/check-for-updates/stable` for any new
-# announcements made. Despite the name, this is not an update check
-# endpoint, it is simply an announcement check endpoint.
-#
-# This is disabled by default as this is rarely used except for security
-# updates or major updates.
-#
-#allow_check_for_updates = false
 # Set this to any float value to multiply conduwuit's in-memory LRU caches
 # with such as "auth_chain_cache_capacity".
 #
@@ -1428,7 +1418,7 @@
 # Sentry reporting URL, if a custom one is desired.
 #
-#sentry_endpoint = "https://fe2eb4536aa04949e28eff3128d64757@o4506996327251968.ingest.us.sentry.io/4506996334657536"
+#sentry_endpoint = ""
 # Report your conduwuit server_name in Sentry.io crash reports and
 # metrics.

docker/Dockerfile Normal file

@@ -0,0 +1,216 @@
ARG RUST_VERSION=1
FROM --platform=$BUILDPLATFORM docker.io/tonistiigi/xx AS xx
FROM --platform=$BUILDPLATFORM rust:${RUST_VERSION}-slim-bookworm AS base
FROM --platform=$BUILDPLATFORM rust:${RUST_VERSION}-slim-bookworm AS toolchain
# Prevent deletion of apt cache
RUN rm -f /etc/apt/apt.conf.d/docker-clean
# Match Rustc version as close as possible
# rustc -vV
ARG LLVM_VERSION=19
# ENV RUSTUP_TOOLCHAIN=${RUST_VERSION}
# Install repo tools
# Line one: compiler tools
# Line two: curl, for downloading binaries
# Line three: for xx-verify
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
clang-${LLVM_VERSION} lld-${LLVM_VERSION} pkg-config make jq \
curl git \
file
# Create symlinks for LLVM tools
RUN <<EOF
# clang
ln -s /usr/bin/clang-${LLVM_VERSION} /usr/bin/clang
ln -s "/usr/bin/clang++-${LLVM_VERSION}" "/usr/bin/clang++"
# lld
ln -s /usr/bin/ld64.lld-${LLVM_VERSION} /usr/bin/ld64.lld
ln -s /usr/bin/ld.lld-${LLVM_VERSION} /usr/bin/ld.lld
ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/lld
ln -s /usr/bin/lld-link-${LLVM_VERSION} /usr/bin/lld-link
ln -s /usr/bin/wasm-ld-${LLVM_VERSION} /usr/bin/wasm-ld
EOF
# Developer tool versions
# renovate: datasource=github-releases depName=cargo-bins/cargo-binstall
ENV BINSTALL_VERSION=1.12.3
# renovate: datasource=github-releases depName=psastras/sbom-rs
ENV CARGO_SBOM_VERSION=0.9.1
# renovate: datasource=crate depName=lddtree
ENV LDDTREE_VERSION=0.3.7
# renovate: datasource=crate depName=timelord-cli
ENV TIMELORD_VERSION=3.0.1
# Install unpackaged tools
RUN <<EOF
curl --retry 5 -L --proto '=https' --tlsv1.2 -sSf https://raw.githubusercontent.com/cargo-bins/cargo-binstall/main/install-from-binstall-release.sh | bash
cargo binstall --no-confirm cargo-sbom --version $CARGO_SBOM_VERSION
cargo binstall --no-confirm lddtree --version $LDDTREE_VERSION
cargo binstall --no-confirm timelord-cli --version $TIMELORD_VERSION
EOF
# Set up xx (cross-compilation scripts)
COPY --from=xx / /
ARG TARGETPLATFORM
# Install libraries linked by the binary
# xx-* are xx-specific meta-packages
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
xx-apt-get install -y \
xx-c-essentials xx-cxx-essentials pkg-config \
liburing-dev
# Set up Rust toolchain
WORKDIR /app
COPY ./rust-toolchain.toml .
RUN rustc --version \
&& rustup target add $(xx-cargo --print-target-triple)
# Build binary
# We disable incremental compilation to save disk space, as it only produces a minimal speedup for this case.
RUN echo "CARGO_INCREMENTAL=0" >> /etc/environment
# Configure pkg-config
RUN <<EOF
echo "PKG_CONFIG_LIBDIR=/usr/lib/$(xx-info)/pkgconfig" >> /etc/environment
echo "PKG_CONFIG=/usr/bin/$(xx-info)-pkg-config" >> /etc/environment
echo "PKG_CONFIG_ALLOW_CROSS=true" >> /etc/environment
EOF
# Configure cc to use clang version
RUN <<EOF
echo "CC=clang" >> /etc/environment
echo "CXX=clang++" >> /etc/environment
EOF
# Cross-language LTO
RUN <<EOF
echo "CFLAGS=-flto" >> /etc/environment
echo "CXXFLAGS=-flto" >> /etc/environment
# Linker is set to target-compatible clang by xx
echo "RUSTFLAGS='-Clinker-plugin-lto -Clink-arg=-fuse-ld=lld'" >> /etc/environment
EOF
# Apply CPU-specific optimizations if TARGET_CPU is provided
ARG TARGET_CPU=
RUN <<EOF
set -o allexport
. /etc/environment
if [ -n "${TARGET_CPU}" ]; then
echo "CFLAGS='${CFLAGS} -march=${TARGET_CPU}'" >> /etc/environment
echo "CXXFLAGS='${CXXFLAGS} -march=${TARGET_CPU}'" >> /etc/environment
echo "RUSTFLAGS='${RUSTFLAGS} -C target-cpu=${TARGET_CPU}'" >> /etc/environment
fi
EOF
# Prepare output directories
RUN mkdir /out
FROM toolchain AS builder
# Conduwuit version info
ARG COMMIT_SHA=
ARG CONDUWUIT_VERSION_EXTRA=
ENV CONDUWUIT_VERSION_EXTRA=$CONDUWUIT_VERSION_EXTRA
RUN <<EOF
if [ -z "${CONDUWUIT_VERSION_EXTRA}" ]; then
echo "CONDUWUIT_VERSION_EXTRA='$(set -e; git rev-parse --short ${COMMIT_SHA:-HEAD} || echo unknown revision)'" >> /etc/environment
fi
EOF
ARG TARGETPLATFORM
# Verify environment configuration
RUN cat /etc/environment
RUN xx-cargo --print-target-triple
# Get source
COPY . .
# Timelord sync
RUN --mount=type=cache,target=/timelord/ \
timelord sync --source-dir . --cache-dir /timelord/
# Build the binary
RUN --mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/usr/local/cargo/git/db \
--mount=type=cache,target=/app/target \
bash <<'EOF'
set -o allexport
. /etc/environment
TARGET_DIR=($(cargo metadata --no-deps --format-version 1 | \
jq -r ".target_directory"))
mkdir /out/sbin
PACKAGE=conduwuit
xx-cargo build --locked --release \
-p $PACKAGE;
BINARIES=($(cargo metadata --no-deps --format-version 1 | \
jq -r ".packages[] | select(.name == \"$PACKAGE\") | .targets[] | select( .kind | map(. == \"bin\") | any ) | .name"))
for BINARY in "${BINARIES[@]}"; do
echo $BINARY
xx-verify $TARGET_DIR/$(xx-cargo --print-target-triple)/release/$BINARY
cp $TARGET_DIR/$(xx-cargo --print-target-triple)/release/$BINARY /out/sbin/$BINARY
done
EOF
# Generate Software Bill of Materials (SBOM)
RUN --mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/usr/local/cargo/git/db \
bash <<'EOF'
mkdir /out/sbom
typeset -A PACKAGES
for BINARY in /out/sbin/*; do
BINARY_BASE=$(basename ${BINARY})
package=$(cargo metadata --no-deps --format-version 1 | jq -r ".packages[] | select(.targets[] | select( .kind | map(. == \"bin\") | any ) | .name == \"$BINARY_BASE\") | .name")
if [ -z "$package" ]; then
continue
fi
PACKAGES[$package]=1
done
for PACKAGE in $(echo ${!PACKAGES[@]}); do
echo $PACKAGE
cargo sbom --cargo-package $PACKAGE > /out/sbom/$PACKAGE.spdx.json
done
EOF
# Extract dynamically linked dependencies
RUN <<EOF
mkdir /out/libs
mkdir /out/libs-root
for BINARY in /out/sbin/*; do
lddtree "$BINARY" | awk '{print $(NF-0) " " $1}' | sort -u -k 1,1 | awk '{print "install", "-D", $1, (($2 ~ /^\//) ? "/out/libs-root" $2 : "/out/libs/" $2)}' | xargs -I {} sh -c {}
done
EOF
FROM scratch
WORKDIR /
# Copy root certs for tls into image
# You can also mount the certs from the host
# --volume /etc/ssl/certs:/etc/ssl/certs:ro
COPY --from=base /etc/ssl/certs /etc/ssl/certs
# Copy our build
COPY --from=builder /out/sbin/ /sbin/
# Copy SBOM
COPY --from=builder /out/sbom/ /sbom/
# Copy dynamic libraries to root
COPY --from=builder /out/libs-root/ /
COPY --from=builder /out/libs/ /usr/lib/
# Inform linker where to find libraries
ENV LD_LIBRARY_PATH=/usr/lib
# Continuwuity default port
EXPOSE 8008
CMD ["/sbin/conduwuit"]

docs/static/_headers vendored Normal file

@@ -0,0 +1,3 @@
/.well-known/matrix/*
  Access-Control-Allow-Origin: *
  Content-Type: application/json

docs/static/client vendored Normal file

@@ -0,0 +1 @@
{"m.homeserver":{"base_url": "https://matrix.continuwuity.org"},"org.matrix.msc3575.proxy":{"url": "https://matrix.continuwuity.org"}}

docs/static/server vendored Normal file

@@ -0,0 +1 @@
{"m.server":"matrix.continuwuity.org:443"}


@@ -17,12 +17,61 @@ crate-type = [
 ]
 [features]
+brotli_compression = [
+  "conduwuit-api/brotli_compression",
+  "conduwuit-core/brotli_compression",
+  "conduwuit-service/brotli_compression",
+]
+gzip_compression = [
+  "conduwuit-api/gzip_compression",
+  "conduwuit-core/gzip_compression",
+  "conduwuit-service/gzip_compression",
+]
+io_uring = [
+  "conduwuit-api/io_uring",
+  "conduwuit-database/io_uring",
+  "conduwuit-service/io_uring",
+]
+jemalloc = [
+  "conduwuit-api/jemalloc",
+  "conduwuit-core/jemalloc",
+  "conduwuit-database/jemalloc",
+  "conduwuit-service/jemalloc",
+]
+jemalloc_conf = [
+  "conduwuit-api/jemalloc_conf",
+  "conduwuit-core/jemalloc_conf",
+  "conduwuit-database/jemalloc_conf",
+  "conduwuit-service/jemalloc_conf",
+]
+jemalloc_prof = [
+  "conduwuit-api/jemalloc_prof",
+  "conduwuit-core/jemalloc_prof",
+  "conduwuit-database/jemalloc_prof",
+  "conduwuit-service/jemalloc_prof",
+]
+jemalloc_stats = [
+  "conduwuit-api/jemalloc_stats",
+  "conduwuit-core/jemalloc_stats",
+  "conduwuit-database/jemalloc_stats",
+  "conduwuit-service/jemalloc_stats",
+]
 release_max_log_level = [
+  "conduwuit-api/release_max_log_level",
+  "conduwuit-core/release_max_log_level",
+  "conduwuit-database/release_max_log_level",
+  "conduwuit-service/release_max_log_level",
   "tracing/max_level_trace",
   "tracing/release_max_level_info",
   "log/max_level_trace",
   "log/release_max_level_info",
 ]
+zstd_compression = [
+  "conduwuit-api/zstd_compression",
+  "conduwuit-core/zstd_compression",
+  "conduwuit-database/zstd_compression",
+  "conduwuit-service/zstd_compression",
+]
 [dependencies]
 clap.workspace = true
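These feature lists simply fan each flag out to the subcrates. A hedged example of enabling a subset when building the top-level package (the package name is taken from elsewhere in this diff; the default feature set is not shown here):

```sh
# Enable io_uring and jemalloc on the main binary; cargo forwards each
# flag to the conduwuit-api/-core/-database/-service crates via the lists above.
cargo build --release -p conduwuit --features io_uring,jemalloc
```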


@@ -2,7 +2,7 @@ use clap::Parser;
 use conduwuit::Result;
 use crate::{
-    appservice, appservice::AppserviceCommand, check, check::CheckCommand, command::Command,
+    appservice, appservice::AppserviceCommand, check, check::CheckCommand, context::Context,
     debug, debug::DebugCommand, federation, federation::FederationCommand, media,
     media::MediaCommand, query, query::QueryCommand, room, room::RoomCommand, server,
     server::ServerCommand, user, user::UserCommand,
@@ -49,20 +49,18 @@ pub(super) enum AdminCommand {
 }
 #[tracing::instrument(skip_all, name = "command")]
-pub(super) async fn process(command: AdminCommand, context: &Command<'_>) -> Result {
+pub(super) async fn process(command: AdminCommand, context: &Context<'_>) -> Result {
     use AdminCommand::*;
     match command {
-        | Appservices(command) => appservice::process(command, context).await?,
-        | Media(command) => media::process(command, context).await?,
-        | Users(command) => user::process(command, context).await?,
-        | Rooms(command) => room::process(command, context).await?,
-        | Federation(command) => federation::process(command, context).await?,
-        | Server(command) => server::process(command, context).await?,
-        | Debug(command) => debug::process(command, context).await?,
-        | Query(command) => query::process(command, context).await?,
-        | Check(command) => check::process(command, context).await?,
+        | Appservices(command) => appservice::process(command, context).await,
+        | Media(command) => media::process(command, context).await,
+        | Users(command) => user::process(command, context).await,
+        | Rooms(command) => room::process(command, context).await,
+        | Federation(command) => federation::process(command, context).await,
+        | Server(command) => server::process(command, context).await,
+        | Debug(command) => debug::process(command, context).await,
+        | Query(command) => query::process(command, context).await,
+        | Check(command) => check::process(command, context).await,
     }
-    Ok(())
 }


@ -1,84 +1,80 @@
use ruma::{api::appservice::Registration, events::room::message::RoomMessageEventContent}; use conduwuit::{Err, Result, checked};
use futures::{FutureExt, StreamExt, TryFutureExt};
-use crate::{Result, admin_command};
+use crate::admin_command;
 
 #[admin_command]
-pub(super) async fn register(&self) -> Result<RoomMessageEventContent> {
-    if self.body.len() < 2
-        || !self.body[0].trim().starts_with("```")
-        || self.body.last().unwrap_or(&"").trim() != "```"
+pub(super) async fn register(&self) -> Result {
+    let body = &self.body;
+    let body_len = self.body.len();
+    if body_len < 2
+        || !body[0].trim().starts_with("```")
+        || body.last().unwrap_or(&"").trim() != "```"
     {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Expected code block in command body. Add --help for details.",
-        ));
+        return Err!("Expected code block in command body. Add --help for details.");
     }
 
-    let appservice_config_body = self.body[1..self.body.len().checked_sub(1).unwrap()].join("\n");
-    let parsed_config = serde_yaml::from_str::<Registration>(&appservice_config_body);
+    let range = 1..checked!(body_len - 1)?;
+    let appservice_config_body = body[range].join("\n");
+    let parsed_config = serde_yaml::from_str(&appservice_config_body);
     match parsed_config {
+        | Err(e) => return Err!("Could not parse appservice config as YAML: {e}"),
         | Ok(registration) => match self
             .services
             .appservice
             .register_appservice(&registration, &appservice_config_body)
             .await
+            .map(|()| registration.id)
         {
-            | Ok(()) => Ok(RoomMessageEventContent::text_plain(format!(
-                "Appservice registered with ID: {}",
-                registration.id
-            ))),
-            | Err(e) => Ok(RoomMessageEventContent::text_plain(format!(
-                "Failed to register appservice: {e}"
-            ))),
+            | Err(e) => return Err!("Failed to register appservice: {e}"),
+            | Ok(id) => write!(self, "Appservice registered with ID: {id}"),
         },
-        | Err(e) => Ok(RoomMessageEventContent::text_plain(format!(
-            "Could not parse appservice config as YAML: {e}"
-        ))),
     }
+    .await
 }
 
 #[admin_command]
-pub(super) async fn unregister(
-    &self,
-    appservice_identifier: String,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn unregister(&self, appservice_identifier: String) -> Result {
     match self
         .services
         .appservice
         .unregister_appservice(&appservice_identifier)
         .await
     {
-        | Ok(()) => Ok(RoomMessageEventContent::text_plain("Appservice unregistered.")),
-        | Err(e) => Ok(RoomMessageEventContent::text_plain(format!(
-            "Failed to unregister appservice: {e}"
-        ))),
+        | Err(e) => return Err!("Failed to unregister appservice: {e}"),
+        | Ok(()) => write!(self, "Appservice unregistered."),
     }
+    .await
 }
 
 #[admin_command]
-pub(super) async fn show_appservice_config(
-    &self,
-    appservice_identifier: String,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn show_appservice_config(&self, appservice_identifier: String) -> Result {
     match self
         .services
         .appservice
         .get_registration(&appservice_identifier)
         .await
     {
+        | None => return Err!("Appservice does not exist."),
         | Some(config) => {
-            let config_str = serde_yaml::to_string(&config)
-                .expect("config should've been validated on register");
-            let output =
-                format!("Config for {appservice_identifier}:\n\n```yaml\n{config_str}\n```",);
-            Ok(RoomMessageEventContent::notice_markdown(output))
+            let config_str = serde_yaml::to_string(&config)?;
+            write!(self, "Config for {appservice_identifier}:\n\n```yaml\n{config_str}\n```")
         },
-        | None => Ok(RoomMessageEventContent::text_plain("Appservice does not exist.")),
     }
+    .await
 }
 
 #[admin_command]
-pub(super) async fn list_registered(&self) -> Result<RoomMessageEventContent> {
-    let appservices = self.services.appservice.iter_ids().await;
-    let output = format!("Appservices ({}): {}", appservices.len(), appservices.join(", "));
-    Ok(RoomMessageEventContent::text_plain(output))
+pub(super) async fn list_registered(&self) -> Result {
+    self.services
+        .appservice
+        .iter_ids()
+        .collect()
+        .map(Ok)
+        .and_then(|appservices: Vec<_>| {
+            let len = appservices.len();
+            let list = appservices.join(", ");
+            write!(self, "Appservices ({len}): {list}")
+        })
+        .await
 }
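The hunks above set the pattern for the whole changeset: admin handlers stop building a `RoomMessageEventContent` reply and instead stream text into the command context's buffered output, with failures routed through `Err!`. As a rough, self-contained sketch of that output model, using only the `futures` crate (the real `Context` in this diff also carries services, a timer, and the admin plumbing):

// Stand-in for the diff's Context: one locked, buffered output that
// every handler appends to instead of returning an event body.
use futures::{
    io::{AsyncWriteExt, BufWriter},
    lock::Mutex,
};

struct Context {
    output: Mutex<BufWriter<Vec<u8>>>,
}

impl Context {
    async fn write_str(&self, s: &str) -> std::io::Result<()> {
        // Serialize writers behind the async mutex, append, then flush
        // so the buffered bytes reach the underlying Vec.
        let mut output = self.output.lock().await;
        output.write_all(s.as_bytes()).await?;
        output.flush().await
    }
}

fn main() -> std::io::Result<()> {
    futures::executor::block_on(async {
        let ctx = Context { output: Mutex::new(BufWriter::new(Vec::new())) };
        ctx.write_str("Appservice registered with ID: demo").await
    })
}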

View file

@@ -1,15 +1,14 @@
 use conduwuit::Result;
 use conduwuit_macros::implement;
 use futures::StreamExt;
-use ruma::events::room::message::RoomMessageEventContent;
 
-use crate::Command;
+use crate::Context;
 
 /// Uses the iterator in `src/database/key_value/users.rs` to iterator over
 /// every user in our database (remote and local). Reports total count, any
 /// errors if there were any, etc
-#[implement(Command, params = "<'_>")]
-pub(super) async fn check_all_users(&self) -> Result<RoomMessageEventContent> {
+#[implement(Context, params = "<'_>")]
+pub(super) async fn check_all_users(&self) -> Result {
     let timer = tokio::time::Instant::now();
     let users = self.services.users.iter().collect::<Vec<_>>().await;
     let query_time = timer.elapsed();
@@ -18,11 +17,10 @@ pub(super) async fn check_all_users(&self) -> Result<RoomMessageEventContent> {
     let err_count = users.iter().filter(|_user| false).count();
     let ok_count = users.iter().filter(|_user| true).count();
 
-    let message = format!(
+    self.write_str(&format!(
         "Database query completed in {query_time:?}:\n\n```\nTotal entries: \
          {total:?}\nFailure/Invalid user count: {err_count:?}\nSuccess/Valid user count: \
          {ok_count:?}\n```"
-    );
-
-    Ok(RoomMessageEventContent::notice_markdown(message))
+    ))
+    .await
 }

View file

@@ -3,13 +3,13 @@ use std::{fmt, time::SystemTime};
 use conduwuit::Result;
 use conduwuit_service::Services;
 use futures::{
-    Future, FutureExt,
+    Future, FutureExt, TryFutureExt,
     io::{AsyncWriteExt, BufWriter},
     lock::Mutex,
 };
 use ruma::EventId;
 
-pub(crate) struct Command<'a> {
+pub(crate) struct Context<'a> {
     pub(crate) services: &'a Services,
     pub(crate) body: &'a [&'a str],
     pub(crate) timer: SystemTime,
@@ -17,14 +17,14 @@ pub(crate) struct Command<'a> {
     pub(crate) output: Mutex<BufWriter<Vec<u8>>>,
 }
 
-impl Command<'_> {
+impl Context<'_> {
     pub(crate) fn write_fmt(
         &self,
         arguments: fmt::Arguments<'_>,
     ) -> impl Future<Output = Result> + Send + '_ + use<'_> {
         let buf = format!("{arguments}");
-        self.output.lock().then(|mut output| async move {
-            output.write_all(buf.as_bytes()).await.map_err(Into::into)
+        self.output.lock().then(async move |mut output| {
+            output.write_all(buf.as_bytes()).map_err(Into::into).await
         })
     }
@@ -32,8 +32,8 @@ impl Command<'_> {
         &'a self,
         s: &'a str,
     ) -> impl Future<Output = Result> + Send + 'a {
-        self.output.lock().then(move |mut output| async move {
-            output.write_all(s.as_bytes()).await.map_err(Into::into)
+        self.output.lock().then(async move |mut output| {
+            output.write_all(s.as_bytes()).map_err(Into::into).await
         })
     }
 }
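Because `write_fmt` returns an unawaited future, several rewritten handlers end a `match` with a bare `.await`: every error arm returns early, every success arm evaluates to the future produced by `write!(self, ...)`, and the match expression as a whole is awaited once. A self-contained analogue of that shape, with plain `String` errors and tokio standing in for the real error type and runtime:

// Each Ok arm yields an unawaited future; the Err arm diverges via
// `return`, so the whole match evaluates to one future, awaited once.
async fn reply(msg: String) -> Result<(), String> {
    println!("{msg}");
    Ok(())
}

async fn handle(input: Result<i32, String>) -> Result<(), String> {
    match input {
        Err(e) => return Err(format!("parse failed: {e}")),
        Ok(v) => reply(format!("value: {v}")),
    }
    .await
}

#[tokio::main]
async fn main() -> Result<(), String> {
    handle(Ok(7)).await
}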

View file

@@ -6,7 +6,7 @@ use std::{
 };
 
 use conduwuit::{
-    Error, Result, debug_error, err, info,
+    Err, Result, debug_error, err, info,
     matrix::pdu::{PduEvent, PduId, RawPduId},
     trace, utils,
     utils::{
@@ -17,10 +17,9 @@ use conduwuit::{
 };
 use futures::{FutureExt, StreamExt, TryStreamExt};
 use ruma::{
-    CanonicalJsonObject, EventId, OwnedEventId, OwnedRoomOrAliasId, RoomId, RoomVersionId,
-    ServerName,
-    api::{client::error::ErrorKind, federation::event::get_room_state},
-    events::room::message::RoomMessageEventContent,
+    CanonicalJsonObject, CanonicalJsonValue, EventId, OwnedEventId, OwnedRoomId,
+    OwnedRoomOrAliasId, OwnedServerName, RoomId, RoomVersionId,
+    api::federation::event::get_room_state,
 };
 use service::rooms::{
     short::{ShortEventId, ShortRoomId},
@@ -31,28 +30,24 @@ use tracing_subscriber::EnvFilter;
 use crate::admin_command;
 
 #[admin_command]
-pub(super) async fn echo(&self, message: Vec<String>) -> Result<RoomMessageEventContent> {
+pub(super) async fn echo(&self, message: Vec<String>) -> Result {
     let message = message.join(" ");
-
-    Ok(RoomMessageEventContent::notice_plain(message))
+    self.write_str(&message).await
 }
 
 #[admin_command]
-pub(super) async fn get_auth_chain(
-    &self,
-    event_id: Box<EventId>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn get_auth_chain(&self, event_id: OwnedEventId) -> Result {
     let Ok(event) = self.services.rooms.timeline.get_pdu_json(&event_id).await else {
-        return Ok(RoomMessageEventContent::notice_plain("Event not found."));
+        return Err!("Event not found.");
     };
 
     let room_id_str = event
         .get("room_id")
-        .and_then(|val| val.as_str())
-        .ok_or_else(|| Error::bad_database("Invalid event in database"))?;
+        .and_then(CanonicalJsonValue::as_str)
+        .ok_or_else(|| err!(Database("Invalid event in database")))?;
 
     let room_id = <&RoomId>::try_from(room_id_str)
-        .map_err(|_| Error::bad_database("Invalid room id field in event in database"))?;
+        .map_err(|_| err!(Database("Invalid room id field in event in database")))?;
 
     let start = Instant::now();
     let count = self
@@ -65,51 +60,39 @@ pub(super) async fn get_auth_chain(
         .await;
     let elapsed = start.elapsed();
 
-    Ok(RoomMessageEventContent::text_plain(format!(
-        "Loaded auth chain with length {count} in {elapsed:?}"
-    )))
+    let out = format!("Loaded auth chain with length {count} in {elapsed:?}");
+    self.write_str(&out).await
 }
 
 #[admin_command]
-pub(super) async fn parse_pdu(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn parse_pdu(&self) -> Result {
     if self.body.len() < 2
         || !self.body[0].trim().starts_with("```")
         || self.body.last().unwrap_or(&EMPTY).trim() != "```"
     {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Expected code block in command body. Add --help for details.",
-        ));
+        return Err!("Expected code block in command body. Add --help for details.");
     }
 
     let string = self.body[1..self.body.len().saturating_sub(1)].join("\n");
     match serde_json::from_str(&string) {
+        | Err(e) => return Err!("Invalid json in command body: {e}"),
         | Ok(value) => match ruma::signatures::reference_hash(&value, &RoomVersionId::V6) {
+            | Err(e) => return Err!("Could not parse PDU JSON: {e:?}"),
            | Ok(hash) => {
                 let event_id = OwnedEventId::parse(format!("${hash}"));
-
-                match serde_json::from_value::<PduEvent>(
-                    serde_json::to_value(value).expect("value is json"),
-                ) {
-                    | Ok(pdu) => Ok(RoomMessageEventContent::text_plain(format!(
-                        "EventId: {event_id:?}\n{pdu:#?}"
-                    ))),
-                    | Err(e) => Ok(RoomMessageEventContent::text_plain(format!(
-                        "EventId: {event_id:?}\nCould not parse event: {e}"
-                    ))),
+                match serde_json::from_value::<PduEvent>(serde_json::to_value(value)?) {
+                    | Err(e) => return Err!("EventId: {event_id:?}\nCould not parse event: {e}"),
+                    | Ok(pdu) => write!(self, "EventId: {event_id:?}\n{pdu:#?}"),
                 }
             },
-            | Err(e) => Ok(RoomMessageEventContent::text_plain(format!(
-                "Could not parse PDU JSON: {e:?}"
-            ))),
         },
-        | Err(e) => Ok(RoomMessageEventContent::text_plain(format!(
-            "Invalid json in command body: {e}"
-        ))),
     }
+    .await
 }
 
 #[admin_command]
-pub(super) async fn get_pdu(&self, event_id: Box<EventId>) -> Result<RoomMessageEventContent> {
+pub(super) async fn get_pdu(&self, event_id: OwnedEventId) -> Result {
     let mut outlier = false;
     let mut pdu_json = self
         .services
@@ -124,21 +107,18 @@ pub(super) async fn get_pdu(&self, event_id: Box<EventId>) -> Result<RoomMessage
     }
 
     match pdu_json {
+        | Err(_) => return Err!("PDU not found locally."),
         | Ok(json) => {
-            let json_text =
-                serde_json::to_string_pretty(&json).expect("canonical json is valid json");
-            Ok(RoomMessageEventContent::notice_markdown(format!(
-                "{}\n```json\n{}\n```",
-                if outlier {
-                    "Outlier (Rejected / Soft Failed) PDU found in our database"
-                } else {
-                    "PDU found in our database"
-                },
-                json_text
-            )))
+            let text = serde_json::to_string_pretty(&json)?;
+            let msg = if outlier {
+                "Outlier (Rejected / Soft Failed) PDU found in our database"
+            } else {
+                "PDU found in our database"
+            };
+            write!(self, "{msg}\n```json\n{text}\n```",)
         },
-        | Err(_) => Ok(RoomMessageEventContent::text_plain("PDU not found locally.")),
     }
+    .await
 }
 
 #[admin_command]
@@ -146,7 +126,7 @@ pub(super) async fn get_short_pdu(
     &self,
     shortroomid: ShortRoomId,
     shorteventid: ShortEventId,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
     let pdu_id: RawPduId = PduId {
         shortroomid,
         shorteventid: shorteventid.into(),
@@ -161,41 +141,33 @@ pub(super) async fn get_short_pdu(
     .await;
 
     match pdu_json {
+        | Err(_) => return Err!("PDU not found locally."),
        | Ok(json) => {
-            let json_text =
-                serde_json::to_string_pretty(&json).expect("canonical json is valid json");
-            Ok(RoomMessageEventContent::notice_markdown(format!("```json\n{json_text}\n```",)))
+            let json_text = serde_json::to_string_pretty(&json)?;
+            write!(self, "```json\n{json_text}\n```")
         },
-        | Err(_) => Ok(RoomMessageEventContent::text_plain("PDU not found locally.")),
     }
+    .await
 }
 
 #[admin_command]
-pub(super) async fn get_remote_pdu_list(
-    &self,
-    server: Box<ServerName>,
-    force: bool,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn get_remote_pdu_list(&self, server: OwnedServerName, force: bool) -> Result {
     if !self.services.server.config.allow_federation {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Federation is disabled on this homeserver.",
-        ));
+        return Err!("Federation is disabled on this homeserver.",);
     }
 
     if server == self.services.globals.server_name() {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Not allowed to send federation requests to ourselves. Please use `get-pdu` for \
-             fetching local PDUs from the database.",
-        ));
+        return Err!(
+            "Not allowed to send federation requests to ourselves. Please use `get-pdu` for \
+             fetching local PDUs from the database.",
+        );
     }
 
     if self.body.len() < 2
         || !self.body[0].trim().starts_with("```")
         || self.body.last().unwrap_or(&EMPTY).trim() != "```"
     {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Expected code block in command body. Add --help for details.",
-        ));
+        return Err!("Expected code block in command body. Add --help for details.",);
     }
 
     let list = self
@@ -209,18 +181,19 @@ pub(super) async fn get_remote_pdu_list(
     let mut failed_count: usize = 0;
     let mut success_count: usize = 0;
 
-    for pdu in list {
+    for event_id in list {
         if force {
-            match self.get_remote_pdu(Box::from(pdu), server.clone()).await {
+            match self
+                .get_remote_pdu(event_id.to_owned(), server.clone())
+                .await
+            {
                 | Err(e) => {
                     failed_count = failed_count.saturating_add(1);
                     self.services
                         .admin
-                        .send_message(RoomMessageEventContent::text_plain(format!(
-                            "Failed to get remote PDU, ignoring error: {e}"
-                        )))
-                        .await
-                        .ok();
+                        .send_text(&format!("Failed to get remote PDU, ignoring error: {e}"))
+                        .await;
                     warn!("Failed to get remote PDU, ignoring error: {e}");
                 },
                 | _ => {
@@ -228,44 +201,48 @@ pub(super) async fn get_remote_pdu_list(
                 },
             }
         } else {
-            self.get_remote_pdu(Box::from(pdu), server.clone()).await?;
+            self.get_remote_pdu(event_id.to_owned(), server.clone())
+                .await?;
             success_count = success_count.saturating_add(1);
         }
     }
 
-    Ok(RoomMessageEventContent::text_plain(format!(
-        "Fetched {success_count} remote PDUs successfully with {failed_count} failures"
-    )))
+    let out =
+        format!("Fetched {success_count} remote PDUs successfully with {failed_count} failures");
+
+    self.write_str(&out).await
 }
 
 #[admin_command]
 pub(super) async fn get_remote_pdu(
     &self,
-    event_id: Box<EventId>,
-    server: Box<ServerName>,
-) -> Result<RoomMessageEventContent> {
+    event_id: OwnedEventId,
+    server: OwnedServerName,
+) -> Result {
     if !self.services.server.config.allow_federation {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Federation is disabled on this homeserver.",
-        ));
+        return Err!("Federation is disabled on this homeserver.");
     }
 
     if server == self.services.globals.server_name() {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Not allowed to send federation requests to ourselves. Please use `get-pdu` for \
-             fetching local PDUs.",
-        ));
+        return Err!(
+            "Not allowed to send federation requests to ourselves. Please use `get-pdu` for \
+             fetching local PDUs.",
+        );
    }
 
     match self
         .services
         .sending
         .send_federation_request(&server, ruma::api::federation::event::get_event::v1::Request {
-            event_id: event_id.clone().into(),
+            event_id: event_id.clone(),
             include_unredacted_content: None,
         })
         .await
     {
+        | Err(e) =>
+            return Err!(
+                "Remote server did not have PDU or failed sending request to remote server: {e}"
+            ),
         | Ok(response) => {
             let json: CanonicalJsonObject =
                 serde_json::from_str(response.pdu.get()).map_err(|e| {
@@ -273,10 +250,9 @@ pub(super) async fn get_remote_pdu(
                         "Requested event ID {event_id} from server but failed to convert from \
                          RawValue to CanonicalJsonObject (malformed event/response?): {e}"
                     );
-                    Error::BadRequest(
-                        ErrorKind::Unknown,
-                        "Received response from server but failed to parse PDU",
-                    )
+                    err!(Request(Unknown(
+                        "Received response from server but failed to parse PDU"
+                    )))
                 })?;
 
             trace!("Attempting to parse PDU: {:?}", &response.pdu);
@@ -286,6 +262,7 @@ pub(super) async fn get_remote_pdu(
                 .rooms
                 .event_handler
                 .parse_incoming_pdu(&response.pdu)
+                .boxed()
                 .await;
 
             let (event_id, value, room_id) = match parsed_result {
@@ -293,9 +270,7 @@ pub(super) async fn get_remote_pdu(
                | Err(e) => {
                     warn!("Failed to parse PDU: {e}");
                     info!("Full PDU: {:?}", &response.pdu);
-                    return Ok(RoomMessageEventContent::text_plain(format!(
-                        "Failed to parse PDU remote server {server} sent us: {e}"
-                    )));
+                    return Err!("Failed to parse PDU remote server {server} sent us: {e}");
                 },
             };
@@ -307,30 +282,18 @@ pub(super) async fn get_remote_pdu(
                 .rooms
                 .timeline
                 .backfill_pdu(&server, response.pdu)
+                .boxed()
                 .await?;
 
-            let json_text =
-                serde_json::to_string_pretty(&json).expect("canonical json is valid json");
-
-            Ok(RoomMessageEventContent::notice_markdown(format!(
-                "{}\n```json\n{}\n```",
-                "Got PDU from specified server and handled as backfilled PDU successfully. \
-                 Event body:",
-                json_text
-            )))
+            let text = serde_json::to_string_pretty(&json)?;
+            let msg = "Got PDU from specified server and handled as backfilled";
+            write!(self, "{msg}. Event body:\n```json\n{text}\n```")
         },
-        | Err(e) => Ok(RoomMessageEventContent::text_plain(format!(
-            "Remote server did not have PDU or failed sending request to remote server: {e}"
-        ))),
     }
+    .await
 }
 
 #[admin_command]
-pub(super) async fn get_room_state(
-    &self,
-    room: OwnedRoomOrAliasId,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn get_room_state(&self, room: OwnedRoomOrAliasId) -> Result {
     let room_id = self.services.rooms.alias.resolve(&room).await?;
     let room_state: Vec<_> = self
         .services
@@ -342,28 +305,24 @@ pub(super) async fn get_room_state(
         .await?;
 
     if room_state.is_empty() {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Unable to find room state in our database (vector is empty)",
-        ));
+        return Err!("Unable to find room state in our database (vector is empty)",);
     }
 
     let json = serde_json::to_string_pretty(&room_state).map_err(|e| {
-        warn!("Failed converting room state vector in our database to pretty JSON: {e}");
-        Error::bad_database(
-            "Failed to convert room state events to pretty JSON, possible invalid room state \
-             events in our database",
-        )
+        err!(Database(
+            "Failed to convert room state events to pretty JSON, possible invalid room state \
+             events in our database {e}",
+        ))
     })?;
 
-    Ok(RoomMessageEventContent::notice_markdown(format!("```json\n{json}\n```")))
+    let out = format!("```json\n{json}\n```");
+    self.write_str(&out).await
 }
 
 #[admin_command]
-pub(super) async fn ping(&self, server: Box<ServerName>) -> Result<RoomMessageEventContent> {
+pub(super) async fn ping(&self, server: OwnedServerName) -> Result {
     if server == self.services.globals.server_name() {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Not allowed to send federation requests to ourselves.",
-        ));
+        return Err!("Not allowed to send federation requests to ourselves.");
     }
 
     let timer = tokio::time::Instant::now();
@@ -377,35 +336,27 @@ pub(super) async fn ping(&self, server: Box<ServerName>) -> Result<RoomMessageEv
     )
     .await
     {
+        | Err(e) => {
+            return Err!("Failed sending federation request to specified server:\n\n{e}");
+        },
        | Ok(response) => {
             let ping_time = timer.elapsed();
             let json_text_res = serde_json::to_string_pretty(&response.server);
 
-            if let Ok(json) = json_text_res {
-                return Ok(RoomMessageEventContent::notice_markdown(format!(
-                    "Got response which took {ping_time:?} time:\n```json\n{json}\n```"
-                )));
-            }
+            let out = if let Ok(json) = json_text_res {
+                format!("Got response which took {ping_time:?} time:\n```json\n{json}\n```")
+            } else {
+                format!("Got non-JSON response which took {ping_time:?} time:\n{response:?}")
+            };
 
-            Ok(RoomMessageEventContent::text_plain(format!(
-                "Got non-JSON response which took {ping_time:?} time:\n{response:?}"
-            )))
-        },
-        | Err(e) => {
-            warn!(
-                "Failed sending federation request to specified server from ping debug command: \
-                 {e}"
-            );
-            Ok(RoomMessageEventContent::text_plain(format!(
-                "Failed sending federation request to specified server:\n\n{e}",
-            )))
+            write!(self, "{out}")
         },
     }
+    .await
 }
 
 #[admin_command]
-pub(super) async fn force_device_list_updates(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn force_device_list_updates(&self) -> Result {
     // Force E2EE device list updates for all users
     self.services
         .users
@@ -413,27 +364,17 @@ pub(super) async fn force_device_list_updates(&self) -> Result<RoomMessageEventC
         .for_each(|user_id| self.services.users.mark_device_key_update(user_id))
         .await;
 
-    Ok(RoomMessageEventContent::text_plain(
-        "Marked all devices for all users as having new keys to update",
-    ))
+    write!(self, "Marked all devices for all users as having new keys to update").await
 }
 
 #[admin_command]
-pub(super) async fn change_log_level(
-    &self,
-    filter: Option<String>,
-    reset: bool,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn change_log_level(&self, filter: Option<String>, reset: bool) -> Result {
     let handles = &["console"];
 
     if reset {
         let old_filter_layer = match EnvFilter::try_new(&self.services.server.config.log) {
             | Ok(s) => s,
-            | Err(e) => {
-                return Ok(RoomMessageEventContent::text_plain(format!(
-                    "Log level from config appears to be invalid now: {e}"
-                )));
-            },
+            | Err(e) => return Err!("Log level from config appears to be invalid now: {e}"),
         };
 
         match self
@@ -443,16 +384,12 @@ pub(super) async fn change_log_level(
             .reload
             .reload(&old_filter_layer, Some(handles))
         {
+            | Err(e) =>
+                return Err!("Failed to modify and reload the global tracing log level: {e}"),
            | Ok(()) => {
-                return Ok(RoomMessageEventContent::text_plain(format!(
-                    "Successfully changed log level back to config value {}",
-                    self.services.server.config.log
-                )));
-            },
-            | Err(e) => {
-                return Ok(RoomMessageEventContent::text_plain(format!(
-                    "Failed to modify and reload the global tracing log level: {e}"
-                )));
+                let value = &self.services.server.config.log;
+                let out = format!("Successfully changed log level back to config value {value}");
+                return self.write_str(&out).await;
             },
         }
     }
@@ -460,11 +397,7 @@ pub(super) async fn change_log_level(
     if let Some(filter) = filter {
         let new_filter_layer = match EnvFilter::try_new(filter) {
             | Ok(s) => s,
-            | Err(e) => {
-                return Ok(RoomMessageEventContent::text_plain(format!(
-                    "Invalid log level filter specified: {e}"
-                )));
-            },
+            | Err(e) => return Err!("Invalid log level filter specified: {e}"),
         };
 
         match self
@@ -474,90 +407,75 @@ pub(super) async fn change_log_level(
             .reload
             .reload(&new_filter_layer, Some(handles))
         {
-            | Ok(()) => {
-                return Ok(RoomMessageEventContent::text_plain("Successfully changed log level"));
-            },
-            | Err(e) => {
-                return Ok(RoomMessageEventContent::text_plain(format!(
-                    "Failed to modify and reload the global tracing log level: {e}"
-                )));
-            },
+            | Ok(()) => return self.write_str("Successfully changed log level").await,
+            | Err(e) =>
+                return Err!("Failed to modify and reload the global tracing log level: {e}"),
         }
     }
 
-    Ok(RoomMessageEventContent::text_plain("No log level was specified."))
+    Err!("No log level was specified.")
 }
 
 #[admin_command]
-pub(super) async fn sign_json(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn sign_json(&self) -> Result {
     if self.body.len() < 2
         || !self.body[0].trim().starts_with("```")
         || self.body.last().unwrap_or(&"").trim() != "```"
     {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Expected code block in command body. Add --help for details.",
-        ));
+        return Err!("Expected code block in command body. Add --help for details.");
     }
 
     let string = self.body[1..self.body.len().checked_sub(1).unwrap()].join("\n");
     match serde_json::from_str(&string) {
+        | Err(e) => return Err!("Invalid json: {e}"),
        | Ok(mut value) => {
-            self.services
-                .server_keys
-                .sign_json(&mut value)
-                .expect("our request json is what ruma expects");
-            let json_text =
-                serde_json::to_string_pretty(&value).expect("canonical json is valid json");
-            Ok(RoomMessageEventContent::text_plain(json_text))
+            self.services.server_keys.sign_json(&mut value)?;
+            let json_text = serde_json::to_string_pretty(&value)?;
+            write!(self, "{json_text}")
         },
-        | Err(e) => Ok(RoomMessageEventContent::text_plain(format!("Invalid json: {e}"))),
     }
+    .await
 }
 
 #[admin_command]
-pub(super) async fn verify_json(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn verify_json(&self) -> Result {
     if self.body.len() < 2
         || !self.body[0].trim().starts_with("```")
        || self.body.last().unwrap_or(&"").trim() != "```"
     {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Expected code block in command body. Add --help for details.",
-        ));
+        return Err!("Expected code block in command body. Add --help for details.");
    }
 
     let string = self.body[1..self.body.len().checked_sub(1).unwrap()].join("\n");
     match serde_json::from_str::<CanonicalJsonObject>(&string) {
+        | Err(e) => return Err!("Invalid json: {e}"),
         | Ok(value) => match self.services.server_keys.verify_json(&value, None).await {
-            | Ok(()) => Ok(RoomMessageEventContent::text_plain("Signature correct")),
-            | Err(e) => Ok(RoomMessageEventContent::text_plain(format!(
-                "Signature verification failed: {e}"
-            ))),
+            | Err(e) => return Err!("Signature verification failed: {e}"),
+            | Ok(()) => write!(self, "Signature correct"),
         },
-        | Err(e) => Ok(RoomMessageEventContent::text_plain(format!("Invalid json: {e}"))),
     }
+    .await
 }
 
 #[admin_command]
-pub(super) async fn verify_pdu(&self, event_id: Box<EventId>) -> Result<RoomMessageEventContent> {
+pub(super) async fn verify_pdu(&self, event_id: OwnedEventId) -> Result {
+    use ruma::signatures::Verified;
+
     let mut event = self.services.rooms.timeline.get_pdu_json(&event_id).await?;
 
     event.remove("event_id");
     let msg = match self.services.server_keys.verify_event(&event, None).await {
-        | Ok(ruma::signatures::Verified::Signatures) =>
-            "signatures OK, but content hash failed (redaction).",
-        | Ok(ruma::signatures::Verified::All) => "signatures and hashes OK.",
         | Err(e) => return Err(e),
+        | Ok(Verified::Signatures) => "signatures OK, but content hash failed (redaction).",
+        | Ok(Verified::All) => "signatures and hashes OK.",
     };
-
-    Ok(RoomMessageEventContent::notice_plain(msg))
+    self.write_str(msg).await
 }
 
 #[admin_command]
 #[tracing::instrument(skip(self))]
-pub(super) async fn first_pdu_in_room(
-    &self,
-    room_id: Box<RoomId>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn first_pdu_in_room(&self, room_id: OwnedRoomId) -> Result {
     if !self
         .services
         .rooms
@@ -565,9 +483,7 @@ pub(super) async fn first_pdu_in_room(
         .server_in_room(&self.services.server.name, &room_id)
         .await
     {
-        return Ok(RoomMessageEventContent::text_plain(
-            "We are not participating in the room / we don't know about the room ID.",
-        ));
+        return Err!("We are not participating in the room / we don't know about the room ID.",);
     }
 
     let first_pdu = self
@@ -576,17 +492,15 @@ pub(super) async fn first_pdu_in_room(
         .timeline
         .first_pdu_in_room(&room_id)
         .await
-        .map_err(|_| Error::bad_database("Failed to find the first PDU in database"))?;
+        .map_err(|_| err!(Database("Failed to find the first PDU in database")))?;
 
-    Ok(RoomMessageEventContent::text_plain(format!("{first_pdu:?}")))
+    let out = format!("{first_pdu:?}");
+    self.write_str(&out).await
 }
 
 #[admin_command]
 #[tracing::instrument(skip(self))]
-pub(super) async fn latest_pdu_in_room(
-    &self,
-    room_id: Box<RoomId>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn latest_pdu_in_room(&self, room_id: OwnedRoomId) -> Result {
     if !self
         .services
         .rooms
@@ -594,9 +508,7 @@ pub(super) async fn latest_pdu_in_room(
         .server_in_room(&self.services.server.name, &room_id)
         .await
     {
-        return Ok(RoomMessageEventContent::text_plain(
-            "We are not participating in the room / we don't know about the room ID.",
-        ));
+        return Err!("We are not participating in the room / we don't know about the room ID.");
    }
 
     let latest_pdu = self
@@ -605,18 +517,19 @@ pub(super) async fn latest_pdu_in_room(
         .timeline
         .latest_pdu_in_room(&room_id)
         .await
-        .map_err(|_| Error::bad_database("Failed to find the latest PDU in database"))?;
+        .map_err(|_| err!(Database("Failed to find the latest PDU in database")))?;
 
-    Ok(RoomMessageEventContent::text_plain(format!("{latest_pdu:?}")))
+    let out = format!("{latest_pdu:?}");
+    self.write_str(&out).await
 }
 
 #[admin_command]
 #[tracing::instrument(skip(self))]
 pub(super) async fn force_set_room_state_from_server(
     &self,
-    room_id: Box<RoomId>,
-    server_name: Box<ServerName>,
-) -> Result<RoomMessageEventContent> {
+    room_id: OwnedRoomId,
+    server_name: OwnedServerName,
+) -> Result {
     if !self
         .services
         .rooms
@@ -624,9 +537,7 @@ pub(super) async fn force_set_room_state_from_server(
         .server_in_room(&self.services.server.name, &room_id)
         .await
     {
-        return Ok(RoomMessageEventContent::text_plain(
-            "We are not participating in the room / we don't know about the room ID.",
-        ));
+        return Err!("We are not participating in the room / we don't know about the room ID.");
    }
 
     let first_pdu = self
@@ -635,7 +546,7 @@ pub(super) async fn force_set_room_state_from_server(
         .timeline
         .latest_pdu_in_room(&room_id)
         .await
-        .map_err(|_| Error::bad_database("Failed to find the latest PDU in database"))?;
+        .map_err(|_| err!(Database("Failed to find the latest PDU in database")))?;
 
     let room_version = self.services.rooms.state.get_room_version(&room_id).await?;
@@ -645,10 +556,9 @@ pub(super) async fn force_set_room_state_from_server(
         .services
         .sending
         .send_federation_request(&server_name, get_room_state::v1::Request {
-            room_id: room_id.clone().into(),
+            room_id: room_id.clone(),
             event_id: first_pdu.event_id.clone(),
         })
-        .boxed()
         .await?;
 
     for pdu in remote_state_response.pdus.clone() {
@@ -657,7 +567,6 @@ pub(super) async fn force_set_room_state_from_server(
             .rooms
             .event_handler
             .parse_incoming_pdu(&pdu)
-            .boxed()
             .await
         {
             | Ok(t) => t,
@@ -721,7 +630,6 @@ pub(super) async fn force_set_room_state_from_server(
         .rooms
         .event_handler
         .resolve_state(&room_id, &room_version, state)
-        .boxed()
         .await?;
 
     info!("Forcing new room state");
@@ -737,6 +645,7 @@ pub(super) async fn force_set_room_state_from_server(
         .await?;
 
     let state_lock = self.services.rooms.state.mutex.lock(&*room_id).await;
+
     self.services
         .rooms
         .state
@@ -753,21 +662,18 @@ pub(super) async fn force_set_room_state_from_server(
         .update_joined_count(&room_id)
         .await;
 
-    drop(state_lock);
-
-    Ok(RoomMessageEventContent::text_plain(
-        "Successfully forced the room state from the requested remote server.",
-    ))
+    self.write_str("Successfully forced the room state from the requested remote server.")
+        .await
 }
 
 #[admin_command]
 pub(super) async fn get_signing_keys(
     &self,
-    server_name: Option<Box<ServerName>>,
-    notary: Option<Box<ServerName>>,
+    server_name: Option<OwnedServerName>,
+    notary: Option<OwnedServerName>,
     query: bool,
-) -> Result<RoomMessageEventContent> {
-    let server_name = server_name.unwrap_or_else(|| self.services.server.name.clone().into());
+) -> Result {
+    let server_name = server_name.unwrap_or_else(|| self.services.server.name.clone());
 
     if let Some(notary) = notary {
         let signing_keys = self
@@ -776,9 +682,8 @@ pub(super) async fn get_signing_keys(
             .notary_request(&notary, &server_name)
             .await?;
 
-        return Ok(RoomMessageEventContent::notice_markdown(format!(
-            "```rs\n{signing_keys:#?}\n```"
-        )));
+        let out = format!("```rs\n{signing_keys:#?}\n```");
+        return self.write_str(&out).await;
     }
 
     let signing_keys = if query {
@@ -793,17 +698,13 @@ pub(super) async fn get_signing_keys(
             .await?
     };
 
-    Ok(RoomMessageEventContent::notice_markdown(format!(
-        "```rs\n{signing_keys:#?}\n```"
-    )))
+    let out = format!("```rs\n{signing_keys:#?}\n```");
+    self.write_str(&out).await
 }
 
 #[admin_command]
-pub(super) async fn get_verify_keys(
-    &self,
-    server_name: Option<Box<ServerName>>,
-) -> Result<RoomMessageEventContent> {
-    let server_name = server_name.unwrap_or_else(|| self.services.server.name.clone().into());
+pub(super) async fn get_verify_keys(&self, server_name: Option<OwnedServerName>) -> Result {
+    let server_name = server_name.unwrap_or_else(|| self.services.server.name.clone());
 
     let keys = self
         .services
@@ -818,26 +719,24 @@ pub(super) async fn get_verify_keys(
         writeln!(out, "| {key_id} | {key:?} |")?;
     }
 
-    Ok(RoomMessageEventContent::notice_markdown(out))
+    self.write_str(&out).await
 }
 
 #[admin_command]
 pub(super) async fn resolve_true_destination(
     &self,
-    server_name: Box<ServerName>,
+    server_name: OwnedServerName,
     no_cache: bool,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
     if !self.services.server.config.allow_federation {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Federation is disabled on this homeserver.",
-        ));
+        return Err!("Federation is disabled on this homeserver.",);
    }
 
     if server_name == self.services.server.name {
-        return Ok(RoomMessageEventContent::text_plain(
-            "Not allowed to send federation requests to ourselves. Please use `get-pdu` for \
-             fetching local PDUs.",
-        ));
+        return Err!(
+            "Not allowed to send federation requests to ourselves. Please use `get-pdu` for \
+             fetching local PDUs.",
+        );
    }
 
     let actual = self
@@ -846,13 +745,12 @@ pub(super) async fn resolve_true_destination(
         .resolve_actual_dest(&server_name, !no_cache)
         .await?;
 
-    let msg = format!("Destination: {}\nHostname URI: {}", actual.dest, actual.host,);
-
-    Ok(RoomMessageEventContent::text_markdown(msg))
+    let msg = format!("Destination: {}\nHostname URI: {}", actual.dest, actual.host);
+    self.write_str(&msg).await
 }
 
 #[admin_command]
-pub(super) async fn memory_stats(&self, opts: Option<String>) -> Result<RoomMessageEventContent> {
+pub(super) async fn memory_stats(&self, opts: Option<String>) -> Result {
     const OPTS: &str = "abcdefghijklmnopqrstuvwxyz";
 
     let opts: String = OPTS
@@ -871,13 +769,12 @@ pub(super) async fn memory_stats(&self, opts: Option<String>) -> Result<RoomMess
     self.write_str("```\n").await?;
     self.write_str(&stats).await?;
     self.write_str("\n```").await?;
-
-    Ok(RoomMessageEventContent::text_plain(""))
+    Ok(())
 }
 
 #[cfg(tokio_unstable)]
 #[admin_command]
-pub(super) async fn runtime_metrics(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn runtime_metrics(&self) -> Result {
     let out = self.services.server.metrics.runtime_metrics().map_or_else(
         || "Runtime metrics are not available.".to_owned(),
         |metrics| {
@@ -890,51 +787,51 @@ pub(super) async fn runtime_metrics(&self) -> Result<RoomMessageEventContent> {
         },
     );
 
-    Ok(RoomMessageEventContent::text_markdown(out))
+    self.write_str(&out).await
 }
 
 #[cfg(not(tokio_unstable))]
 #[admin_command]
-pub(super) async fn runtime_metrics(&self) -> Result<RoomMessageEventContent> {
-    Ok(RoomMessageEventContent::text_markdown(
-        "Runtime metrics require building with `tokio_unstable`.",
-    ))
+pub(super) async fn runtime_metrics(&self) -> Result {
+    self.write_str("Runtime metrics require building with `tokio_unstable`.")
+        .await
 }
 
 #[cfg(tokio_unstable)]
 #[admin_command]
-pub(super) async fn runtime_interval(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn runtime_interval(&self) -> Result {
     let out = self.services.server.metrics.runtime_interval().map_or_else(
         || "Runtime metrics are not available.".to_owned(),
         |metrics| format!("```rs\n{metrics:#?}\n```"),
     );
 
-    Ok(RoomMessageEventContent::text_markdown(out))
+    self.write_str(&out).await
 }
 
 #[cfg(not(tokio_unstable))]
 #[admin_command]
-pub(super) async fn runtime_interval(&self) -> Result<RoomMessageEventContent> {
-    Ok(RoomMessageEventContent::text_markdown(
-        "Runtime metrics require building with `tokio_unstable`.",
-    ))
+pub(super) async fn runtime_interval(&self) -> Result {
    self.write_str("Runtime metrics require building with `tokio_unstable`.")
        .await
 }
 
 #[admin_command]
-pub(super) async fn time(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn time(&self) -> Result {
     let now = SystemTime::now();
-    Ok(RoomMessageEventContent::text_markdown(utils::time::format(now, "%+")))
+    let now = utils::time::format(now, "%+");
+    self.write_str(&now).await
 }
 
 #[admin_command]
-pub(super) async fn list_dependencies(&self, names: bool) -> Result<RoomMessageEventContent> {
+pub(super) async fn list_dependencies(&self, names: bool) -> Result {
     if names {
         let out = info::cargo::dependencies_names().join(" ");
-        return Ok(RoomMessageEventContent::notice_markdown(out));
+        return self.write_str(&out).await;
     }
 
-    let deps = info::cargo::dependencies();
     let mut out = String::new();
+    let deps = info::cargo::dependencies();
     writeln!(out, "| name | version | features |")?;
     writeln!(out, "| ---- | ------- | -------- |")?;
     for (name, dep) in deps {
@@ -945,10 +842,11 @@ pub(super) async fn list_dependencies(&self, names: bool) -> Result<RoomMessageE
         } else {
             String::new()
         };
+
         writeln!(out, "| {name} | {version} | {feats} |")?;
     }
 
-    Ok(RoomMessageEventContent::notice_markdown(out))
+    self.write_str(&out).await
 }
 
 #[admin_command]
@@ -956,7 +854,7 @@ pub(super) async fn database_stats(
     &self,
     property: Option<String>,
     map: Option<String>,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
     let map_name = map.as_ref().map_or(EMPTY, String::as_str);
     let property = property.unwrap_or_else(|| "rocksdb.stats".to_owned());
     self.services
@@ -968,17 +866,11 @@ pub(super) async fn database_stats(
             let res = map.property(&property).expect("invalid property");
             writeln!(self, "##### {name}:\n```\n{}\n```", res.trim())
         })
-        .await?;
-
-    Ok(RoomMessageEventContent::notice_plain(""))
+        .await
 }
 
 #[admin_command]
-pub(super) async fn database_files(
-    &self,
-    map: Option<String>,
-    level: Option<i32>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn database_files(&self, map: Option<String>, level: Option<i32>) -> Result {
     let mut files: Vec<_> = self.services.db.db.file_list().collect::<Result<_>>()?;
     files.sort_by_key(|f| f.name.clone());
@@ -1005,16 +897,12 @@ pub(super) async fn database_files(
                 file.column_family_name,
             )
         })
-        .await?;
-
-    Ok(RoomMessageEventContent::notice_plain(""))
+        .await
 }
 
 #[admin_command]
-pub(super) async fn trim_memory(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn trim_memory(&self) -> Result {
     conduwuit::alloc::trim(None)?;
 
-    writeln!(self, "done").await?;
-
-    Ok(RoomMessageEventContent::notice_plain(""))
+    writeln!(self, "done").await
 }
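Most of the message-by-message changes above swap `Ok(RoomMessageEventContent::text_plain(...))` for `return Err!(...)` on failure paths, and `Error::bad_database(...)` for `err!(Database(...))`. A toy analogue of the macro pair, mirroring only the shape (conduwuit's real `err!`/`Err!` also carry structured error kinds such as `Database` and `Request`, which this sketch omits):

// err! builds an error value with format-style arguments; Err! wraps it
// in Err(...) so a handler can `return Err!("...: {e}")` in one step.
macro_rules! err {
    ($($arg:tt)*) => { format!($($arg)*) };
}
macro_rules! Err {
    ($($arg:tt)*) => { Err(err!($($arg)*)) };
}

fn parse(input: &str) -> Result<i32, String> {
    match input.parse::<i32>() {
        Err(e) => return Err!("Invalid number in command body: {e}"),
        Ok(n) => Ok(n),
    }
}

fn main() {
    assert_eq!(parse("42"), Ok(42));
    assert!(parse("nope").is_err());
}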

View file

@@ -3,7 +3,7 @@ pub(crate) mod tester;
 use clap::Subcommand;
 use conduwuit::Result;
-use ruma::{EventId, OwnedRoomOrAliasId, RoomId, ServerName};
+use ruma::{OwnedEventId, OwnedRoomId, OwnedRoomOrAliasId, OwnedServerName};
 use service::rooms::short::{ShortEventId, ShortRoomId};
 
 use self::tester::TesterCommand;
@@ -20,7 +20,7 @@ pub(super) enum DebugCommand {
     /// - Get the auth_chain of a PDU
     GetAuthChain {
         /// An event ID (the $ character followed by the base64 reference hash)
-        event_id: Box<EventId>,
+        event_id: OwnedEventId,
     },
 
     /// - Parse and print a PDU from a JSON
@@ -35,7 +35,7 @@ pub(super) enum DebugCommand {
     /// - Retrieve and print a PDU by EventID from the conduwuit database
     GetPdu {
         /// An event ID (a $ followed by the base64 reference hash)
-        event_id: Box<EventId>,
+        event_id: OwnedEventId,
     },
 
     /// - Retrieve and print a PDU by PduId from the conduwuit database
@@ -52,11 +52,11 @@ pub(super) enum DebugCommand {
     /// (following normal event auth rules, handles it as an incoming PDU).
     GetRemotePdu {
         /// An event ID (a $ followed by the base64 reference hash)
-        event_id: Box<EventId>,
+        event_id: OwnedEventId,
 
         /// Argument for us to attempt to fetch the event from the
         /// specified remote server.
-        server: Box<ServerName>,
+        server: OwnedServerName,
     },
 
     /// - Same as `get-remote-pdu` but accepts a codeblock newline delimited
@@ -64,7 +64,7 @@ pub(super) enum DebugCommand {
     GetRemotePduList {
         /// Argument for us to attempt to fetch all the events from the
         /// specified remote server.
-        server: Box<ServerName>,
+        server: OwnedServerName,
 
         /// If set, ignores errors, else stops at the first error/failure.
         #[arg(short, long)]
@@ -88,10 +88,10 @@ pub(super) enum DebugCommand {
     /// - Get and display signing keys from local cache or remote server.
     GetSigningKeys {
-        server_name: Option<Box<ServerName>>,
+        server_name: Option<OwnedServerName>,
 
         #[arg(long)]
-        notary: Option<Box<ServerName>>,
+        notary: Option<OwnedServerName>,
 
         #[arg(short, long)]
         query: bool,
@@ -99,14 +99,14 @@ pub(super) enum DebugCommand {
     /// - Get and display signing keys from local cache or remote server.
     GetVerifyKeys {
-        server_name: Option<Box<ServerName>>,
+        server_name: Option<OwnedServerName>,
     },
 
     /// - Sends a federation request to the remote server's
     ///   `/_matrix/federation/v1/version` endpoint and measures the latency it
     ///   took for the server to respond
     Ping {
-        server: Box<ServerName>,
+        server: OwnedServerName,
     },
 
     /// - Forces device lists for all local and remote users to be updated (as
@@ -141,21 +141,21 @@ pub(super) enum DebugCommand {
     ///
     /// This re-verifies a PDU existing in the database found by ID.
     VerifyPdu {
-        event_id: Box<EventId>,
+        event_id: OwnedEventId,
     },
 
     /// - Prints the very first PDU in the specified room (typically
     ///   m.room.create)
     FirstPduInRoom {
         /// The room ID
-        room_id: Box<RoomId>,
+        room_id: OwnedRoomId,
     },
 
     /// - Prints the latest ("last") PDU in the specified room (typically a
     ///   message)
     LatestPduInRoom {
         /// The room ID
-        room_id: Box<RoomId>,
+        room_id: OwnedRoomId,
     },
 
     /// - Forcefully replaces the room state of our local copy of the specified
@@ -174,9 +174,9 @@ pub(super) enum DebugCommand {
     ///   `/_matrix/federation/v1/state/{roomId}`.
     ForceSetRoomStateFromServer {
         /// The impacted room ID
-        room_id: Box<RoomId>,
+        room_id: OwnedRoomId,
 
         /// The server we will use to query the room state for
-        server_name: Box<ServerName>,
+        server_name: OwnedServerName,
     },
 
     /// - Runs a server name through conduwuit's true destination resolution
@@ -184,7 +184,7 @@ pub(super) enum DebugCommand {
     ///
     /// Useful for debugging well-known issues
     ResolveTrueDestination {
-        server_name: Box<ServerName>,
+        server_name: OwnedServerName,
 
         #[arg(short, long)]
         no_cache: bool,
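The other mechanical change in this file is clap argument types moving from `Box<EventId>`/`Box<ServerName>` to ruma's `Owned*` forms, which drops the boxing layer and the `.into()` conversions at call sites (for example, `event_id: event_id.clone().into()` becomes `event_id: event_id.clone()` in `get-remote-pdu`). A toy illustration, assuming only the `ruma` crate; the identifiers are hypothetical:

use ruma::{OwnedEventId, OwnedServerName};

fn send(event_id: OwnedEventId, server: OwnedServerName) {
    // With Owned* types the clone is handed over directly; the old
    // Box<EventId> signatures needed event_id.clone().into() here.
    println!("fetching {event_id} from {server}");
}

fn main() {
    // Both identifiers are made up, for illustration only.
    let event_id: OwnedEventId = "$abcdef".try_into().expect("valid event ID");
    let server: OwnedServerName = "example.org".try_into().expect("valid server name");
    send(event_id.clone(), server.clone());
}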

View file

@@ -1,7 +1,6 @@
-use conduwuit::Err;
-use ruma::events::room::message::RoomMessageEventContent;
+use conduwuit::{Err, Result};
 
-use crate::{Result, admin_command, admin_command_dispatch};
+use crate::{admin_command, admin_command_dispatch};
 
 #[admin_command_dispatch]
 #[derive(Debug, clap::Subcommand)]
@@ -14,14 +13,14 @@ pub(crate) enum TesterCommand {
 #[rustfmt::skip]
 #[admin_command]
-async fn panic(&self) -> Result<RoomMessageEventContent> {
+async fn panic(&self) -> Result {
     panic!("panicked")
 }
 
 #[rustfmt::skip]
 #[admin_command]
-async fn failure(&self) -> Result<RoomMessageEventContent> {
+async fn failure(&self) -> Result {
     Err!("failed")
 }
@@ -29,20 +28,20 @@ async fn failure(&self) -> Result<RoomMessageEventContent> {
 #[inline(never)]
 #[rustfmt::skip]
 #[admin_command]
-async fn tester(&self) -> Result<RoomMessageEventContent> {
-    Ok(RoomMessageEventContent::notice_plain("legacy"))
+async fn tester(&self) -> Result {
+    self.write_str("Ok").await
 }
 
 #[inline(never)]
 #[rustfmt::skip]
 #[admin_command]
-async fn timer(&self) -> Result<RoomMessageEventContent> {
+async fn timer(&self) -> Result {
     let started = std::time::Instant::now();
     timed(self.body);
 
     let elapsed = started.elapsed();
-    Ok(RoomMessageEventContent::notice_plain(format!("completed in {elapsed:#?}")))
+    self.write_str(&format!("completed in {elapsed:#?}")).await
 }
 
 #[inline(never)]

View file

@ -1,49 +1,48 @@
use std::fmt::Write; use std::fmt::Write;
use conduwuit::Result; use conduwuit::{Err, Result};
use futures::StreamExt; use futures::StreamExt;
use ruma::{ use ruma::{OwnedRoomId, OwnedServerName, OwnedUserId};
OwnedRoomId, RoomId, ServerName, UserId, events::room::message::RoomMessageEventContent,
};
use crate::{admin_command, get_room_info}; use crate::{admin_command, get_room_info};
#[admin_command] #[admin_command]
pub(super) async fn disable_room(&self, room_id: Box<RoomId>) -> Result<RoomMessageEventContent> { pub(super) async fn disable_room(&self, room_id: OwnedRoomId) -> Result {
self.services.rooms.metadata.disable_room(&room_id, true); self.services.rooms.metadata.disable_room(&room_id, true);
Ok(RoomMessageEventContent::text_plain("Room disabled.")) self.write_str("Room disabled.").await
} }
#[admin_command] #[admin_command]
pub(super) async fn enable_room(&self, room_id: Box<RoomId>) -> Result<RoomMessageEventContent> { pub(super) async fn enable_room(&self, room_id: OwnedRoomId) -> Result {
self.services.rooms.metadata.disable_room(&room_id, false); self.services.rooms.metadata.disable_room(&room_id, false);
Ok(RoomMessageEventContent::text_plain("Room enabled.")) self.write_str("Room enabled.").await
} }
#[admin_command] #[admin_command]
pub(super) async fn incoming_federation(&self) -> Result<RoomMessageEventContent> { pub(super) async fn incoming_federation(&self) -> Result {
let map = self let msg = {
.services let map = self
.rooms .services
.event_handler .rooms
.federation_handletime .event_handler
.read() .federation_handletime
.expect("locked"); .read()
let mut msg = format!("Handling {} incoming pdus:\n", map.len()); .expect("locked");
for (r, (e, i)) in map.iter() { let mut msg = format!("Handling {} incoming pdus:\n", map.len());
let elapsed = i.elapsed(); for (r, (e, i)) in map.iter() {
-			writeln!(msg, "{} {}: {}m{}s", r, e, elapsed.as_secs() / 60, elapsed.as_secs() % 60)?;
-		}
-	Ok(RoomMessageEventContent::text_plain(&msg))
+			let elapsed = i.elapsed();
+			writeln!(msg, "{} {}: {}m{}s", r, e, elapsed.as_secs() / 60, elapsed.as_secs() % 60)?;
+		}
+		msg
+	};
+	self.write_str(&msg).await
 }

 #[admin_command]
-pub(super) async fn fetch_support_well_known(
-	&self,
-	server_name: Box<ServerName>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn fetch_support_well_known(&self, server_name: OwnedServerName) -> Result {
 	let response = self
 		.services
 		.client
@@ -55,54 +54,44 @@ pub(super) async fn fetch_support_well_known(
 	let text = response.text().await?;

 	if text.is_empty() {
-		return Ok(RoomMessageEventContent::text_plain("Response text/body is empty."));
+		return Err!("Response text/body is empty.");
 	}

 	if text.len() > 1500 {
-		return Ok(RoomMessageEventContent::text_plain(
-			"Response text/body is over 1500 characters, assuming no support well-known.",
-		));
+		return Err!(
+			"Response text/body is over 1500 characters, assuming no support well-known.",
+		);
 	}

 	let json: serde_json::Value = match serde_json::from_str(&text) {
 		| Ok(json) => json,
 		| Err(_) => {
-			return Ok(RoomMessageEventContent::text_plain(
-				"Response text/body is not valid JSON.",
-			));
+			return Err!("Response text/body is not valid JSON.",);
 		},
 	};

 	let pretty_json: String = match serde_json::to_string_pretty(&json) {
 		| Ok(json) => json,
 		| Err(_) => {
-			return Ok(RoomMessageEventContent::text_plain(
-				"Response text/body is not valid JSON.",
-			));
+			return Err!("Response text/body is not valid JSON.",);
 		},
 	};

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Got JSON response:\n\n```json\n{pretty_json}\n```"
-	)))
+	self.write_str(&format!("Got JSON response:\n\n```json\n{pretty_json}\n```"))
+		.await
 }

 #[admin_command]
-pub(super) async fn remote_user_in_rooms(
-	&self,
-	user_id: Box<UserId>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn remote_user_in_rooms(&self, user_id: OwnedUserId) -> Result {
 	if user_id.server_name() == self.services.server.name {
-		return Ok(RoomMessageEventContent::text_plain(
+		return Err!(
 			"User belongs to our server, please use `list-joined-rooms` user admin command \
 			 instead.",
-		));
+		);
 	}

 	if !self.services.users.exists(&user_id).await {
-		return Ok(RoomMessageEventContent::text_plain(
-			"Remote user does not exist in our database.",
-		));
+		return Err!("Remote user does not exist in our database.",);
 	}

 	let mut rooms: Vec<(OwnedRoomId, u64, String)> = self
@@ -115,21 +104,19 @@ pub(super) async fn remote_user_in_rooms(
 		.await;

 	if rooms.is_empty() {
-		return Ok(RoomMessageEventContent::text_plain("User is not in any rooms."));
+		return Err!("User is not in any rooms.");
 	}

 	rooms.sort_by_key(|r| r.1);
 	rooms.reverse();

-	let output = format!(
-		"Rooms {user_id} shares with us ({}):\n```\n{}\n```",
-		rooms.len(),
-		rooms
-			.iter()
-			.map(|(id, members, name)| format!("{id} | Members: {members} | Name: {name}"))
-			.collect::<Vec<_>>()
-			.join("\n")
-	);
+	let num = rooms.len();
+	let body = rooms
+		.iter()
+		.map(|(id, members, name)| format!("{id} | Members: {members} | Name: {name}"))
+		.collect::<Vec<_>>()
+		.join("\n");

-	Ok(RoomMessageEventContent::text_markdown(output))
+	self.write_str(&format!("Rooms {user_id} shares with us ({num}):\n```\n{body}\n```",))
+		.await
 }
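
The hunks above show the refactor pattern repeated throughout this comparison: admin commands drop the `Result<RoomMessageEventContent>` return type in favour of a bare `Result` and stream their output through `self.write_str(..)`. A minimal, self-contained sketch of that shape (stand-in types, not continuwuity's real context; assumes the `futures` crate for `block_on`):

    use std::fmt::{self, Write};

    // Stand-in for the admin-command context: the real write_str forwards
    // output to the admin room, here it just appends to a buffer.
    struct Ctx {
        out: String,
    }

    impl Ctx {
        async fn write_str(&mut self, s: &str) -> Result<(), fmt::Error> {
            self.out.write_str(s)
        }
    }

    fn main() {
        let mut ctx = Ctx { out: String::new() };
        futures::executor::block_on(async {
            // A command body now ends with `self.write_str(&msg).await`
            // instead of `Ok(RoomMessageEventContent::text_plain(&msg))`.
            ctx.write_str("User is not in any rooms.").await.unwrap();
        });
        assert_eq!(ctx.out, "User is not in any rooms.");
    }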

View file

@@ -2,7 +2,7 @@ mod commands;
 use clap::Subcommand;
 use conduwuit::Result;
-use ruma::{RoomId, ServerName, UserId};
+use ruma::{OwnedRoomId, OwnedServerName, OwnedUserId};

 use crate::admin_command_dispatch;
@@ -14,12 +14,12 @@ pub(super) enum FederationCommand {
 	/// - Disables incoming federation handling for a room.
 	DisableRoom {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	/// - Enables incoming federation handling for a room again.
 	EnableRoom {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	/// - Fetch `/.well-known/matrix/support` from the specified server
@@ -32,11 +32,11 @@ pub(super) enum FederationCommand {
 	/// moderation, and security inquiries. This command provides a way to
 	/// easily fetch that information.
 	FetchSupportWellKnown {
-		server_name: Box<ServerName>,
+		server_name: OwnedServerName,
 	},

 	/// - Lists all the rooms we share/track with the specified *remote* user
 	RemoteUserInRooms {
-		user_id: Box<UserId>,
+		user_id: OwnedUserId,
 	},
 }
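
The other recurring change is in the clap argument types: boxed unsized identifiers such as `Box<RoomId>` and `Box<ServerName>` become ruma's dedicated owned types (`OwnedRoomId`, `OwnedServerName`). The same unsized/owned pairing exists in the standard library, so here is a sketch by analogy with `Path`/`PathBuf` (analogy only; ruma's identifier types follow the same shape):

    use std::path::{Path, PathBuf};

    // PathBuf is to Path what OwnedServerName is to ServerName: the named
    // owned type, which is the idiomatic target for parsed CLI arguments.
    fn stash(dir: PathBuf) -> Box<Path> {
        // Both forms own the data; conversion does not copy it.
        dir.into_boxed_path()
    }

    fn main() {
        let owned = PathBuf::from("/var/lib/continuwuity");
        let boxed: Box<Path> = stash(owned);
        let back = PathBuf::from(boxed); // free round-trip back to the owned type
        assert_eq!(back, Path::new("/var/lib/continuwuity"));
    }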

View file

@@ -1,26 +1,22 @@
 use std::time::Duration;

 use conduwuit::{
-	Result, debug, debug_info, debug_warn, error, info, trace, utils::time::parse_timepoint_ago,
+	Err, Result, debug, debug_info, debug_warn, error, info, trace,
+	utils::time::parse_timepoint_ago, warn,
 };
 use conduwuit_service::media::Dim;
-use ruma::{
-	EventId, Mxc, MxcUri, OwnedMxcUri, OwnedServerName, ServerName,
-	events::room::message::RoomMessageEventContent,
-};
+use ruma::{Mxc, OwnedEventId, OwnedMxcUri, OwnedServerName};

 use crate::{admin_command, utils::parse_local_user_id};

 #[admin_command]
 pub(super) async fn delete(
 	&self,
-	mxc: Option<Box<MxcUri>>,
-	event_id: Option<Box<EventId>>,
-) -> Result<RoomMessageEventContent> {
+	mxc: Option<OwnedMxcUri>,
+	event_id: Option<OwnedEventId>,
+) -> Result {
 	if event_id.is_some() && mxc.is_some() {
-		return Ok(RoomMessageEventContent::text_plain(
-			"Please specify either an MXC or an event ID, not both.",
-		));
+		return Err!("Please specify either an MXC or an event ID, not both.",);
 	}

 	if let Some(mxc) = mxc {
@@ -30,9 +26,7 @@ pub(super) async fn delete(
 			.delete(&mxc.as_str().try_into()?)
 			.await?;

-		return Ok(RoomMessageEventContent::text_plain(
-			"Deleted the MXC from our database and on our filesystem.",
-		));
+		return Err!("Deleted the MXC from our database and on our filesystem.",);
 	}

 	if let Some(event_id) = event_id {
@@ -113,41 +107,36 @@ pub(super) async fn delete(
 							let final_url = url.to_string().replace('"', "");
 							mxc_urls.push(final_url);
 						} else {
-							info!(
+							warn!(
 								"Found a URL in the event ID {event_id} but did not \
 								 start with mxc://, ignoring"
 							);
 						}
 					} else {
-						info!("No \"url\" key in \"file\" key.");
+						error!("No \"url\" key in \"file\" key.");
 					}
 				}
 			}
 		} else {
-			return Ok(RoomMessageEventContent::text_plain(
+			return Err!(
 				"Event ID does not have a \"content\" key or failed parsing the \
 				 event ID JSON.",
-			));
+			);
 		}
 	} else {
-		return Ok(RoomMessageEventContent::text_plain(
+		return Err!(
 			"Event ID does not have a \"content\" key, this is not a message or an \
 			 event type that contains media.",
-		));
+		);
 	}
 },
 | _ => {
-	return Ok(RoomMessageEventContent::text_plain(
-		"Event ID does not exist or is not known to us.",
-	));
+	return Err!("Event ID does not exist or is not known to us.",);
 },
 }

 if mxc_urls.is_empty() {
-	info!("Parsed event ID {event_id} but did not contain any MXC URLs.");
-	return Ok(RoomMessageEventContent::text_plain(
-		"Parsed event ID but found no MXC URLs.",
-	));
+	return Err!("Parsed event ID but found no MXC URLs.",);
 }

 let mut mxc_deletion_count: usize = 0;
@@ -170,27 +159,27 @@ pub(super) async fn delete(
 		}
 	}

-	return Ok(RoomMessageEventContent::text_plain(format!(
-		"Deleted {mxc_deletion_count} total MXCs from our database and the filesystem from \
-		 event ID {event_id}."
-	)));
+	return self
+		.write_str(&format!(
+			"Deleted {mxc_deletion_count} total MXCs from our database and the filesystem \
+			 from event ID {event_id}."
+		))
+		.await;
 }

-Ok(RoomMessageEventContent::text_plain(
+Err!(
 	"Please specify either an MXC using --mxc or an event ID using --event-id of the \
-	 message containing an image. See --help for details.",
-))
+	 message containing an image. See --help for details."
+)
 }

 #[admin_command]
-pub(super) async fn delete_list(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn delete_list(&self) -> Result {
 	if self.body.len() < 2
 		|| !self.body[0].trim().starts_with("```")
 		|| self.body.last().unwrap_or(&"").trim() != "```"
 	{
-		return Ok(RoomMessageEventContent::text_plain(
-			"Expected code block in command body. Add --help for details.",
-		));
+		return Err!("Expected code block in command body. Add --help for details.",);
 	}

 	let mut failed_parsed_mxcs: usize = 0;
@@ -204,7 +193,6 @@ pub(super) async fn delete_list(&self) -> Result<RoomMessageEventContent> {
 			.try_into()
 			.inspect_err(|e| {
 				debug_warn!("Failed to parse user-provided MXC URI: {e}");
-
 				failed_parsed_mxcs = failed_parsed_mxcs.saturating_add(1);
 			})
 			.ok()
@@ -227,10 +215,11 @@ pub(super) async fn delete_list(&self) -> Result<RoomMessageEventContent> {
 		}
 	}

-	Ok(RoomMessageEventContent::text_plain(format!(
+	self.write_str(&format!(
 		"Finished bulk MXC deletion, deleted {mxc_deletion_count} total MXCs from our database \
 		 and the filesystem. {failed_parsed_mxcs} MXCs failed to be parsed from the database.",
-	)))
+	))
+	.await
 }

 #[admin_command]
@@ -240,11 +229,9 @@ pub(super) async fn delete_past_remote_media(
 	before: bool,
 	after: bool,
 	yes_i_want_to_delete_local_media: bool,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	if before && after {
-		return Ok(RoomMessageEventContent::text_plain(
-			"Please only pick one argument, --before or --after.",
-		));
+		return Err!("Please only pick one argument, --before or --after.",);
 	}

 	assert!(!(before && after), "--before and --after should not be specified together");
@@ -260,35 +247,28 @@ pub(super) async fn delete_past_remote_media(
 		)
 		.await?;

-	Ok(RoomMessageEventContent::text_plain(format!(
-		"Deleted {deleted_count} total files.",
-	)))
+	self.write_str(&format!("Deleted {deleted_count} total files.",))
+		.await
 }

 #[admin_command]
-pub(super) async fn delete_all_from_user(
-	&self,
-	username: String,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn delete_all_from_user(&self, username: String) -> Result {
 	let user_id = parse_local_user_id(self.services, &username)?;

 	let deleted_count = self.services.media.delete_from_user(&user_id).await?;

-	Ok(RoomMessageEventContent::text_plain(format!(
-		"Deleted {deleted_count} total files.",
-	)))
+	self.write_str(&format!("Deleted {deleted_count} total files.",))
+		.await
 }

 #[admin_command]
 pub(super) async fn delete_all_from_server(
 	&self,
-	server_name: Box<ServerName>,
+	server_name: OwnedServerName,
 	yes_i_want_to_delete_local_media: bool,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	if server_name == self.services.globals.server_name() && !yes_i_want_to_delete_local_media {
-		return Ok(RoomMessageEventContent::text_plain(
-			"This command only works for remote media by default.",
-		));
+		return Err!("This command only works for remote media by default.",);
 	}

 	let Ok(all_mxcs) = self
@@ -298,9 +278,7 @@ pub(super) async fn delete_all_from_server(
 		.await
 		.inspect_err(|e| error!("Failed to get MXC URIs from our database: {e}"))
 	else {
-		return Ok(RoomMessageEventContent::text_plain(
-			"Failed to get MXC URIs from our database",
-		));
+		return Err!("Failed to get MXC URIs from our database",);
 	};

 	let mut deleted_count: usize = 0;
@@ -336,17 +314,16 @@ pub(super) async fn delete_all_from_server(
 		}
 	}

-	Ok(RoomMessageEventContent::text_plain(format!(
-		"Deleted {deleted_count} total files.",
-	)))
+	self.write_str(&format!("Deleted {deleted_count} total files.",))
+		.await
 }

 #[admin_command]
-pub(super) async fn get_file_info(&self, mxc: OwnedMxcUri) -> Result<RoomMessageEventContent> {
+pub(super) async fn get_file_info(&self, mxc: OwnedMxcUri) -> Result {
 	let mxc: Mxc<'_> = mxc.as_str().try_into()?;

 	let metadata = self.services.media.get_metadata(&mxc).await;

-	Ok(RoomMessageEventContent::notice_markdown(format!("```\n{metadata:#?}\n```")))
+	self.write_str(&format!("```\n{metadata:#?}\n```")).await
 }

 #[admin_command]
@@ -355,7 +332,7 @@ pub(super) async fn get_remote_file(
 	mxc: OwnedMxcUri,
 	server: Option<OwnedServerName>,
 	timeout: u32,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	let mxc: Mxc<'_> = mxc.as_str().try_into()?;
 	let timeout = Duration::from_millis(timeout.into());
 	let mut result = self
@@ -368,8 +345,8 @@ pub(super) async fn get_remote_file(
 	let len = result.content.as_ref().expect("content").len();
 	result.content.as_mut().expect("content").clear();

-	let out = format!("```\n{result:#?}\nreceived {len} bytes for file content.\n```");
-	Ok(RoomMessageEventContent::notice_markdown(out))
+	self.write_str(&format!("```\n{result:#?}\nreceived {len} bytes for file content.\n```"))
+		.await
 }

 #[admin_command]
@@ -380,7 +357,7 @@ pub(super) async fn get_remote_thumbnail(
 	timeout: u32,
 	width: u32,
 	height: u32,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	let mxc: Mxc<'_> = mxc.as_str().try_into()?;
 	let timeout = Duration::from_millis(timeout.into());
 	let dim = Dim::new(width, height, None);
@@ -394,6 +371,6 @@ pub(super) async fn get_remote_thumbnail(
 	let len = result.content.as_ref().expect("content").len();
 	result.content.as_mut().expect("content").clear();

-	let out = format!("```\n{result:#?}\nreceived {len} bytes for file content.\n```");
-	Ok(RoomMessageEventContent::notice_markdown(out))
+	self.write_str(&format!("```\n{result:#?}\nreceived {len} bytes for file content.\n```"))
+		.await
 }
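
These hunks also replace `return Ok(RoomMessageEventContent::text_plain(..))` early exits with conduwuit's `Err!` macro. A hypothetical sketch of how such a macro can be shaped (the real macro lives in conduwuit-core and may attach error kinds; this is only a guess at the ergonomics the diff relies on):

    // Assumed ergonomics: Err!(..) formats its arguments and evaluates to an
    // Err(..) expression, so `return Err!("...")` ends the command with an error.
    macro_rules! Err {
        ($($arg:tt)*) => { Err(format!($($arg)*).into()) };
    }

    // Mirrors the bare `Result` in the new signatures: a default Ok type.
    type Result<T = ()> = std::result::Result<T, Box<dyn std::error::Error>>;

    fn check_len(len: usize) -> Result {
        if len > 1500 {
            return Err!("Response text/body is over {len} characters, assuming no support well-known.");
        }
        Ok(())
    }

    fn main() {
        assert!(check_len(2000).is_err());
        assert!(check_len(10).is_ok());
    }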

View file

@@ -3,7 +3,7 @@ mod commands;
 use clap::Subcommand;
 use conduwuit::Result;
-use ruma::{EventId, MxcUri, OwnedMxcUri, OwnedServerName, ServerName};
+use ruma::{OwnedEventId, OwnedMxcUri, OwnedServerName};

 use crate::admin_command_dispatch;
@@ -15,12 +15,12 @@ pub(super) enum MediaCommand {
 	Delete {
 		/// The MXC URL to delete
 		#[arg(long)]
-		mxc: Option<Box<MxcUri>>,
+		mxc: Option<OwnedMxcUri>,

 		/// - The message event ID which contains the media and thumbnail MXC
 		///   URLs
 		#[arg(long)]
-		event_id: Option<Box<EventId>>,
+		event_id: Option<OwnedEventId>,
 	},

 	/// - Deletes a codeblock list of MXC URLs from our database and on the
@@ -57,7 +57,7 @@ pub(super) enum MediaCommand {
 	/// - Deletes all remote media from the specified remote server. This will
 	///   always ignore errors by default.
 	DeleteAllFromServer {
-		server_name: Box<ServerName>,
+		server_name: OwnedServerName,

 		/// Long argument to delete local media
 		#[arg(long)]

View file

@@ -4,7 +4,7 @@
 #![allow(clippy::too_many_arguments)]

 pub(crate) mod admin;
-pub(crate) mod command;
+pub(crate) mod context;
 pub(crate) mod processor;
 mod tests;
 pub(crate) mod utils;
@@ -23,13 +23,9 @@ extern crate conduwuit_api as api;
 extern crate conduwuit_core as conduwuit;
 extern crate conduwuit_service as service;

-pub(crate) use conduwuit::Result;
 pub(crate) use conduwuit_macros::{admin_command, admin_command_dispatch};

-pub(crate) use crate::{
-	command::Command,
-	utils::{escape_html, get_room_info},
-};
+pub(crate) use crate::{context::Context, utils::get_room_info};

 pub(crate) const PAGE_SIZE: usize = 100;

View file

@@ -33,7 +33,7 @@ use service::{
 use tracing::Level;
 use tracing_subscriber::{EnvFilter, filter::LevelFilter};

-use crate::{Command, admin, admin::AdminCommand};
+use crate::{admin, admin::AdminCommand, context::Context};

 #[must_use]
 pub(super) fn complete(line: &str) -> String { complete_command(AdminCommand::command(), line) }
@@ -58,7 +58,7 @@ async fn process_command(services: Arc<Services>, input: &CommandInput) -> Proce
 		| Ok(parsed) => parsed,
 	};

-	let context = Command {
+	let context = Context {
 		services: &services,
 		body: &body,
 		timer: SystemTime::now(),
@@ -103,7 +103,7 @@ fn handle_panic(error: &Error, command: &CommandInput) -> ProcessorResult {
 /// Parse and process a message from the admin room
 async fn process(
-	context: &Command<'_>,
+	context: &Context<'_>,
 	command: AdminCommand,
 	args: &[String],
 ) -> (Result, String) {
@@ -132,7 +132,7 @@ async fn process(
 	(result, output)
 }

-fn capture_create(context: &Command<'_>) -> (Arc<Capture>, Arc<Mutex<String>>) {
+fn capture_create(context: &Context<'_>) -> (Arc<Capture>, Arc<Mutex<String>>) {
 	let env_config = &context.services.server.config.admin_log_capture;
 	let env_filter = EnvFilter::try_new(env_config).unwrap_or_else(|e| {
 		warn!("admin_log_capture filter invalid: {e:?}");

View file

@@ -1,7 +1,7 @@
 use clap::Subcommand;
 use conduwuit::Result;
 use futures::StreamExt;
-use ruma::{RoomId, UserId, events::room::message::RoomMessageEventContent};
+use ruma::{OwnedRoomId, OwnedUserId};

 use crate::{admin_command, admin_command_dispatch};
@@ -12,31 +12,31 @@ pub(crate) enum AccountDataCommand {
 	/// - Returns all changes to the account data that happened after `since`.
 	ChangesSince {
 		/// Full user ID
-		user_id: Box<UserId>,
+		user_id: OwnedUserId,

 		/// UNIX timestamp since (u64)
 		since: u64,

 		/// Optional room ID of the account data
-		room_id: Option<Box<RoomId>>,
+		room_id: Option<OwnedRoomId>,
 	},

 	/// - Searches the account data for a specific kind.
 	AccountDataGet {
 		/// Full user ID
-		user_id: Box<UserId>,
+		user_id: OwnedUserId,

 		/// Account data event type
 		kind: String,

 		/// Optional room ID of the account data
-		room_id: Option<Box<RoomId>>,
+		room_id: Option<OwnedRoomId>,
 	},
 }

 #[admin_command]
 async fn changes_since(
 	&self,
-	user_id: Box<UserId>,
+	user_id: OwnedUserId,
 	since: u64,
-	room_id: Option<Box<RoomId>>,
-) -> Result<RoomMessageEventContent> {
+	room_id: Option<OwnedRoomId>,
+) -> Result {
 	let timer = tokio::time::Instant::now();
 	let results: Vec<_> = self
 		.services
@@ -46,18 +46,17 @@ async fn changes_since(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"))
+		.await
 }

 #[admin_command]
 async fn account_data_get(
 	&self,
-	user_id: Box<UserId>,
+	user_id: OwnedUserId,
 	kind: String,
-	room_id: Option<Box<RoomId>>,
-) -> Result<RoomMessageEventContent> {
+	room_id: Option<OwnedRoomId>,
+) -> Result {
 	let timer = tokio::time::Instant::now();
 	let results = self
 		.services
@@ -66,7 +65,6 @@ async fn account_data_get(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"))
+		.await
 }

View file

@@ -1,7 +1,8 @@
 use clap::Subcommand;
 use conduwuit::Result;
+use futures::TryStreamExt;

-use crate::Command;
+use crate::Context;

 #[derive(Debug, Subcommand)]
 /// All the getters and iterators from src/database/key_value/appservice.rs
@@ -9,7 +10,7 @@ pub(crate) enum AppserviceCommand {
 	/// - Gets the appservice registration info/details from the ID as a string
 	GetRegistration {
 		/// Appservice registration ID
-		appservice_id: Box<str>,
+		appservice_id: String,
 	},

 	/// - Gets all appservice registrations with their ID and registration info
@@ -17,7 +18,7 @@ pub(crate) enum AppserviceCommand {
 }

 /// All the getters and iterators from src/database/key_value/appservice.rs
-pub(super) async fn process(subcommand: AppserviceCommand, context: &Command<'_>) -> Result {
+pub(super) async fn process(subcommand: AppserviceCommand, context: &Context<'_>) -> Result {
 	let services = context.services;

 	match subcommand {
@@ -31,7 +32,7 @@ pub(super) async fn process(subcommand: AppserviceCommand, context: &Command<'_>
 		},
 		| AppserviceCommand::All => {
 			let timer = tokio::time::Instant::now();
-			let results = services.appservice.all().await;
+			let results: Vec<_> = services.appservice.iter_db_ids().try_collect().await?;
 			let query_time = timer.elapsed();

 			write!(context, "Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```")
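
`appservice.all()` gives way to `iter_db_ids().try_collect()`, i.e. a fallible stream collected into a `Vec`, failing the whole query on the first bad row. A small sketch of the `TryStreamExt::try_collect` behaviour being relied on (using the `futures` crate; the stream contents here are made up):

    use futures::{executor::block_on, stream, stream::TryStreamExt};

    fn main() {
        // A stream of Result items models `iter_db_ids()`.
        let ok_rows = stream::iter(vec![Ok::<_, String>("bridge_a"), Ok("bridge_b")]);
        let ids: Result<Vec<_>, _> = block_on(ok_rows.try_collect());
        assert_eq!(ids.unwrap(), vec!["bridge_a", "bridge_b"]);

        // One Err row aborts the collection, which `?` then propagates.
        let bad_rows = stream::iter(vec![Ok("bridge_a"), Err("corrupt row".to_owned())]);
        let ids: Result<Vec<&str>, String> = block_on(bad_rows.try_collect());
        assert!(ids.is_err());
    }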

View file

@@ -1,8 +1,8 @@
 use clap::Subcommand;
 use conduwuit::Result;
-use ruma::ServerName;
+use ruma::OwnedServerName;

-use crate::Command;
+use crate::Context;

 #[derive(Debug, Subcommand)]
 /// All the getters and iterators from src/database/key_value/globals.rs
@@ -11,17 +11,15 @@ pub(crate) enum GlobalsCommand {
 	CurrentCount,

-	LastCheckForUpdatesId,
-
 	/// - This returns an empty `Ok(BTreeMap<..>)` when there are no keys found
 	///   for the server.
 	SigningKeysFor {
-		origin: Box<ServerName>,
+		origin: OwnedServerName,
 	},
 }

 /// All the getters and iterators from src/database/key_value/globals.rs
-pub(super) async fn process(subcommand: GlobalsCommand, context: &Command<'_>) -> Result {
+pub(super) async fn process(subcommand: GlobalsCommand, context: &Context<'_>) -> Result {
 	let services = context.services;

 	match subcommand {
@@ -39,13 +37,6 @@ pub(super) async fn process(subcommand: GlobalsCommand, context: &Command<'_>) -
 			write!(context, "Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```")
 		},

-		| GlobalsCommand::LastCheckForUpdatesId => {
-			let timer = tokio::time::Instant::now();
-			let results = services.updates.last_check_for_updates_id().await;
-			let query_time = timer.elapsed();
-
-			write!(context, "Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```")
-		},
 		| GlobalsCommand::SigningKeysFor { origin } => {
 			let timer = tokio::time::Instant::now();
 			let results = services.server_keys.verify_keys_for(&origin).await;

View file

@@ -1,9 +1,9 @@
 use clap::Subcommand;
 use conduwuit::Result;
 use futures::StreamExt;
-use ruma::UserId;
+use ruma::OwnedUserId;

-use crate::Command;
+use crate::Context;

 #[derive(Debug, Subcommand)]
 /// All the getters and iterators from src/database/key_value/presence.rs
@@ -11,7 +11,7 @@ pub(crate) enum PresenceCommand {
 	/// - Returns the latest presence event for the given user.
 	GetPresence {
 		/// Full user ID
-		user_id: Box<UserId>,
+		user_id: OwnedUserId,
 	},

 	/// - Iterator of the most recent presence updates that happened after the
@@ -23,7 +23,7 @@ pub(crate) enum PresenceCommand {
 }

 /// All the getters and iterators in key_value/presence.rs
-pub(super) async fn process(subcommand: PresenceCommand, context: &Command<'_>) -> Result {
+pub(super) async fn process(subcommand: PresenceCommand, context: &Context<'_>) -> Result {
 	let services = context.services;

 	match subcommand {

View file

@@ -1,19 +1,19 @@
 use clap::Subcommand;
 use conduwuit::Result;
-use ruma::UserId;
+use ruma::OwnedUserId;

-use crate::Command;
+use crate::Context;

 #[derive(Debug, Subcommand)]
 pub(crate) enum PusherCommand {
 	/// - Returns all the pushers for the user.
 	GetPushers {
 		/// Full user ID
-		user_id: Box<UserId>,
+		user_id: OwnedUserId,
 	},
 }

-pub(super) async fn process(subcommand: PusherCommand, context: &Command<'_>) -> Result {
+pub(super) async fn process(subcommand: PusherCommand, context: &Context<'_>) -> Result {
 	let services = context.services;

 	match subcommand {

View file

@@ -11,7 +11,6 @@ use conduwuit::{
 use conduwuit_database::Map;
 use conduwuit_service::Services;
 use futures::{FutureExt, Stream, StreamExt, TryStreamExt};
-use ruma::events::room::message::RoomMessageEventContent;
 use tokio::time::Instant;

 use crate::{admin_command, admin_command_dispatch};
@@ -170,7 +169,7 @@ pub(super) async fn compact(
 	into: Option<usize>,
 	parallelism: Option<usize>,
 	exhaustive: bool,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	use conduwuit_database::compact::Options;

 	let default_all_maps: Option<_> = map.is_none().then(|| {
@@ -221,17 +220,11 @@ pub(super) async fn compact(
 	let results = results.await;
 	let query_time = timer.elapsed();
 	self.write_str(&format!("Jobs completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"))
-		.await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+		.await
 }

 #[admin_command]
-pub(super) async fn raw_count(
-	&self,
-	map: Option<String>,
-	prefix: Option<String>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn raw_count(&self, map: Option<String>, prefix: Option<String>) -> Result {
 	let prefix = prefix.as_deref().unwrap_or(EMPTY);

 	let timer = Instant::now();
@@ -242,17 +235,11 @@ pub(super) async fn raw_count(
 	let query_time = timer.elapsed();

 	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{count:#?}\n```"))
-		.await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+		.await
 }

 #[admin_command]
-pub(super) async fn raw_keys(
-	&self,
-	map: String,
-	prefix: Option<String>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn raw_keys(&self, map: String, prefix: Option<String>) -> Result {
 	writeln!(self, "```").boxed().await?;

 	let map = self.services.db.get(map.as_str())?;
@@ -266,18 +253,12 @@ pub(super) async fn raw_keys(
 		.await?;

 	let query_time = timer.elapsed();
-	let out = format!("\n```\n\nQuery completed in {query_time:?}");
-	self.write_str(out.as_str()).await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+	self.write_str(&format!("\n```\n\nQuery completed in {query_time:?}"))
+		.await
 }

 #[admin_command]
-pub(super) async fn raw_keys_sizes(
-	&self,
-	map: Option<String>,
-	prefix: Option<String>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn raw_keys_sizes(&self, map: Option<String>, prefix: Option<String>) -> Result {
 	let prefix = prefix.as_deref().unwrap_or(EMPTY);

 	let timer = Instant::now();
@@ -294,18 +275,12 @@ pub(super) async fn raw_keys_sizes(
 		.await;

 	let query_time = timer.elapsed();
-	let result = format!("```\n{result:#?}\n```\n\nQuery completed in {query_time:?}");
-	self.write_str(result.as_str()).await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+	self.write_str(&format!("```\n{result:#?}\n```\n\nQuery completed in {query_time:?}"))
+		.await
 }

 #[admin_command]
-pub(super) async fn raw_keys_total(
-	&self,
-	map: Option<String>,
-	prefix: Option<String>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn raw_keys_total(&self, map: Option<String>, prefix: Option<String>) -> Result {
 	let prefix = prefix.as_deref().unwrap_or(EMPTY);

 	let timer = Instant::now();
@@ -318,19 +293,12 @@ pub(super) async fn raw_keys_total(
 		.await;

 	let query_time = timer.elapsed();

 	self.write_str(&format!("```\n{result:#?}\n\n```\n\nQuery completed in {query_time:?}"))
-		.await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+		.await
 }

 #[admin_command]
-pub(super) async fn raw_vals_sizes(
-	&self,
-	map: Option<String>,
-	prefix: Option<String>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn raw_vals_sizes(&self, map: Option<String>, prefix: Option<String>) -> Result {
 	let prefix = prefix.as_deref().unwrap_or(EMPTY);

 	let timer = Instant::now();
@@ -348,18 +316,12 @@ pub(super) async fn raw_vals_sizes(
 		.await;

 	let query_time = timer.elapsed();
-	let result = format!("```\n{result:#?}\n```\n\nQuery completed in {query_time:?}");
-	self.write_str(result.as_str()).await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+	self.write_str(&format!("```\n{result:#?}\n```\n\nQuery completed in {query_time:?}"))
+		.await
 }

 #[admin_command]
-pub(super) async fn raw_vals_total(
-	&self,
-	map: Option<String>,
-	prefix: Option<String>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn raw_vals_total(&self, map: Option<String>, prefix: Option<String>) -> Result {
 	let prefix = prefix.as_deref().unwrap_or(EMPTY);

 	let timer = Instant::now();
@@ -373,19 +335,12 @@ pub(super) async fn raw_vals_total(
 		.await;

 	let query_time = timer.elapsed();

 	self.write_str(&format!("```\n{result:#?}\n\n```\n\nQuery completed in {query_time:?}"))
-		.await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+		.await
 }

 #[admin_command]
-pub(super) async fn raw_iter(
-	&self,
-	map: String,
-	prefix: Option<String>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn raw_iter(&self, map: String, prefix: Option<String>) -> Result {
 	writeln!(self, "```").await?;

 	let map = self.services.db.get(&map)?;
@@ -401,9 +356,7 @@ pub(super) async fn raw_iter(
 	let query_time = timer.elapsed();
 	self.write_str(&format!("\n```\n\nQuery completed in {query_time:?}"))
-		.await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+		.await
 }

 #[admin_command]
@@ -412,7 +365,7 @@ pub(super) async fn raw_keys_from(
 	map: String,
 	start: String,
 	limit: Option<usize>,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	writeln!(self, "```").await?;

 	let map = self.services.db.get(&map)?;
@@ -426,9 +379,7 @@ pub(super) async fn raw_keys_from(
 	let query_time = timer.elapsed();
 	self.write_str(&format!("\n```\n\nQuery completed in {query_time:?}"))
-		.await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+		.await
 }

 #[admin_command]
@@ -437,7 +388,7 @@ pub(super) async fn raw_iter_from(
 	map: String,
 	start: String,
 	limit: Option<usize>,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	let map = self.services.db.get(&map)?;
 	let timer = Instant::now();
 	let result = map
@@ -449,41 +400,38 @@ pub(super) async fn raw_iter_from(
 		.await?;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-pub(super) async fn raw_del(&self, map: String, key: String) -> Result<RoomMessageEventContent> {
+pub(super) async fn raw_del(&self, map: String, key: String) -> Result {
 	let map = self.services.db.get(&map)?;
 	let timer = Instant::now();
 	map.remove(&key);
-	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Operation completed in {query_time:?}"
-	)))
+	let query_time = timer.elapsed();
+	self.write_str(&format!("Operation completed in {query_time:?}"))
+		.await
 }

 #[admin_command]
-pub(super) async fn raw_get(&self, map: String, key: String) -> Result<RoomMessageEventContent> {
+pub(super) async fn raw_get(&self, map: String, key: String) -> Result {
 	let map = self.services.db.get(&map)?;
 	let timer = Instant::now();
 	let handle = map.get(&key).await?;
 	let query_time = timer.elapsed();

 	let result = String::from_utf8_lossy(&handle);
-
-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:?}\n```"))
+		.await
 }

 #[admin_command]
-pub(super) async fn raw_maps(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn raw_maps(&self) -> Result {
 	let list: Vec<_> = self.services.db.iter().map(at!(0)).copied().collect();

-	Ok(RoomMessageEventContent::notice_markdown(format!("{list:#?}")))
+	self.write_str(&format!("{list:#?}")).await
 }

 fn with_maps_or<'a>(
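
Every query handler here follows the same timing envelope: start an `Instant`, run the lookup, then render the elapsed time and a `{:#?}` dump of the result between code fences. A self-contained sketch of that envelope (plain std, with a dummy workload in place of the database call):

    use std::time::Instant;

    fn main() {
        let timer = Instant::now();
        // Dummy workload standing in for `map.count_prefix(prefix)` etc.
        let count = (0..1_000_000u64).filter(|n| n % 7 == 0).count();
        let query_time = timer.elapsed();

        // Same shape the handlers write back into the admin room.
        println!("Query completed in {query_time:?}:\n\n```rs\n{count:#?}\n```");
    }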

View file

@@ -1,7 +1,7 @@
 use clap::Subcommand;
 use conduwuit::{Result, utils::time};
 use futures::StreamExt;
-use ruma::{OwnedServerName, events::room::message::RoomMessageEventContent};
+use ruma::OwnedServerName;

 use crate::{admin_command, admin_command_dispatch};
@@ -21,10 +21,7 @@ pub(crate) enum ResolverCommand {
 }

 #[admin_command]
-async fn destinations_cache(
-	&self,
-	server_name: Option<OwnedServerName>,
-) -> Result<RoomMessageEventContent> {
+async fn destinations_cache(&self, server_name: Option<OwnedServerName>) -> Result {
 	use service::resolver::cache::CachedDest;

 	writeln!(self, "| Server Name | Destination | Hostname | Expires |").await?;
@@ -44,11 +41,11 @@ async fn destinations_cache(
 			.await?;
 	}

-	Ok(RoomMessageEventContent::notice_plain(""))
+	Ok(())
 }

 #[admin_command]
-async fn overrides_cache(&self, server_name: Option<String>) -> Result<RoomMessageEventContent> {
+async fn overrides_cache(&self, server_name: Option<String>) -> Result {
 	use service::resolver::cache::CachedOverride;

 	writeln!(self, "| Server Name | IP | Port | Expires | Overriding |").await?;
@@ -70,5 +67,5 @@ async fn overrides_cache(&self, server_name: Option<String>) -> Result<RoomMessa
 			.await?;
 	}

-	Ok(RoomMessageEventContent::notice_plain(""))
+	Ok(())
 }

View file

@@ -1,22 +1,22 @@
 use clap::Subcommand;
 use conduwuit::Result;
 use futures::StreamExt;
-use ruma::{RoomAliasId, RoomId};
+use ruma::{OwnedRoomAliasId, OwnedRoomId};

-use crate::Command;
+use crate::Context;

 #[derive(Debug, Subcommand)]
 /// All the getters and iterators from src/database/key_value/rooms/alias.rs
 pub(crate) enum RoomAliasCommand {
 	ResolveLocalAlias {
 		/// Full room alias
-		alias: Box<RoomAliasId>,
+		alias: OwnedRoomAliasId,
 	},

 	/// - Iterator of all our local room aliases for the room ID
 	LocalAliasesForRoom {
 		/// Full room ID
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	/// - Iterator of all our local aliases in our database with their room IDs
@@ -24,7 +24,7 @@ pub(crate) enum RoomAliasCommand {
 }

 /// All the getters and iterators in src/database/key_value/rooms/alias.rs
-pub(super) async fn process(subcommand: RoomAliasCommand, context: &Command<'_>) -> Result {
+pub(super) async fn process(subcommand: RoomAliasCommand, context: &Context<'_>) -> Result {
 	let services = context.services;

 	match subcommand {

View file

@@ -1,85 +1,85 @@
 use clap::Subcommand;
-use conduwuit::{Error, Result};
+use conduwuit::Result;
 use futures::StreamExt;
-use ruma::{RoomId, ServerName, UserId, events::room::message::RoomMessageEventContent};
+use ruma::{OwnedRoomId, OwnedServerName, OwnedUserId};

-use crate::Command;
+use crate::Context;

 #[derive(Debug, Subcommand)]
 pub(crate) enum RoomStateCacheCommand {
 	ServerInRoom {
-		server: Box<ServerName>,
-		room_id: Box<RoomId>,
+		server: OwnedServerName,
+		room_id: OwnedRoomId,
 	},

 	RoomServers {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	ServerRooms {
-		server: Box<ServerName>,
+		server: OwnedServerName,
 	},

 	RoomMembers {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	LocalUsersInRoom {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	ActiveLocalUsersInRoom {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	RoomJoinedCount {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	RoomInvitedCount {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	RoomUserOnceJoined {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	RoomMembersInvited {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	GetInviteCount {
-		room_id: Box<RoomId>,
-		user_id: Box<UserId>,
+		room_id: OwnedRoomId,
+		user_id: OwnedUserId,
 	},

 	GetLeftCount {
-		room_id: Box<RoomId>,
-		user_id: Box<UserId>,
+		room_id: OwnedRoomId,
+		user_id: OwnedUserId,
 	},

 	RoomsJoined {
-		user_id: Box<UserId>,
+		user_id: OwnedUserId,
 	},

 	RoomsLeft {
-		user_id: Box<UserId>,
+		user_id: OwnedUserId,
 	},

 	RoomsInvited {
-		user_id: Box<UserId>,
+		user_id: OwnedUserId,
 	},

 	InviteState {
-		user_id: Box<UserId>,
-		room_id: Box<RoomId>,
+		user_id: OwnedUserId,
+		room_id: OwnedRoomId,
 	},
 }

-pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command<'_>) -> Result {
+pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Context<'_>) -> Result {
 	let services = context.services;

-	let c = match subcommand {
+	match subcommand {
 		| RoomStateCacheCommand::ServerInRoom { server, room_id } => {
 			let timer = tokio::time::Instant::now();
 			let result = services
@@ -89,9 +89,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::RoomServers { room_id } => {
 			let timer = tokio::time::Instant::now();
@@ -104,9 +106,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::ServerRooms { server } => {
 			let timer = tokio::time::Instant::now();
@@ -119,9 +123,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::RoomMembers { room_id } => {
 			let timer = tokio::time::Instant::now();
@@ -134,9 +140,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
				))
+				.await
 		},
 		| RoomStateCacheCommand::LocalUsersInRoom { room_id } => {
 			let timer = tokio::time::Instant::now();
@@ -149,9 +157,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::ActiveLocalUsersInRoom { room_id } => {
 			let timer = tokio::time::Instant::now();
@@ -164,18 +174,22 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::RoomJoinedCount { room_id } => {
 			let timer = tokio::time::Instant::now();
 			let results = services.rooms.state_cache.room_joined_count(&room_id).await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::RoomInvitedCount { room_id } => {
 			let timer = tokio::time::Instant::now();
@@ -186,9 +200,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::RoomUserOnceJoined { room_id } => {
 			let timer = tokio::time::Instant::now();
@@ -201,9 +217,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::RoomMembersInvited { room_id } => {
 			let timer = tokio::time::Instant::now();
@@ -216,9 +234,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::GetInviteCount { room_id, user_id } => {
 			let timer = tokio::time::Instant::now();
@@ -229,9 +249,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::GetLeftCount { room_id, user_id } => {
 			let timer = tokio::time::Instant::now();
@@ -242,9 +264,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::RoomsJoined { user_id } => {
 			let timer = tokio::time::Instant::now();
@@ -257,9 +281,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::RoomsInvited { user_id } => {
 			let timer = tokio::time::Instant::now();
@@ -271,9 +297,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::RoomsLeft { user_id } => {
 			let timer = tokio::time::Instant::now();
@@ -285,9 +313,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 		| RoomStateCacheCommand::InviteState { user_id, room_id } => {
 			let timer = tokio::time::Instant::now();
@@ -298,13 +328,11 @@ pub(super) async fn process(subcommand: RoomStateCacheCommand, context: &Command
 				.await;
 			let query_time = timer.elapsed();

-			Result::<_, Error>::Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
-	}?;
-
-	context.write_str(c.body()).await?;
-
-	Ok(())
+	}
 }
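
After this rewrite each match arm finishes the command itself, awaiting `context.write_str(..)`, instead of evaluating to a `RoomMessageEventContent` that the tail of the function used to render. A reduced sketch of the control-flow change (stand-in writer and made-up counts; assumes the `futures` crate for `block_on`):

    use std::fmt::{self, Write};

    struct Ctx {
        out: String,
    }

    impl Ctx {
        async fn write_str(&mut self, s: &str) -> Result<(), fmt::Error> {
            self.out.write_str(s)
        }
    }

    enum Cmd {
        RoomJoinedCount,
        RoomInvitedCount,
    }

    // Each arm now ends in `.await`; there is no trailing
    // `context.write_str(c.body())` after the match.
    async fn process(cmd: Cmd, context: &mut Ctx) -> Result<(), fmt::Error> {
        match cmd {
            Cmd::RoomJoinedCount => context.write_str("joined: 42").await,
            Cmd::RoomInvitedCount => context.write_str("invited: 7").await,
        }
    }

    fn main() {
        let mut ctx = Ctx { out: String::new() };
        futures::executor::block_on(process(Cmd::RoomJoinedCount, &mut ctx)).unwrap();
        assert_eq!(ctx.out, "joined: 42");
    }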

View file

@@ -1,7 +1,7 @@
 use clap::Subcommand;
 use conduwuit::{PduCount, Result, utils::stream::TryTools};
 use futures::TryStreamExt;
-use ruma::{OwnedRoomOrAliasId, events::room::message::RoomMessageEventContent};
+use ruma::OwnedRoomOrAliasId;

 use crate::{admin_command, admin_command_dispatch};
@@ -24,7 +24,7 @@ pub(crate) enum RoomTimelineCommand {
 }

 #[admin_command]
-pub(super) async fn last(&self, room_id: OwnedRoomOrAliasId) -> Result<RoomMessageEventContent> {
+pub(super) async fn last(&self, room_id: OwnedRoomOrAliasId) -> Result {
 	let room_id = self.services.rooms.alias.resolve(&room_id).await?;

 	let result = self
@@ -34,7 +34,7 @@ pub(super) async fn last(&self, room_id: OwnedRoomOrAliasId) -> Result<RoomMessa
 		.last_timeline_count(None, &room_id)
 		.await?;

-	Ok(RoomMessageEventContent::notice_markdown(format!("{result:#?}")))
+	self.write_str(&format!("{result:#?}")).await
 }

 #[admin_command]
@@ -43,7 +43,7 @@ pub(super) async fn pdus(
 	room_id: OwnedRoomOrAliasId,
 	from: Option<String>,
 	limit: Option<usize>,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	let room_id = self.services.rooms.alias.resolve(&room_id).await?;

 	let from: Option<PduCount> = from.as_deref().map(str::parse).transpose()?;
@@ -57,5 +57,5 @@ pub(super) async fn pdus(
 		.try_collect()
 		.await?;

-	Ok(RoomMessageEventContent::notice_markdown(format!("{result:#?}")))
+	self.write_str(&format!("{result:#?}")).await
 }

View file

@@ -1,10 +1,10 @@
 use clap::Subcommand;
-use conduwuit::Result;
+use conduwuit::{Err, Result};
 use futures::StreamExt;
-use ruma::{ServerName, UserId, events::room::message::RoomMessageEventContent};
+use ruma::{OwnedServerName, OwnedUserId};
 use service::sending::Destination;

-use crate::Command;
+use crate::Context;

 #[derive(Debug, Subcommand)]
 /// All the getters and iterators from src/database/key_value/sending.rs
@@ -27,9 +27,9 @@ pub(crate) enum SendingCommand {
 		#[arg(short, long)]
 		appservice_id: Option<String>,
 		#[arg(short, long)]
-		server_name: Option<Box<ServerName>>,
+		server_name: Option<OwnedServerName>,
 		#[arg(short, long)]
-		user_id: Option<Box<UserId>>,
+		user_id: Option<OwnedUserId>,
 		#[arg(short, long)]
 		push_key: Option<String>,
 	},
@@ -49,30 +49,20 @@ pub(crate) enum SendingCommand {
 		#[arg(short, long)]
 		appservice_id: Option<String>,
 		#[arg(short, long)]
-		server_name: Option<Box<ServerName>>,
+		server_name: Option<OwnedServerName>,
 		#[arg(short, long)]
-		user_id: Option<Box<UserId>>,
+		user_id: Option<OwnedUserId>,
 		#[arg(short, long)]
 		push_key: Option<String>,
 	},
 	GetLatestEduCount {
-		server_name: Box<ServerName>,
+		server_name: OwnedServerName,
 	},
 }

 /// All the getters and iterators in key_value/sending.rs
-pub(super) async fn process(subcommand: SendingCommand, context: &Command<'_>) -> Result {
-	let c = reprocess(subcommand, context).await?;
-	context.write_str(c.body()).await?;
-	Ok(())
-}
-
-/// All the getters and iterators in key_value/sending.rs
-pub(super) async fn reprocess(
-	subcommand: SendingCommand,
-	context: &Command<'_>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn process(subcommand: SendingCommand, context: &Context<'_>) -> Result {
 	let services = context.services;
 	match subcommand {
@@ -82,9 +72,11 @@ pub(super) async fn reprocess(
 			let active_requests = results.collect::<Vec<_>>().await;
 			let query_time = timer.elapsed();

-			Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{active_requests:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{active_requests:#?}\n```"
+				))
+				.await
 		},
 		| SendingCommand::QueuedRequests {
 			appservice_id,
@@ -97,19 +89,19 @@ pub(super) async fn reprocess(
 				&& user_id.is_none()
 				&& push_key.is_none()
 			{
-				return Ok(RoomMessageEventContent::text_plain(
+				return Err!(
 					"An appservice ID, server name, or a user ID with push key must be \
 					specified via arguments. See --help for more details.",
-				));
+				);
 			}
 			let timer = tokio::time::Instant::now();
 			let results = match (appservice_id, server_name, user_id, push_key) {
 				| (Some(appservice_id), None, None, None) => {
 					if appservice_id.is_empty() {
-						return Ok(RoomMessageEventContent::text_plain(
+						return Err!(
 							"An appservice ID, server name, or a user ID with push key must be \
 							specified via arguments. See --help for more details.",
-						));
+						);
 					}
 					services
@@ -120,40 +112,42 @@ pub(super) async fn reprocess(
 				| (None, Some(server_name), None, None) => services
 					.sending
 					.db
-					.queued_requests(&Destination::Federation(server_name.into())),
+					.queued_requests(&Destination::Federation(server_name)),
 				| (None, None, Some(user_id), Some(push_key)) => {
 					if push_key.is_empty() {
-						return Ok(RoomMessageEventContent::text_plain(
+						return Err!(
 							"An appservice ID, server name, or a user ID with push key must be \
 							specified via arguments. See --help for more details.",
-						));
+						);
 					}
 					services
 						.sending
 						.db
-						.queued_requests(&Destination::Push(user_id.into(), push_key))
+						.queued_requests(&Destination::Push(user_id, push_key))
 				},
 				| (Some(_), Some(_), Some(_), Some(_)) => {
-					return Ok(RoomMessageEventContent::text_plain(
+					return Err!(
 						"An appservice ID, server name, or a user ID with push key must be \
 						specified via arguments. Not all of them See --help for more details.",
-					));
+					);
 				},
 				| _ => {
-					return Ok(RoomMessageEventContent::text_plain(
+					return Err!(
 						"An appservice ID, server name, or a user ID with push key must be \
 						specified via arguments. See --help for more details.",
-					));
+					);
 				},
 			};
 			let queued_requests = results.collect::<Vec<_>>().await;
 			let query_time = timer.elapsed();

-			Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{queued_requests:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{queued_requests:#?}\n```"
+				))
+				.await
 		},
 		| SendingCommand::ActiveRequestsFor {
 			appservice_id,
@@ -166,20 +160,20 @@ pub(super) async fn reprocess(
 				&& user_id.is_none()
 				&& push_key.is_none()
 			{
-				return Ok(RoomMessageEventContent::text_plain(
+				return Err!(
 					"An appservice ID, server name, or a user ID with push key must be \
 					specified via arguments. See --help for more details.",
-				));
+				);
 			}

 			let timer = tokio::time::Instant::now();
 			let results = match (appservice_id, server_name, user_id, push_key) {
 				| (Some(appservice_id), None, None, None) => {
 					if appservice_id.is_empty() {
-						return Ok(RoomMessageEventContent::text_plain(
+						return Err!(
 							"An appservice ID, server name, or a user ID with push key must be \
 							specified via arguments. See --help for more details.",
-						));
+						);
 					}
 					services
@@ -190,49 +184,53 @@ pub(super) async fn reprocess(
 				| (None, Some(server_name), None, None) => services
 					.sending
 					.db
-					.active_requests_for(&Destination::Federation(server_name.into())),
+					.active_requests_for(&Destination::Federation(server_name)),
 				| (None, None, Some(user_id), Some(push_key)) => {
 					if push_key.is_empty() {
-						return Ok(RoomMessageEventContent::text_plain(
+						return Err!(
 							"An appservice ID, server name, or a user ID with push key must be \
 							specified via arguments. See --help for more details.",
-						));
+						);
 					}
 					services
 						.sending
 						.db
-						.active_requests_for(&Destination::Push(user_id.into(), push_key))
+						.active_requests_for(&Destination::Push(user_id, push_key))
 				},
 				| (Some(_), Some(_), Some(_), Some(_)) => {
-					return Ok(RoomMessageEventContent::text_plain(
+					return Err!(
 						"An appservice ID, server name, or a user ID with push key must be \
 						specified via arguments. Not all of them See --help for more details.",
-					));
+					);
 				},
 				| _ => {
-					return Ok(RoomMessageEventContent::text_plain(
+					return Err!(
 						"An appservice ID, server name, or a user ID with push key must be \
 						specified via arguments. See --help for more details.",
-					));
+					);
 				},
 			};
 			let active_requests = results.collect::<Vec<_>>().await;
 			let query_time = timer.elapsed();

-			Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{active_requests:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{active_requests:#?}\n```"
+				))
+				.await
 		},
 		| SendingCommand::GetLatestEduCount { server_name } => {
 			let timer = tokio::time::Instant::now();
 			let results = services.sending.db.get_latest_educount(&server_name).await;
 			let query_time = timer.elapsed();

-			Ok(RoomMessageEventContent::notice_markdown(format!(
-				"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
-			)))
+			context
+				.write_str(&format!(
+					"Query completed in {query_time:?}:\n\n```rs\n{results:#?}\n```"
+				))
+				.await
 		},
 	}
 }
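Alongside the output change, the clap argument types in this file move from `Box<ServerName>` to ruma's `OwnedServerName`, which can then be moved straight into `Destination::Federation(server_name)` without the `.into()` seen on the old side. A rough sketch of the idea with a hypothetical owned identifier type (the real ruma types likewise implement `FromStr`, which is what clap's default value parser needs):

```rust
use std::str::FromStr;

use clap::Parser;

// Hypothetical stand-in for ruma's OwnedServerName.
#[derive(Debug, Clone)]
struct OwnedServerName(String);

impl FromStr for OwnedServerName {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        if s.is_empty() {
            return Err("server name must not be empty".into());
        }
        Ok(Self(s.to_owned()))
    }
}

#[derive(Debug)]
enum Destination {
    Federation(OwnedServerName),
}

#[derive(Parser, Debug)]
struct Args {
    /// clap parses owned types via FromStr; no Box indirection needed.
    #[arg(short, long)]
    server_name: Option<OwnedServerName>,
}

fn main() {
    let args = Args::parse_from(["query", "--server-name", "matrix.org"]);
    if let Some(name) = args.server_name {
        // The owned value moves into the destination without .into().
        let dest = Destination::Federation(name);
        println!("{dest:?}");
    }
}
```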

View file

@@ -1,6 +1,6 @@
 use clap::Subcommand;
 use conduwuit::Result;
-use ruma::{OwnedEventId, OwnedRoomOrAliasId, events::room::message::RoomMessageEventContent};
+use ruma::{OwnedEventId, OwnedRoomOrAliasId};

 use crate::{admin_command, admin_command_dispatch};
@@ -18,10 +18,7 @@ pub(crate) enum ShortCommand {
 }

 #[admin_command]
-pub(super) async fn short_event_id(
-	&self,
-	event_id: OwnedEventId,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn short_event_id(&self, event_id: OwnedEventId) -> Result {
 	let shortid = self
 		.services
 		.rooms
@@ -29,17 +26,14 @@ pub(super) async fn short_event_id(
 		.get_shorteventid(&event_id)
 		.await?;

-	Ok(RoomMessageEventContent::notice_markdown(format!("{shortid:#?}")))
+	self.write_str(&format!("{shortid:#?}")).await
 }

 #[admin_command]
-pub(super) async fn short_room_id(
-	&self,
-	room_id: OwnedRoomOrAliasId,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn short_room_id(&self, room_id: OwnedRoomOrAliasId) -> Result {
 	let room_id = self.services.rooms.alias.resolve(&room_id).await?;
 	let shortid = self.services.rooms.short.get_shortroomid(&room_id).await?;

-	Ok(RoomMessageEventContent::notice_markdown(format!("{shortid:#?}")))
+	self.write_str(&format!("{shortid:#?}")).await
 }

View file

@@ -1,9 +1,7 @@
 use clap::Subcommand;
 use conduwuit::Result;
 use futures::stream::StreamExt;
-use ruma::{
-	OwnedDeviceId, OwnedRoomId, OwnedUserId, events::room::message::RoomMessageEventContent,
-};
+use ruma::{OwnedDeviceId, OwnedRoomId, OwnedUserId};

 use crate::{admin_command, admin_command_dispatch};
@@ -99,11 +97,7 @@ pub(crate) enum UsersCommand {
 }

 #[admin_command]
-async fn get_shared_rooms(
-	&self,
-	user_a: OwnedUserId,
-	user_b: OwnedUserId,
-) -> Result<RoomMessageEventContent> {
+async fn get_shared_rooms(&self, user_a: OwnedUserId, user_b: OwnedUserId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result: Vec<_> = self
 		.services
@@ -115,9 +109,8 @@ async fn get_shared_rooms(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
@@ -127,7 +120,7 @@ async fn get_backup_session(
 	version: String,
 	room_id: OwnedRoomId,
 	session_id: String,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self
 		.services
@@ -136,9 +129,8 @@ async fn get_backup_session(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
@@ -147,7 +139,7 @@ async fn get_room_backups(
 	user_id: OwnedUserId,
 	version: String,
 	room_id: OwnedRoomId,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self
 		.services
@@ -156,32 +148,22 @@ async fn get_room_backups(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn get_all_backups(
-	&self,
-	user_id: OwnedUserId,
-	version: String,
-) -> Result<RoomMessageEventContent> {
+async fn get_all_backups(&self, user_id: OwnedUserId, version: String) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self.services.key_backups.get_all(&user_id, &version).await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn get_backup_algorithm(
-	&self,
-	user_id: OwnedUserId,
-	version: String,
-) -> Result<RoomMessageEventContent> {
+async fn get_backup_algorithm(&self, user_id: OwnedUserId, version: String) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self
 		.services
@@ -190,16 +172,12 @@ async fn get_backup_algorithm(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn get_latest_backup_version(
-	&self,
-	user_id: OwnedUserId,
-) -> Result<RoomMessageEventContent> {
+async fn get_latest_backup_version(&self, user_id: OwnedUserId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self
 		.services
@@ -208,36 +186,33 @@ async fn get_latest_backup_version(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn get_latest_backup(&self, user_id: OwnedUserId) -> Result<RoomMessageEventContent> {
+async fn get_latest_backup(&self, user_id: OwnedUserId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self.services.key_backups.get_latest_backup(&user_id).await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn iter_users(&self) -> Result<RoomMessageEventContent> {
+async fn iter_users(&self) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result: Vec<OwnedUserId> = self.services.users.stream().map(Into::into).collect().await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn iter_users2(&self) -> Result<RoomMessageEventContent> {
+async fn iter_users2(&self) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result: Vec<_> = self.services.users.stream().collect().await;
 	let result: Vec<_> = result
@@ -248,35 +223,32 @@ async fn iter_users2(&self) -> Result<RoomMessageEventContent> {
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:?}\n```"))
+		.await
 }

 #[admin_command]
-async fn count_users(&self) -> Result<RoomMessageEventContent> {
+async fn count_users(&self) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self.services.users.count().await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn password_hash(&self, user_id: OwnedUserId) -> Result<RoomMessageEventContent> {
+async fn password_hash(&self, user_id: OwnedUserId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self.services.users.password_hash(&user_id).await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn list_devices(&self, user_id: OwnedUserId) -> Result<RoomMessageEventContent> {
+async fn list_devices(&self, user_id: OwnedUserId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let devices = self
 		.services
@@ -288,13 +260,12 @@ async fn list_devices(&self, user_id: OwnedUserId) -> Result<RoomMessageEventCon
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{devices:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{devices:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn list_devices_metadata(&self, user_id: OwnedUserId) -> Result<RoomMessageEventContent> {
+async fn list_devices_metadata(&self, user_id: OwnedUserId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let devices = self
 		.services
@@ -304,17 +275,12 @@ async fn list_devices_metadata(&self, user_id: OwnedUserId) -> Result<RoomMessag
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{devices:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{devices:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn get_device_metadata(
-	&self,
-	user_id: OwnedUserId,
-	device_id: OwnedDeviceId,
-) -> Result<RoomMessageEventContent> {
+async fn get_device_metadata(&self, user_id: OwnedUserId, device_id: OwnedDeviceId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let device = self
 		.services
@@ -323,28 +289,22 @@ async fn get_device_metadata(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{device:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{device:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn get_devices_version(&self, user_id: OwnedUserId) -> Result<RoomMessageEventContent> {
+async fn get_devices_version(&self, user_id: OwnedUserId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let device = self.services.users.get_devicelist_version(&user_id).await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{device:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{device:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn count_one_time_keys(
-	&self,
-	user_id: OwnedUserId,
-	device_id: OwnedDeviceId,
-) -> Result<RoomMessageEventContent> {
+async fn count_one_time_keys(&self, user_id: OwnedUserId, device_id: OwnedDeviceId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self
 		.services
@@ -353,17 +313,12 @@ async fn count_one_time_keys(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn get_device_keys(
-	&self,
-	user_id: OwnedUserId,
-	device_id: OwnedDeviceId,
-) -> Result<RoomMessageEventContent> {
+async fn get_device_keys(&self, user_id: OwnedUserId, device_id: OwnedDeviceId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self
 		.services
@@ -372,24 +327,22 @@ async fn get_device_keys(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn get_user_signing_key(&self, user_id: OwnedUserId) -> Result<RoomMessageEventContent> {
+async fn get_user_signing_key(&self, user_id: OwnedUserId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self.services.users.get_user_signing_key(&user_id).await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn get_master_key(&self, user_id: OwnedUserId) -> Result<RoomMessageEventContent> {
+async fn get_master_key(&self, user_id: OwnedUserId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self
 		.services
@@ -398,17 +351,12 @@ async fn get_master_key(&self, user_id: OwnedUserId) -> Result<RoomMessageEventC
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }

 #[admin_command]
-async fn get_to_device_events(
-	&self,
-	user_id: OwnedUserId,
-	device_id: OwnedDeviceId,
-) -> Result<RoomMessageEventContent> {
+async fn get_to_device_events(&self, user_id: OwnedUserId, device_id: OwnedDeviceId) -> Result {
 	let timer = tokio::time::Instant::now();
 	let result = self
 		.services
@@ -418,7 +366,6 @@ async fn get_to_device_events(
 		.await;
 	let query_time = timer.elapsed();

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"
-	)))
+	self.write_str(&format!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```"))
+		.await
 }
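Every handler in this file wraps its database call in the same stopwatch idiom, then debug-prints the result behind the elapsed time. The pattern in miniature, using std's `Instant` (tokio's `Instant` behaves the same for this purpose):

```rust
use std::time::{Duration, Instant};

// Time a query and hand back both the result and the elapsed duration,
// mirroring the `timer`/`query_time` pairs in the handlers above.
fn timed_query<T>(query: impl FnOnce() -> T) -> (T, Duration) {
    let timer = Instant::now();
    let result = query();
    (result, timer.elapsed())
}

fn main() {
    let (result, query_time) = timed_query(|| (1..=3).collect::<Vec<u32>>());
    println!("Query completed in {query_time:?}:\n\n```rs\n{result:#?}\n```");
}
```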

View file

@@ -1,13 +1,11 @@
 use std::fmt::Write;

 use clap::Subcommand;
-use conduwuit::Result;
+use conduwuit::{Err, Result};
 use futures::StreamExt;
-use ruma::{
-	OwnedRoomAliasId, OwnedRoomId, RoomId, events::room::message::RoomMessageEventContent,
-};
+use ruma::{OwnedRoomAliasId, OwnedRoomId};

-use crate::{Command, escape_html};
+use crate::Context;

 #[derive(Debug, Subcommand)]
 pub(crate) enum RoomAliasCommand {
@@ -18,7 +16,7 @@ pub(crate) enum RoomAliasCommand {
 		force: bool,

 		/// The room id to set the alias on
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,

 		/// The alias localpart to use (`alias`, not `#alias:servername.tld`)
 		room_alias_localpart: String,
@@ -40,21 +38,11 @@ pub(crate) enum RoomAliasCommand {
 	/// - List aliases currently being used
 	List {
 		/// If set, only list the aliases for this room
-		room_id: Option<Box<RoomId>>,
+		room_id: Option<OwnedRoomId>,
 	},
 }

-pub(super) async fn process(command: RoomAliasCommand, context: &Command<'_>) -> Result {
-	let c = reprocess(command, context).await?;
-	context.write_str(c.body()).await?;
-	Ok(())
-}
-
-pub(super) async fn reprocess(
-	command: RoomAliasCommand,
-	context: &Command<'_>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn process(command: RoomAliasCommand, context: &Context<'_>) -> Result {
 	let services = context.services;
 	let server_user = &services.globals.server_user;
@@ -67,9 +55,7 @@ pub(super) async fn reprocess(
 			let room_alias = match OwnedRoomAliasId::parse(room_alias_str) {
 				| Ok(alias) => alias,
 				| Err(err) => {
-					return Ok(RoomMessageEventContent::text_plain(format!(
-						"Failed to parse alias: {err}"
-					)));
+					return Err!("Failed to parse alias: {err}");
 				},
 			};
 			match command {
@@ -81,60 +67,50 @@ pub(super) async fn reprocess(
 							&room_id,
 							server_user,
 						) {
-							| Ok(()) => Ok(RoomMessageEventContent::text_plain(format!(
-								"Successfully overwrote alias (formerly {id})"
-							))),
-							| Err(err) => Ok(RoomMessageEventContent::text_plain(format!(
-								"Failed to remove alias: {err}"
-							))),
+							| Err(err) => Err!("Failed to remove alias: {err}"),
+							| Ok(()) =>
+								context
+									.write_str(&format!(
+										"Successfully overwrote alias (formerly {id})"
+									))
+									.await,
 						}
 					},
-					| (false, Ok(id)) => Ok(RoomMessageEventContent::text_plain(format!(
-						"Refusing to overwrite in use alias for {id}, use -f or --force to \
-						 overwrite"
-					))),
+					| (false, Ok(id)) => Err!(
+						"Refusing to overwrite in use alias for {id}, use -f or --force to \
+						 overwrite"
+					),
 					| (_, Err(_)) => {
 						match services.rooms.alias.set_alias(
 							&room_alias,
 							&room_id,
 							server_user,
 						) {
-							| Ok(()) => Ok(RoomMessageEventContent::text_plain(
-								"Successfully set alias",
-							)),
-							| Err(err) => Ok(RoomMessageEventContent::text_plain(format!(
-								"Failed to remove alias: {err}"
-							))),
+							| Err(err) => Err!("Failed to remove alias: {err}"),
+							| Ok(()) => context.write_str("Successfully set alias").await,
 						}
 					},
 				}
 			},
 			| RoomAliasCommand::Remove { .. } => {
 				match services.rooms.alias.resolve_local_alias(&room_alias).await {
+					| Err(_) => Err!("Alias isn't in use."),
 					| Ok(id) => match services
 						.rooms
 						.alias
 						.remove_alias(&room_alias, server_user)
 						.await
 					{
-						| Ok(()) => Ok(RoomMessageEventContent::text_plain(format!(
-							"Removed alias from {id}"
-						))),
-						| Err(err) => Ok(RoomMessageEventContent::text_plain(format!(
-							"Failed to remove alias: {err}"
-						))),
+						| Err(err) => Err!("Failed to remove alias: {err}"),
+						| Ok(()) =>
+							context.write_str(&format!("Removed alias from {id}")).await,
 					},
-					| Err(_) =>
-						Ok(RoomMessageEventContent::text_plain("Alias isn't in use.")),
 				}
 			},
 			| RoomAliasCommand::Which { .. } => {
 				match services.rooms.alias.resolve_local_alias(&room_alias).await {
-					| Ok(id) => Ok(RoomMessageEventContent::text_plain(format!(
-						"Alias resolves to {id}"
-					))),
-					| Err(_) =>
-						Ok(RoomMessageEventContent::text_plain("Alias isn't in use.")),
+					| Err(_) => Err!("Alias isn't in use."),
+					| Ok(id) => context.write_str(&format!("Alias resolves to {id}")).await,
 				}
 			},
 			| RoomAliasCommand::List { .. } => unreachable!(),
@@ -156,15 +132,8 @@ pub(super) async fn reprocess(
 					output
 				});

-				let html_list = aliases.iter().fold(String::new(), |mut output, alias| {
-					writeln!(output, "<li>{}</li>", escape_html(alias.as_ref()))
-						.expect("should be able to write to string buffer");
-					output
-				});
-
 				let plain = format!("Aliases for {room_id}:\n{plain_list}");
-				let html = format!("Aliases for {room_id}:\n<ul>{html_list}</ul>");
-				Ok(RoomMessageEventContent::text_html(plain, html))
+				context.write_str(&plain).await
 			} else {
 				let aliases = services
 					.rooms
@@ -183,23 +152,8 @@ pub(super) async fn reprocess(
 					output
 				});

-				let html_list = aliases
-					.iter()
-					.fold(String::new(), |mut output, (alias, id)| {
-						writeln!(
-							output,
-							"<li><code>{}</code> -> #{}:{}</li>",
-							escape_html(alias.as_ref()),
-							escape_html(id),
-							server_name
-						)
-						.expect("should be able to write to string buffer");
-						output
-					});
-
 				let plain = format!("Aliases:\n{plain_list}");
-				let html = format!("Aliases:\n<ul>{html_list}</ul>");
-				Ok(RoomMessageEventContent::text_html(plain, html))
+				context.write_str(&plain).await
 			},
 		}
 	}
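The alias commands show the other half of the refactor: failures that used to come back as successful `text_plain` notices now go through conduwuit's `Err!` macro. A simplified stand-in for how a formatting error macro like that behaves, assuming a plain `String` error (the real macro builds a `conduwuit::Error`):

```rust
// Hedged sketch: a simplified stand-in for conduwuit's Err! macro.
// It formats its arguments and returns early with Err(...).
macro_rules! err {
    ($($arg:tt)*) => {
        return Err(format!($($arg)*))
    };
}

fn resolve_alias(alias: &str) -> Result<String, String> {
    if !alias.starts_with('#') {
        // Before: return Ok(text_plain(format!("Failed to parse alias: ...")))
        // After: a real error the command dispatcher can surface uniformly.
        err!("Failed to parse alias: {alias}");
    }
    Ok(format!("!resolved:{}", alias.trim_start_matches('#')))
}

fn main() {
    assert!(resolve_alias("bad").is_err());
    assert!(resolve_alias("#room:example.com").is_ok());
}
```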

View file

@@ -1,6 +1,6 @@
-use conduwuit::Result;
+use conduwuit::{Err, Result};
 use futures::StreamExt;
-use ruma::{OwnedRoomId, events::room::message::RoomMessageEventContent};
+use ruma::OwnedRoomId;

 use crate::{PAGE_SIZE, admin_command, get_room_info};
@@ -11,7 +11,7 @@ pub(super) async fn list_rooms(
 	exclude_disabled: bool,
 	exclude_banned: bool,
 	no_details: bool,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	// TODO: i know there's a way to do this with clap, but i can't seem to find it
 	let page = page.unwrap_or(1);
 	let mut rooms = self
@@ -41,29 +41,28 @@ pub(super) async fn list_rooms(
 		.collect::<Vec<_>>();

 	if rooms.is_empty() {
-		return Ok(RoomMessageEventContent::text_plain("No more rooms."));
+		return Err!("No more rooms.");
 	}

-	let output_plain = format!(
-		"Rooms ({}):\n```\n{}\n```",
-		rooms.len(),
-		rooms
-			.iter()
-			.map(|(id, members, name)| if no_details {
-				format!("{id}")
-			} else {
-				format!("{id}\tMembers: {members}\tName: {name}")
-			})
-			.collect::<Vec<_>>()
-			.join("\n")
-	);
-
-	Ok(RoomMessageEventContent::notice_markdown(output_plain))
+	let body = rooms
+		.iter()
+		.map(|(id, members, name)| {
+			if no_details {
+				format!("{id}")
+			} else {
+				format!("{id}\tMembers: {members}\tName: {name}")
+			}
+		})
+		.collect::<Vec<_>>()
+		.join("\n");

+	self.write_str(&format!("Rooms ({}):\n```\n{body}\n```", rooms.len(),))
+		.await
 }

 #[admin_command]
-pub(super) async fn exists(&self, room_id: OwnedRoomId) -> Result<RoomMessageEventContent> {
+pub(super) async fn exists(&self, room_id: OwnedRoomId) -> Result {
 	let result = self.services.rooms.metadata.exists(&room_id).await;

-	Ok(RoomMessageEventContent::notice_markdown(format!("{result}")))
+	self.write_str(&format!("{result}")).await
 }
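`list_rooms` pages its results with a 1-based `page` argument against the crate's `PAGE_SIZE` constant, returning the "No more rooms." error once a page is empty. The arithmetic in miniature, with an assumed `PAGE_SIZE` of 100 (the real constant lives in the admin crate and may differ):

```rust
// Assumed value for illustration only.
const PAGE_SIZE: usize = 100;

// 1-based page number -> the slice of items shown on that page.
fn page_slice<T>(items: &[T], page: usize) -> &[T] {
    let start = page.saturating_sub(1).saturating_mul(PAGE_SIZE);
    let end = start.saturating_add(PAGE_SIZE).min(items.len());
    items.get(start..end).unwrap_or(&[])
}

fn main() {
    let rooms: Vec<u32> = (0..250).collect();
    assert_eq!(page_slice(&rooms, 1).len(), 100);
    assert_eq!(page_slice(&rooms, 3).len(), 50);
    assert!(page_slice(&rooms, 4).is_empty()); // -> "No more rooms."
}
```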

View file

@@ -1,22 +1,22 @@
 use clap::Subcommand;
-use conduwuit::Result;
+use conduwuit::{Err, Result};
 use futures::StreamExt;
-use ruma::{RoomId, events::room::message::RoomMessageEventContent};
+use ruma::OwnedRoomId;

-use crate::{Command, PAGE_SIZE, get_room_info};
+use crate::{Context, PAGE_SIZE, get_room_info};

 #[derive(Debug, Subcommand)]
 pub(crate) enum RoomDirectoryCommand {
 	/// - Publish a room to the room directory
 	Publish {
 		/// The room id of the room to publish
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	/// - Unpublish a room to the room directory
 	Unpublish {
 		/// The room id of the room to unpublish
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	/// - List rooms that are published
@@ -25,25 +25,16 @@ pub(crate) enum RoomDirectoryCommand {
 	},
 }

-pub(super) async fn process(command: RoomDirectoryCommand, context: &Command<'_>) -> Result {
-	let c = reprocess(command, context).await?;
-	context.write_str(c.body()).await?;
-	Ok(())
-}
-
-pub(super) async fn reprocess(
-	command: RoomDirectoryCommand,
-	context: &Command<'_>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn process(command: RoomDirectoryCommand, context: &Context<'_>) -> Result {
 	let services = context.services;
 	match command {
 		| RoomDirectoryCommand::Publish { room_id } => {
 			services.rooms.directory.set_public(&room_id);
-			Ok(RoomMessageEventContent::notice_plain("Room published"))
+			context.write_str("Room published").await
 		},
 		| RoomDirectoryCommand::Unpublish { room_id } => {
 			services.rooms.directory.set_not_public(&room_id);
-			Ok(RoomMessageEventContent::notice_plain("Room unpublished"))
+			context.write_str("Room unpublished").await
 		},
 		| RoomDirectoryCommand::List { page } => {
 			// TODO: i know there's a way to do this with clap, but i can't seem to find it
@@ -66,20 +57,18 @@ pub(super) async fn reprocess(
 				.collect();

 			if rooms.is_empty() {
-				return Ok(RoomMessageEventContent::text_plain("No more rooms."));
+				return Err!("No more rooms.");
 			}

-			let output = format!(
-				"Rooms (page {page}):\n```\n{}\n```",
-				rooms
-					.iter()
-					.map(|(id, members, name)| format!(
-						"{id} | Members: {members} | Name: {name}"
-					))
-					.collect::<Vec<_>>()
-					.join("\n")
-			);
-			Ok(RoomMessageEventContent::text_markdown(output))
+			let body = rooms
+				.iter()
+				.map(|(id, members, name)| format!("{id} | Members: {members} | Name: {name}"))
+				.collect::<Vec<_>>()
+				.join("\n");
+
+			context
+				.write_str(&format!("Rooms (page {page}):\n```\n{body}\n```",))
+				.await
 		},
 	}
 }

View file

@@ -1,7 +1,7 @@
 use clap::Subcommand;
-use conduwuit::{Result, utils::ReadyExt};
+use conduwuit::{Err, Result, utils::ReadyExt};
 use futures::StreamExt;
-use ruma::{RoomId, events::room::message::RoomMessageEventContent};
+use ruma::OwnedRoomId;

 use crate::{admin_command, admin_command_dispatch};
@@ -10,7 +10,7 @@ use crate::{admin_command, admin_command_dispatch};
 pub(crate) enum RoomInfoCommand {
 	/// - List joined members in a room
 	ListJoinedMembers {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,

 		/// Lists only our local users in the specified room
 		#[arg(long)]
@@ -22,16 +22,12 @@ pub(crate) enum RoomInfoCommand {
 	/// Room topics can be huge, so this is in its
 	/// own separate command
 	ViewRoomTopic {
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},
 }

 #[admin_command]
-async fn list_joined_members(
-	&self,
-	room_id: Box<RoomId>,
-	local_only: bool,
-) -> Result<RoomMessageEventContent> {
+async fn list_joined_members(&self, room_id: OwnedRoomId, local_only: bool) -> Result {
 	let room_name = self
 		.services
 		.rooms
@@ -64,22 +60,19 @@ async fn list_joined_members(
 		.collect()
 		.await;

-	let output_plain = format!(
-		"{} Members in Room \"{}\":\n```\n{}\n```",
-		member_info.len(),
-		room_name,
-		member_info
-			.into_iter()
-			.map(|(displayname, mxid)| format!("{mxid} | {displayname}"))
-			.collect::<Vec<_>>()
-			.join("\n")
-	);
-
-	Ok(RoomMessageEventContent::notice_markdown(output_plain))
+	let num = member_info.len();
+	let body = member_info
+		.into_iter()
+		.map(|(displayname, mxid)| format!("{mxid} | {displayname}"))
+		.collect::<Vec<_>>()
+		.join("\n");
+
+	self.write_str(&format!("{num} Members in Room \"{room_name}\":\n```\n{body}\n```",))
+		.await
 }

 #[admin_command]
-async fn view_room_topic(&self, room_id: Box<RoomId>) -> Result<RoomMessageEventContent> {
+async fn view_room_topic(&self, room_id: OwnedRoomId) -> Result {
 	let Ok(room_topic) = self
 		.services
 		.rooms
@@ -87,10 +80,9 @@ async fn view_room_topic(&self, room_id: Box<RoomId>) -> Result<RoomMessageEvent
 		.get_room_topic(&room_id)
 		.await
 	else {
-		return Ok(RoomMessageEventContent::text_plain("Room does not have a room topic set."));
+		return Err!("Room does not have a room topic set.");
 	};

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"Room topic:\n```\n{room_topic}\n```"
-	)))
+	self.write_str(&format!("Room topic:\n```\n{room_topic}\n```"))
+		.await
 }

View file

@@ -1,15 +1,12 @@
 use api::client::leave_room;
 use clap::Subcommand;
 use conduwuit::{
-	Result, debug,
+	Err, Result, debug,
 	utils::{IterStream, ReadyExt},
 	warn,
 };
 use futures::StreamExt;
-use ruma::{
-	OwnedRoomId, RoomAliasId, RoomId, RoomOrAliasId,
-	events::room::message::RoomMessageEventContent,
-};
+use ruma::{OwnedRoomId, OwnedRoomOrAliasId, RoomAliasId, RoomId, RoomOrAliasId};

 use crate::{admin_command, admin_command_dispatch, get_room_info};
@@ -24,7 +21,7 @@ pub(crate) enum RoomModerationCommand {
 	BanRoom {
 		/// The room in the format of `!roomid:example.com` or a room alias in
 		/// the format of `#roomalias:example.com`
-		room: Box<RoomOrAliasId>,
+		room: OwnedRoomOrAliasId,
 	},

 	/// - Bans a list of rooms (room IDs and room aliases) from a newline
@@ -36,7 +33,7 @@ pub(crate) enum RoomModerationCommand {
 	UnbanRoom {
 		/// The room in the format of `!roomid:example.com` or a room alias in
 		/// the format of `#roomalias:example.com`
-		room: Box<RoomOrAliasId>,
+		room: OwnedRoomOrAliasId,
 	},

 	/// - List of all rooms we have banned
@@ -49,14 +46,14 @@ pub(crate) enum RoomModerationCommand {
 }

 #[admin_command]
-async fn ban_room(&self, room: Box<RoomOrAliasId>) -> Result<RoomMessageEventContent> {
+async fn ban_room(&self, room: OwnedRoomOrAliasId) -> Result {
 	debug!("Got room alias or ID: {}", room);

 	let admin_room_alias = &self.services.globals.admin_alias;

 	if let Ok(admin_room_id) = self.services.admin.get_admin_room().await {
 		if room.to_string().eq(&admin_room_id) || room.to_string().eq(admin_room_alias) {
-			return Ok(RoomMessageEventContent::text_plain("Not allowed to ban the admin room."));
+			return Err!("Not allowed to ban the admin room.");
 		}
 	}
@@ -64,11 +61,11 @@ async fn ban_room(&self, room: Box<RoomOrAliasId>) -> Result<RoomMessageEventCon
 		let room_id = match RoomId::parse(&room) {
 			| Ok(room_id) => room_id,
 			| Err(e) => {
-				return Ok(RoomMessageEventContent::text_plain(format!(
+				return Err!(
 					"Failed to parse room ID {room}. Please note that this requires a full room \
 					 ID (`!awIh6gGInaS5wLQJwa:example.com`) or a room alias \
 					 (`#roomalias:example.com`): {e}"
-				)));
+				);
 			},
 		};
@@ -80,11 +77,11 @@ async fn ban_room(&self, room: Box<RoomOrAliasId>) -> Result<RoomMessageEventCon
 		let room_alias = match RoomAliasId::parse(&room) {
 			| Ok(room_alias) => room_alias,
 			| Err(e) => {
-				return Ok(RoomMessageEventContent::text_plain(format!(
+				return Err!(
 					"Failed to parse room ID {room}. Please note that this requires a full room \
 					 ID (`!awIh6gGInaS5wLQJwa:example.com`) or a room alias \
 					 (`#roomalias:example.com`): {e}"
-				)));
+				);
 			},
 		};
@@ -123,9 +120,9 @@ async fn ban_room(&self, room: Box<RoomOrAliasId>) -> Result<RoomMessageEventCon
 					room_id
 				},
 				| Err(e) => {
-					return Ok(RoomMessageEventContent::notice_plain(format!(
-						"Failed to resolve room alias {room_alias} to a room ID: {e}"
-					)));
+					return Err!(
+						"Failed to resolve room alias {room_alias} to a room ID: {e}"
+					);
 				},
 			}
 		},
@@ -135,11 +132,11 @@ async fn ban_room(&self, room: Box<RoomOrAliasId>) -> Result<RoomMessageEventCon
 		room_id
 	} else {
-		return Ok(RoomMessageEventContent::text_plain(
+		return Err!(
 			"Room specified is not a room ID or room alias. Please note that this requires a \
 			 full room ID (`!awIh6gGInaS5wLQJwa:example.com`) or a room alias \
 			 (`#roomalias:example.com`)",
-		));
+		);
 	};

 	debug!("Making all users leave the room {room_id} and forgetting it");
@@ -185,20 +182,19 @@ async fn ban_room(&self, room: Box<RoomOrAliasId>) -> Result<RoomMessageEventCon
 	self.services.rooms.metadata.disable_room(&room_id, true);

-	Ok(RoomMessageEventContent::text_plain(
-		"Room banned, removed all our local users, and disabled incoming federation with room.",
-	))
+	self.write_str(
+		"Room banned, removed all our local users, and disabled incoming federation with room.",
+	)
+	.await
 }

 #[admin_command]
-async fn ban_list_of_rooms(&self) -> Result<RoomMessageEventContent> {
+async fn ban_list_of_rooms(&self) -> Result {
 	if self.body.len() < 2
 		|| !self.body[0].trim().starts_with("```")
 		|| self.body.last().unwrap_or(&"").trim() != "```"
 	{
-		return Ok(RoomMessageEventContent::text_plain(
-			"Expected code block in command body. Add --help for details.",
-		));
+		return Err!("Expected code block in command body. Add --help for details.",);
 	}

 	let rooms_s = self
@@ -356,23 +352,24 @@ async fn ban_list_of_rooms(&self) -> Result<RoomMessageEventContent> {
 		self.services.rooms.metadata.disable_room(&room_id, true);
 	}

-	Ok(RoomMessageEventContent::text_plain(format!(
-		"Finished bulk room ban, banned {room_ban_count} total rooms, evicted all users, and \
-		 disabled incoming federation with the room."
-	)))
+	self.write_str(&format!(
+		"Finished bulk room ban, banned {room_ban_count} total rooms, evicted all users, and \
+		 disabled incoming federation with the room."
+	))
+	.await
 }

 #[admin_command]
-async fn unban_room(&self, room: Box<RoomOrAliasId>) -> Result<RoomMessageEventContent> {
+async fn unban_room(&self, room: OwnedRoomOrAliasId) -> Result {
 	let room_id = if room.is_room_id() {
 		let room_id = match RoomId::parse(&room) {
 			| Ok(room_id) => room_id,
 			| Err(e) => {
-				return Ok(RoomMessageEventContent::text_plain(format!(
+				return Err!(
 					"Failed to parse room ID {room}. Please note that this requires a full room \
 					 ID (`!awIh6gGInaS5wLQJwa:example.com`) or a room alias \
 					 (`#roomalias:example.com`): {e}"
-				)));
+				);
 			},
 		};
@@ -384,11 +381,11 @@ async fn unban_room(&self, room: Box<RoomOrAliasId>) -> Result<RoomMessageEventC
 		let room_alias = match RoomAliasId::parse(&room) {
 			| Ok(room_alias) => room_alias,
 			| Err(e) => {
-				return Ok(RoomMessageEventContent::text_plain(format!(
+				return Err!(
 					"Failed to parse room ID {room}. Please note that this requires a full room \
 					 ID (`!awIh6gGInaS5wLQJwa:example.com`) or a room alias \
 					 (`#roomalias:example.com`): {e}"
-				)));
+				);
 			},
 		};
@@ -427,9 +424,7 @@ async fn unban_room(&self, room: Box<RoomOrAliasId>) -> Result<RoomMessageEventC
 					room_id
 				},
 				| Err(e) => {
-					return Ok(RoomMessageEventContent::text_plain(format!(
-						"Failed to resolve room alias {room} to a room ID: {e}"
-					)));
+					return Err!("Failed to resolve room alias {room} to a room ID: {e}");
 				},
 			}
 		},
@@ -439,19 +434,20 @@ async fn unban_room(&self, room: Box<RoomOrAliasId>) -> Result<RoomMessageEventC
 		room_id
 	} else {
-		return Ok(RoomMessageEventContent::text_plain(
+		return Err!(
 			"Room specified is not a room ID or room alias. Please note that this requires a \
 			 full room ID (`!awIh6gGInaS5wLQJwa:example.com`) or a room alias \
 			 (`#roomalias:example.com`)",
-		));
+		);
 	};

 	self.services.rooms.metadata.disable_room(&room_id, false);

-	Ok(RoomMessageEventContent::text_plain("Room unbanned and federation re-enabled."))
+	self.write_str("Room unbanned and federation re-enabled.")
+		.await
 }

 #[admin_command]
-async fn list_banned_rooms(&self, no_details: bool) -> Result<RoomMessageEventContent> {
+async fn list_banned_rooms(&self, no_details: bool) -> Result {
 	let room_ids: Vec<OwnedRoomId> = self
 		.services
 		.rooms
@@ -462,7 +458,7 @@ async fn list_banned_rooms(&self, no_details: bool) -> Result<RoomMessageEventCo
 		.await;

 	if room_ids.is_empty() {
-		return Ok(RoomMessageEventContent::text_plain("No rooms are banned."));
+		return Err!("No rooms are banned.");
 	}

 	let mut rooms = room_ids
@@ -475,19 +471,20 @@ async fn list_banned_rooms(&self, no_details: bool) -> Result<RoomMessageEventCo
 	rooms.sort_by_key(|r| r.1);
 	rooms.reverse();

-	let output_plain = format!(
-		"Rooms Banned ({}):\n```\n{}\n```",
-		rooms.len(),
-		rooms
-			.iter()
-			.map(|(id, members, name)| if no_details {
-				format!("{id}")
-			} else {
-				format!("{id}\tMembers: {members}\tName: {name}")
-			})
-			.collect::<Vec<_>>()
-			.join("\n")
-	);
-
-	Ok(RoomMessageEventContent::notice_markdown(output_plain))
+	let num = rooms.len();
+
+	let body = rooms
+		.iter()
+		.map(|(id, members, name)| {
+			if no_details {
+				format!("{id}")
+			} else {
+				format!("{id}\tMembers: {members}\tName: {name}")
+			}
+		})
+		.collect::<Vec<_>>()
+		.join("\n");
+
+	self.write_str(&format!("Rooms Banned ({num}):\n```\n{body}\n```",))
+		.await
 }
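`ban_list_of_rooms` expects the room list inside a fenced code block in the command body, which is what the `starts_with("```")` guard above checks. A sketch of that extraction, assuming the body arrives as one slice element per line (the real parsing differs in detail):

```rust
// Extract the lines between the opening and closing ``` fences;
// mirrors the guard in ban_list_of_rooms (body[0] opens the fence,
// the last line closes it).
fn rooms_from_body<'a>(body: &'a [&'a str]) -> Result<Vec<&'a str>, String> {
    if body.len() < 2
        || !body[0].trim().starts_with("```")
        || body.last().unwrap_or(&"").trim() != "```"
    {
        return Err("Expected code block in command body. Add --help for details.".into());
    }
    Ok(body[1..body.len() - 1]
        .iter()
        .map(|s| s.trim())
        .filter(|s| !s.is_empty())
        .collect())
}

fn main() {
    let body = ["```", "!a:example.com", "#alias:example.com", "```"];
    assert_eq!(rooms_from_body(&body).unwrap().len(), 2);
}
```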

View file

@ -1,12 +1,16 @@
use std::{fmt::Write, path::PathBuf, sync::Arc}; use std::{fmt::Write, path::PathBuf, sync::Arc};
use conduwuit::{Err, Result, info, utils::time, warn}; use conduwuit::{
use ruma::events::room::message::RoomMessageEventContent; Err, Result, info,
utils::{stream::IterStream, time},
warn,
};
use futures::TryStreamExt;
use crate::admin_command; use crate::admin_command;
#[admin_command] #[admin_command]
pub(super) async fn uptime(&self) -> Result<RoomMessageEventContent> { pub(super) async fn uptime(&self) -> Result {
let elapsed = self let elapsed = self
.services .services
.server .server
@ -15,47 +19,36 @@ pub(super) async fn uptime(&self) -> Result<RoomMessageEventContent> {
.expect("standard duration"); .expect("standard duration");
let result = time::pretty(elapsed); let result = time::pretty(elapsed);
Ok(RoomMessageEventContent::notice_plain(format!("{result}."))) self.write_str(&format!("{result}.")).await
} }
#[admin_command] #[admin_command]
pub(super) async fn show_config(&self) -> Result<RoomMessageEventContent> { pub(super) async fn show_config(&self) -> Result {
// Construct and send the response self.write_str(&format!("{}", *self.services.server.config))
Ok(RoomMessageEventContent::text_markdown(format!( .await
"{}",
*self.services.server.config
)))
} }
#[admin_command] #[admin_command]
pub(super) async fn reload_config( pub(super) async fn reload_config(&self, path: Option<PathBuf>) -> Result {
&self,
path: Option<PathBuf>,
) -> Result<RoomMessageEventContent> {
let path = path.as_deref().into_iter(); let path = path.as_deref().into_iter();
self.services.config.reload(path)?; self.services.config.reload(path)?;
Ok(RoomMessageEventContent::text_plain("Successfully reconfigured.")) self.write_str("Successfully reconfigured.").await
} }
#[admin_command] #[admin_command]
pub(super) async fn list_features( pub(super) async fn list_features(&self, available: bool, enabled: bool, comma: bool) -> Result {
&self,
available: bool,
enabled: bool,
comma: bool,
) -> Result<RoomMessageEventContent> {
let delim = if comma { "," } else { " " }; let delim = if comma { "," } else { " " };
if enabled && !available { if enabled && !available {
let features = info::rustc::features().join(delim); let features = info::rustc::features().join(delim);
let out = format!("`\n{features}\n`"); let out = format!("`\n{features}\n`");
return Ok(RoomMessageEventContent::text_markdown(out)); return self.write_str(&out).await;
} }
if available && !enabled { if available && !enabled {
let features = info::cargo::features().join(delim); let features = info::cargo::features().join(delim);
let out = format!("`\n{features}\n`"); let out = format!("`\n{features}\n`");
return Ok(RoomMessageEventContent::text_markdown(out)); return self.write_str(&out).await;
} }
let mut features = String::new(); let mut features = String::new();
@ -68,77 +61,76 @@ pub(super) async fn list_features(
writeln!(features, "{emoji} {feature} {remark}")?; writeln!(features, "{emoji} {feature} {remark}")?;
} }
Ok(RoomMessageEventContent::text_markdown(features)) self.write_str(&features).await
} }
#[admin_command] #[admin_command]
pub(super) async fn memory_usage(&self) -> Result<RoomMessageEventContent> { pub(super) async fn memory_usage(&self) -> Result {
let services_usage = self.services.memory_usage().await?; let services_usage = self.services.memory_usage().await?;
let database_usage = self.services.db.db.memory_usage()?; let database_usage = self.services.db.db.memory_usage()?;
let allocator_usage = let allocator_usage =
conduwuit::alloc::memory_usage().map_or(String::new(), |s| format!("\nAllocator:\n{s}")); conduwuit::alloc::memory_usage().map_or(String::new(), |s| format!("\nAllocator:\n{s}"));
Ok(RoomMessageEventContent::text_plain(format!( self.write_str(&format!(
"Services:\n{services_usage}\nDatabase:\n{database_usage}{allocator_usage}", "Services:\n{services_usage}\nDatabase:\n{database_usage}{allocator_usage}",
))) ))
.await
} }
#[admin_command] #[admin_command]
pub(super) async fn clear_caches(&self) -> Result<RoomMessageEventContent> { pub(super) async fn clear_caches(&self) -> Result {
self.services.clear_cache().await; self.services.clear_cache().await;
Ok(RoomMessageEventContent::text_plain("Done.")) self.write_str("Done.").await
} }
#[admin_command] #[admin_command]
pub(super) async fn list_backups(&self) -> Result<RoomMessageEventContent> { pub(super) async fn list_backups(&self) -> Result {
let result = self.services.db.db.backup_list()?; self.services
.db
if result.is_empty() { .db
Ok(RoomMessageEventContent::text_plain("No backups found.")) .backup_list()?
} else { .try_stream()
Ok(RoomMessageEventContent::text_plain(result)) .try_for_each(|result| write!(self, "{result}"))
} .await
} }
 #[admin_command]
-pub(super) async fn backup_database(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn backup_database(&self) -> Result {
 	let db = Arc::clone(&self.services.db);
-	let mut result = self
+	let result = self
 		.services
 		.server
 		.runtime()
 		.spawn_blocking(move || match db.db.backup() {
-			| Ok(()) => String::new(),
-			| Err(e) => e.to_string(),
+			| Ok(()) => "Done".to_owned(),
+			| Err(e) => format!("Failed: {e}"),
 		})
 		.await?;

-	if result.is_empty() {
-		result = self.services.db.db.backup_list()?;
-	}
-
-	Ok(RoomMessageEventContent::notice_markdown(result))
+	let count = self.services.db.db.backup_count()?;
+	self.write_str(&format!("{result}. Currently have {count} backups."))
+		.await
 }

 #[admin_command]
-pub(super) async fn admin_notice(&self, message: Vec<String>) -> Result<RoomMessageEventContent> {
+pub(super) async fn admin_notice(&self, message: Vec<String>) -> Result {
 	let message = message.join(" ");
 	self.services.admin.send_text(&message).await;
-	Ok(RoomMessageEventContent::notice_plain("Notice was sent to #admins"))
+	self.write_str("Notice was sent to #admins").await
 }

 #[admin_command]
-pub(super) async fn reload_mods(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn reload_mods(&self) -> Result {
 	self.services.server.reload()?;
-	Ok(RoomMessageEventContent::notice_plain("Reloading server..."))
+	self.write_str("Reloading server...").await
 }

 #[admin_command]
 #[cfg(unix)]
-pub(super) async fn restart(&self, force: bool) -> Result<RoomMessageEventContent> {
+pub(super) async fn restart(&self, force: bool) -> Result {
 	use conduwuit::utils::sys::current_exe_deleted;

 	if !force && current_exe_deleted() {
@@ -150,13 +142,13 @@ pub(super) async fn restart(&self, force: bool) -> Result<RoomMessageEventConten
 	self.services.server.restart()?;
-	Ok(RoomMessageEventContent::notice_plain("Restarting server..."))
+	self.write_str("Restarting server...").await
 }

 #[admin_command]
-pub(super) async fn shutdown(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn shutdown(&self) -> Result {
 	warn!("shutdown command");
 	self.services.server.shutdown()?;
-	Ok(RoomMessageEventContent::notice_plain("Shutting down server..."))
+	self.write_str("Shutting down server...").await
 }
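The common thread in this file: every handler drops the Result<RoomMessageEventContent> return in favour of a bare Result and streams its output through the command context's write_str. A minimal model of that pattern, with a plain String buffer standing in for the real async sink (this Context type is an assumption, not the actual admin-command plumbing):

    use std::fmt::Write as _;

    // Assumed, simplified stand-in for the admin-command context: output
    // accumulates in a buffer and is delivered to the admin room afterwards.
    struct Context {
        out: String,
    }

    impl Context {
        // Mirrors the write_str used throughout the diff; async and fallible
        // in the real codebase, synchronous here to stay self-contained.
        fn write_str(&mut self, s: &str) -> std::fmt::Result {
            self.out.write_str(s)
        }
    }

    fn clear_caches(ctx: &mut Context) -> std::fmt::Result {
        // ... the actual cache clearing would happen here ...
        ctx.write_str("Done.") // success text is streamed, not returned
    }

    fn main() -> std::fmt::Result {
        let mut ctx = Context { out: String::new() };
        clear_caches(&mut ctx)?;
        assert_eq!(ctx.out, "Done.");
        Ok(())
    }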

View file

@@ -2,7 +2,7 @@ use std::{collections::BTreeMap, fmt::Write as _};

 use api::client::{full_user_deactivate, join_room_by_id_helper, leave_room};
 use conduwuit::{
-	Result, debug, debug_warn, error, info, is_equal_to,
+	Err, Result, debug, debug_warn, error, info, is_equal_to,
 	matrix::pdu::PduBuilder,
 	utils::{self, ReadyExt},
 	warn,
@@ -10,11 +10,10 @@ use conduwuit::{
 use conduwuit_api::client::{leave_all_rooms, update_avatar_url, update_displayname};
 use futures::StreamExt;
 use ruma::{
-	EventId, OwnedRoomId, OwnedRoomOrAliasId, OwnedUserId, RoomId, UserId,
+	OwnedEventId, OwnedRoomId, OwnedRoomOrAliasId, OwnedUserId, UserId,
 	events::{
 		RoomAccountDataEventType, StateEventType,
 		room::{
-			message::RoomMessageEventContent,
 			power_levels::{RoomPowerLevels, RoomPowerLevelsEventContent},
 			redaction::RoomRedactionEventContent,
 		},
@@ -31,7 +30,7 @@ const AUTO_GEN_PASSWORD_LENGTH: usize = 25;
 const BULK_JOIN_REASON: &str = "Bulk force joining this room as initiated by the server admin.";

 #[admin_command]
-pub(super) async fn list_users(&self) -> Result<RoomMessageEventContent> {
+pub(super) async fn list_users(&self) -> Result {
 	let users: Vec<_> = self
 		.services
 		.users
@@ -44,30 +43,22 @@ pub(super) async fn list_users(&self) -> Result<RoomMessageEventContent> {
 	plain_msg += users.join("\n").as_str();
 	plain_msg += "\n```";

-	self.write_str(plain_msg.as_str()).await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+	self.write_str(&plain_msg).await
 }

 #[admin_command]
-pub(super) async fn create_user(
-	&self,
-	username: String,
-	password: Option<String>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn create_user(&self, username: String, password: Option<String>) -> Result {
 	// Validate user id
 	let user_id = parse_local_user_id(self.services, &username)?;

 	if let Err(e) = user_id.validate_strict() {
 		if self.services.config.emergency_password.is_none() {
-			return Ok(RoomMessageEventContent::text_plain(format!(
-				"Username {user_id} contains disallowed characters or spaces: {e}"
-			)));
+			return Err!("Username {user_id} contains disallowed characters or spaces: {e}");
 		}
 	}

 	if self.services.users.exists(&user_id).await {
-		return Ok(RoomMessageEventContent::text_plain(format!("User {user_id} already exists")));
+		return Err!("User {user_id} already exists");
 	}

 	let password = password.unwrap_or_else(|| utils::random_string(AUTO_GEN_PASSWORD_LENGTH));
@@ -89,8 +80,7 @@ pub(super) async fn create_user(
 		.new_user_displayname_suffix
 		.is_empty()
 	{
-		write!(displayname, " {}", self.services.server.config.new_user_displayname_suffix)
-			.expect("should be able to write to string buffer");
+		write!(displayname, " {}", self.services.server.config.new_user_displayname_suffix)?;
 	}

 	self.services
@@ -110,15 +100,17 @@ pub(super) async fn create_user(
 				content: ruma::events::push_rules::PushRulesEventContent {
 					global: ruma::push::Ruleset::server_default(&user_id),
 				},
-			})
-			.expect("to json value always works"),
+			})?,
 		)
 		.await?;

 	if !self.services.server.config.auto_join_rooms.is_empty() {
 		for room in &self.services.server.config.auto_join_rooms {
 			let Ok(room_id) = self.services.rooms.alias.resolve(room).await else {
-				error!(%user_id, "Failed to resolve room alias to room ID when attempting to auto join {room}, skipping");
+				error!(
+					%user_id,
+					"Failed to resolve room alias to room ID when attempting to auto join {room}, skipping"
+				);
 				continue;
 			};
@@ -154,18 +146,17 @@ pub(super) async fn create_user(
 					info!("Automatically joined room {room} for user {user_id}");
 				},
 				| Err(e) => {
-					self.services
-						.admin
-						.send_message(RoomMessageEventContent::text_plain(format!(
-							"Failed to automatically join room {room} for user {user_id}: \
-							 {e}"
-						)))
-						.await
-						.ok();
 					// don't return this error so we don't fail registrations
 					error!(
 						"Failed to automatically join room {room} for user {user_id}: {e}"
 					);
+					self.services
+						.admin
+						.send_text(&format!(
+							"Failed to automatically join room {room} for user {user_id}: \
+							 {e}"
+						))
+						.await;
 				},
 			}
 		}
@@ -192,25 +183,18 @@ pub(super) async fn create_user(
 		debug!("create_user admin command called without an admin room being available");
 	}

-	Ok(RoomMessageEventContent::text_plain(format!(
-		"Created user with user_id: {user_id} and password: `{password}`"
-	)))
+	self.write_str(&format!("Created user with user_id: {user_id} and password: `{password}`"))
		.await
 }
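Error paths now early-return through conduwuit's Err! macro rather than wrapping error text in an Ok(...) event, so callers see a real failure. A stand-in macro with the same format-and-return ergonomics (the real Err! constructs conduwuit's Error type, not a String):

    // Hypothetical stand-in: the real Err! builds conduwuit's Error, not a String.
    macro_rules! err_fmt {
        ($($arg:tt)*) => {
            Err(format!($($arg)*))
        };
    }

    fn create_user(user_id: &str, exists: bool) -> Result<(), String> {
        if exists {
            return err_fmt!("User {user_id} already exists");
        }
        Ok(())
    }

    fn main() {
        let err = create_user("@a:example.org", true).unwrap_err();
        assert_eq!(err, "User @a:example.org already exists");
    }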
 #[admin_command]
-pub(super) async fn deactivate(
-	&self,
-	no_leave_rooms: bool,
-	user_id: String,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn deactivate(&self, no_leave_rooms: bool, user_id: String) -> Result {
 	// Validate user id
 	let user_id = parse_local_user_id(self.services, &user_id)?;

 	// don't deactivate the server service account
 	if user_id == self.services.globals.server_user {
-		return Ok(RoomMessageEventContent::text_plain(
-			"Not allowed to deactivate the server service account.",
-		));
+		return Err!("Not allowed to deactivate the server service account.",);
 	}

 	self.services.users.deactivate_account(&user_id).await?;
@@ -218,11 +202,8 @@ pub(super) async fn deactivate(
 	if !no_leave_rooms {
 		self.services
 			.admin
-			.send_message(RoomMessageEventContent::text_plain(format!(
-				"Making {user_id} leave all rooms after deactivation..."
-			)))
-			.await
-			.ok();
+			.send_text(&format!("Making {user_id} leave all rooms after deactivation..."))
+			.await;

 		let all_joined_rooms: Vec<OwnedRoomId> = self
 			.services
@@ -239,24 +220,19 @@ pub(super) async fn deactivate(
 		leave_all_rooms(self.services, &user_id).await;
 	}

-	Ok(RoomMessageEventContent::text_plain(format!(
-		"User {user_id} has been deactivated"
-	)))
+	self.write_str(&format!("User {user_id} has been deactivated"))
+		.await
 }

 #[admin_command]
-pub(super) async fn reset_password(
-	&self,
-	username: String,
-	password: Option<String>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn reset_password(&self, username: String, password: Option<String>) -> Result {
 	let user_id = parse_local_user_id(self.services, &username)?;

 	if user_id == self.services.globals.server_user {
-		return Ok(RoomMessageEventContent::text_plain(
+		return Err!(
 			"Not allowed to set the password for the server account. Please use the emergency \
 			 password config option.",
-		));
+		);
 	}

 	let new_password = password.unwrap_or_else(|| utils::random_string(AUTO_GEN_PASSWORD_LENGTH));
@@ -266,28 +242,20 @@ pub(super) async fn reset_password(
 		.users
 		.set_password(&user_id, Some(new_password.as_str()))
 	{
-		| Ok(()) => Ok(RoomMessageEventContent::text_plain(format!(
-			"Successfully reset the password for user {user_id}: `{new_password}`"
-		))),
-		| Err(e) => Ok(RoomMessageEventContent::text_plain(format!(
-			"Couldn't reset the password for user {user_id}: {e}"
-		))),
+		| Err(e) => return Err!("Couldn't reset the password for user {user_id}: {e}"),
+		| Ok(()) =>
+			write!(self, "Successfully reset the password for user {user_id}: `{new_password}`"),
 	}
+	.await
 }

 #[admin_command]
-pub(super) async fn deactivate_all(
-	&self,
-	no_leave_rooms: bool,
-	force: bool,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn deactivate_all(&self, no_leave_rooms: bool, force: bool) -> Result {
 	if self.body.len() < 2
 		|| !self.body[0].trim().starts_with("```")
 		|| self.body.last().unwrap_or(&"").trim() != "```"
 	{
-		return Ok(RoomMessageEventContent::text_plain(
-			"Expected code block in command body. Add --help for details.",
-		));
+		return Err!("Expected code block in command body. Add --help for details.",);
 	}

 	let usernames = self
@@ -301,15 +269,23 @@ pub(super) async fn deactivate_all(
 	for username in usernames {
 		match parse_active_local_user_id(self.services, username).await {
+			| Err(e) => {
+				self.services
+					.admin
+					.send_text(&format!("{username} is not a valid username, skipping over: {e}"))
+					.await;
+				continue;
+			},
 			| Ok(user_id) => {
 				if self.services.users.is_admin(&user_id).await && !force {
 					self.services
 						.admin
-						.send_message(RoomMessageEventContent::text_plain(format!(
+						.send_text(&format!(
 							"{username} is an admin and --force is not set, skipping over"
-						)))
-						.await
-						.ok();
+						))
+						.await;
 					admins.push(username);
 					continue;
 				}
@@ -318,26 +294,16 @@ pub(super) async fn deactivate_all(
 				if user_id == self.services.globals.server_user {
 					self.services
 						.admin
-						.send_message(RoomMessageEventContent::text_plain(format!(
+						.send_text(&format!(
 							"{username} is the server service account, skipping over"
-						)))
-						.await
-						.ok();
+						))
+						.await;
 					continue;
 				}

 				user_ids.push(user_id);
 			},
-			| Err(e) => {
-				self.services
-					.admin
-					.send_message(RoomMessageEventContent::text_plain(format!(
-						"{username} is not a valid username, skipping over: {e}"
-					)))
-					.await
-					.ok();
-				continue;
-			},
 		}
 	}
@@ -345,6 +311,12 @@ pub(super) async fn deactivate_all(
 	for user_id in user_ids {
 		match self.services.users.deactivate_account(&user_id).await {
+			| Err(e) => {
+				self.services
+					.admin
+					.send_text(&format!("Failed deactivating user: {e}"))
+					.await;
+			},
 			| Ok(()) => {
 				deactivation_count = deactivation_count.saturating_add(1);
 				if !no_leave_rooms {
@@ -365,33 +337,24 @@ pub(super) async fn deactivate_all(
 					leave_all_rooms(self.services, &user_id).await;
 				}
 			},
-			| Err(e) => {
-				self.services
-					.admin
-					.send_message(RoomMessageEventContent::text_plain(format!(
-						"Failed deactivating user: {e}"
-					)))
-					.await
-					.ok();
-			},
 		}
 	}

 	if admins.is_empty() {
-		Ok(RoomMessageEventContent::text_plain(format!(
-			"Deactivated {deactivation_count} accounts."
-		)))
+		write!(self, "Deactivated {deactivation_count} accounts.")
 	} else {
-		Ok(RoomMessageEventContent::text_plain(format!(
+		write!(
+			self,
 			"Deactivated {deactivation_count} accounts.\nSkipped admin accounts: {}. Use \
 			 --force to deactivate admin accounts",
 			admins.join(", ")
-		)))
+		)
 	}
+	.await
 }

 #[admin_command]
-pub(super) async fn list_joined_rooms(&self, user_id: String) -> Result<RoomMessageEventContent> {
+pub(super) async fn list_joined_rooms(&self, user_id: String) -> Result {
 	// Validate user id
 	let user_id = parse_local_user_id(self.services, &user_id)?;
@@ -405,23 +368,20 @@ pub(super) async fn list_joined_rooms(&self, user_id: String) -> Result<RoomMess
 		.await;

 	if rooms.is_empty() {
-		return Ok(RoomMessageEventContent::text_plain("User is not in any rooms."));
+		return Err!("User is not in any rooms.");
 	}

 	rooms.sort_by_key(|r| r.1);
 	rooms.reverse();

-	let output_plain = format!(
-		"Rooms {user_id} Joined ({}):\n```\n{}\n```",
-		rooms.len(),
-		rooms
-			.iter()
-			.map(|(id, members, name)| format!("{id}\tMembers: {members}\tName: {name}"))
-			.collect::<Vec<_>>()
-			.join("\n")
-	);
+	let body = rooms
+		.iter()
+		.map(|(id, members, name)| format!("{id}\tMembers: {members}\tName: {name}"))
+		.collect::<Vec<_>>()
+		.join("\n");

-	Ok(RoomMessageEventContent::notice_markdown(output_plain))
+	self.write_str(&format!("Rooms {user_id} Joined ({}):\n```\n{body}\n```", rooms.len(),))
+		.await
 }
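The room list is pre-rendered into body and written once. The sort-then-render step on its own, with the tuple layout taken from the closure above:

    // Sort rooms by member count descending, then render one tab-separated
    // line per room; tuples are (room ID, member count, room name).
    fn render(mut rooms: Vec<(String, u64, String)>) -> String {
        rooms.sort_by_key(|r| r.1);
        rooms.reverse();
        rooms
            .iter()
            .map(|(id, members, name)| format!("{id}\tMembers: {members}\tName: {name}"))
            .collect::<Vec<_>>()
            .join("\n")
    }

    fn main() {
        let rooms = vec![
            ("!a:example.org".into(), 2, "Two".into()),
            ("!b:example.org".into(), 9, "Nine".into()),
        ];
        assert!(render(rooms).starts_with("!b:example.org"));
    }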
 #[admin_command]
@@ -429,27 +389,23 @@ pub(super) async fn force_join_list_of_local_users(
 	&self,
 	room_id: OwnedRoomOrAliasId,
 	yes_i_want_to_do_this: bool,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	if self.body.len() < 2
 		|| !self.body[0].trim().starts_with("```")
 		|| self.body.last().unwrap_or(&"").trim() != "```"
 	{
-		return Ok(RoomMessageEventContent::text_plain(
-			"Expected code block in command body. Add --help for details.",
-		));
+		return Err!("Expected code block in command body. Add --help for details.",);
 	}

 	if !yes_i_want_to_do_this {
-		return Ok(RoomMessageEventContent::notice_markdown(
+		return Err!(
 			"You must pass the --yes-i-want-to-do-this-flag to ensure you really want to force \
 			 bulk join all specified local users.",
-		));
+		);
 	}

 	let Ok(admin_room) = self.services.admin.get_admin_room().await else {
-		return Ok(RoomMessageEventContent::notice_markdown(
-			"There is not an admin room to check for server admins.",
-		));
+		return Err!("There is not an admin room to check for server admins.",);
 	};

 	let (room_id, servers) = self
@@ -466,7 +422,7 @@ pub(super) async fn force_join_list_of_local_users(
 		.server_in_room(self.services.globals.server_name(), &room_id)
 		.await
 	{
-		return Ok(RoomMessageEventContent::notice_markdown("We are not joined in this room."));
+		return Err!("We are not joined in this room.");
 	}

 	let server_admins: Vec<_> = self
@@ -486,9 +442,7 @@ pub(super) async fn force_join_list_of_local_users(
 		.ready_any(|user_id| server_admins.contains(&user_id.to_owned()))
 		.await
 	{
-		return Ok(RoomMessageEventContent::notice_markdown(
-			"There is not a single server admin in the room.",
-		));
+		return Err!("There is not a single server admin in the room.",);
 	}

 	let usernames = self
@@ -506,11 +460,11 @@ pub(super) async fn force_join_list_of_local_users(
 				if user_id == self.services.globals.server_user {
 					self.services
 						.admin
-						.send_message(RoomMessageEventContent::text_plain(format!(
+						.send_text(&format!(
 							"{username} is the server service account, skipping over"
-						)))
-						.await
-						.ok();
+						))
+						.await;
 					continue;
 				}
@@ -519,11 +473,9 @@ pub(super) async fn force_join_list_of_local_users(
 			| Err(e) => {
 				self.services
 					.admin
-					.send_message(RoomMessageEventContent::text_plain(format!(
-						"{username} is not a valid username, skipping over: {e}"
-					)))
-					.await
-					.ok();
+					.send_text(&format!("{username} is not a valid username, skipping over: {e}"))
+					.await;
 				continue;
 			},
 		}
@@ -554,10 +506,11 @@ pub(super) async fn force_join_list_of_local_users(
 		}
 	}

-	Ok(RoomMessageEventContent::notice_markdown(format!(
+	self.write_str(&format!(
 		"{successful_joins} local users have been joined to {room_id}. {failed_joins} joins \
 		 failed.",
-	)))
+	))
+	.await
 }

 #[admin_command]
@@ -565,18 +518,16 @@ pub(super) async fn force_join_all_local_users(
 	&self,
 	room_id: OwnedRoomOrAliasId,
 	yes_i_want_to_do_this: bool,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	if !yes_i_want_to_do_this {
-		return Ok(RoomMessageEventContent::notice_markdown(
+		return Err!(
 			"You must pass the --yes-i-want-to-do-this-flag to ensure you really want to force \
 			 bulk join all local users.",
-		));
+		);
 	}

 	let Ok(admin_room) = self.services.admin.get_admin_room().await else {
-		return Ok(RoomMessageEventContent::notice_markdown(
-			"There is not an admin room to check for server admins.",
-		));
+		return Err!("There is not an admin room to check for server admins.",);
 	};

 	let (room_id, servers) = self
@@ -593,7 +544,7 @@ pub(super) async fn force_join_all_local_users(
 		.server_in_room(self.services.globals.server_name(), &room_id)
 		.await
 	{
-		return Ok(RoomMessageEventContent::notice_markdown("We are not joined in this room."));
+		return Err!("We are not joined in this room.");
 	}

 	let server_admins: Vec<_> = self
@@ -613,9 +564,7 @@ pub(super) async fn force_join_all_local_users(
 		.ready_any(|user_id| server_admins.contains(&user_id.to_owned()))
 		.await
 	{
-		return Ok(RoomMessageEventContent::notice_markdown(
-			"There is not a single server admin in the room.",
-		));
+		return Err!("There is not a single server admin in the room.",);
 	}

 	let mut failed_joins: usize = 0;
@@ -650,10 +599,11 @@ pub(super) async fn force_join_all_local_users(
 		}
 	}

-	Ok(RoomMessageEventContent::notice_markdown(format!(
+	self.write_str(&format!(
 		"{successful_joins} local users have been joined to {room_id}. {failed_joins} joins \
 		 failed.",
-	)))
+	))
+	.await
 }

 #[admin_command]
@@ -661,7 +611,7 @@ pub(super) async fn force_join_room(
 	&self,
 	user_id: String,
 	room_id: OwnedRoomOrAliasId,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	let user_id = parse_local_user_id(self.services, &user_id)?;
 	let (room_id, servers) = self
 		.services
@@ -677,9 +627,8 @@ pub(super) async fn force_join_room(
 	join_room_by_id_helper(self.services, &user_id, &room_id, None, &servers, None, &None)
 		.await?;

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"{user_id} has been joined to {room_id}.",
-	)))
+	self.write_str(&format!("{user_id} has been joined to {room_id}.",))
+		.await
 }

 #[admin_command]
@@ -687,7 +636,7 @@ pub(super) async fn force_leave_room(
 	&self,
 	user_id: String,
 	room_id: OwnedRoomOrAliasId,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	let user_id = parse_local_user_id(self.services, &user_id)?;
 	let room_id = self.services.rooms.alias.resolve(&room_id).await?;
@@ -703,24 +652,17 @@ pub(super) async fn force_leave_room(
 		.is_joined(&user_id, &room_id)
 		.await
 	{
-		return Ok(RoomMessageEventContent::notice_markdown(format!(
-			"{user_id} is not joined in the room"
-		)));
+		return Err!("{user_id} is not joined in the room");
 	}

 	leave_room(self.services, &user_id, &room_id, None).await?;

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"{user_id} has left {room_id}.",
-	)))
+	self.write_str(&format!("{user_id} has left {room_id}.",))
+		.await
 }

 #[admin_command]
-pub(super) async fn force_demote(
-	&self,
-	user_id: String,
-	room_id: OwnedRoomOrAliasId,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn force_demote(&self, user_id: String, room_id: OwnedRoomOrAliasId) -> Result {
 	let user_id = parse_local_user_id(self.services, &user_id)?;
 	let room_id = self.services.rooms.alias.resolve(&room_id).await?;
@@ -731,15 +673,11 @@ pub(super) async fn force_demote(
 	let state_lock = self.services.rooms.state.mutex.lock(&room_id).await;

-	let room_power_levels = self
+	let room_power_levels: Option<RoomPowerLevelsEventContent> = self
 		.services
 		.rooms
 		.state_accessor
-		.room_state_get_content::<RoomPowerLevelsEventContent>(
-			&room_id,
-			&StateEventType::RoomPowerLevels,
-			"",
-		)
+		.room_state_get_content(&room_id, &StateEventType::RoomPowerLevels, "")
 		.await
 		.ok();
@@ -757,9 +695,7 @@ pub(super) async fn force_demote(
 		.is_ok_and(|event| event.sender == user_id);

 	if !user_can_demote_self {
-		return Ok(RoomMessageEventContent::notice_markdown(
-			"User is not allowed to modify their own power levels in the room.",
-		));
+		return Err!("User is not allowed to modify their own power levels in the room.",);
 	}

 	let mut power_levels_content = room_power_levels.unwrap_or_default();
@@ -777,34 +713,34 @@ pub(super) async fn force_demote(
 	)
 	.await?;

-	Ok(RoomMessageEventContent::notice_markdown(format!(
+	self.write_str(&format!(
 		"User {user_id} demoted themselves to the room default power level in {room_id} - \
 		 {event_id}"
-	)))
+	))
+	.await
 }
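In force_demote the turbofish on room_state_get_content moves to a type annotation on the binding; both pin down the same generic parameter. A tiny illustration:

    // Two equivalent ways to fix a generic return type in Rust.
    fn parse<T: std::str::FromStr>(s: &str) -> Option<T> {
        s.parse().ok()
    }

    fn main() {
        let a = parse::<u32>("42");       // turbofish at the call site
        let b: Option<u32> = parse("42"); // annotation on the binding
        assert_eq!(a, b);
    }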
 #[admin_command]
-pub(super) async fn make_user_admin(&self, user_id: String) -> Result<RoomMessageEventContent> {
+pub(super) async fn make_user_admin(&self, user_id: String) -> Result {
 	let user_id = parse_local_user_id(self.services, &user_id)?;
 	assert!(
 		self.services.globals.user_is_local(&user_id),
 		"Parsed user_id must be a local user"
 	);

 	self.services.admin.make_user_admin(&user_id).await?;

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"{user_id} has been granted admin privileges.",
-	)))
+	self.write_str(&format!("{user_id} has been granted admin privileges.",))
+		.await
 }

 #[admin_command]
 pub(super) async fn put_room_tag(
 	&self,
 	user_id: String,
-	room_id: Box<RoomId>,
+	room_id: OwnedRoomId,
 	tag: String,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	let user_id = parse_active_local_user_id(self.services, &user_id).await?;

 	let mut tags_event = self
@@ -831,18 +767,19 @@ pub(super) async fn put_room_tag(
 	)
 	.await?;

-	Ok(RoomMessageEventContent::text_plain(format!(
+	self.write_str(&format!(
 		"Successfully updated room account data for {user_id} and room {room_id} with tag {tag}"
-	)))
+	))
+	.await
 }

 #[admin_command]
 pub(super) async fn delete_room_tag(
 	&self,
 	user_id: String,
-	room_id: Box<RoomId>,
+	room_id: OwnedRoomId,
 	tag: String,
-) -> Result<RoomMessageEventContent> {
+) -> Result {
 	let user_id = parse_active_local_user_id(self.services, &user_id).await?;

 	let mut tags_event = self
@@ -866,18 +803,15 @@ pub(super) async fn delete_room_tag(
 	)
 	.await?;

-	Ok(RoomMessageEventContent::text_plain(format!(
+	self.write_str(&format!(
 		"Successfully updated room account data for {user_id} and room {room_id}, deleting room \
 		 tag {tag}"
-	)))
+	))
+	.await
 }

 #[admin_command]
-pub(super) async fn get_room_tags(
-	&self,
-	user_id: String,
-	room_id: Box<RoomId>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn get_room_tags(&self, user_id: String, room_id: OwnedRoomId) -> Result {
 	let user_id = parse_active_local_user_id(self.services, &user_id).await?;

 	let tags_event = self
@@ -889,17 +823,12 @@ pub(super) async fn get_room_tags(
 			content: TagEventContent { tags: BTreeMap::new() },
 		});

-	Ok(RoomMessageEventContent::notice_markdown(format!(
-		"```\n{:#?}\n```",
-		tags_event.content.tags
-	)))
+	self.write_str(&format!("```\n{:#?}\n```", tags_event.content.tags))
+		.await
 }

 #[admin_command]
-pub(super) async fn redact_event(
-	&self,
-	event_id: Box<EventId>,
-) -> Result<RoomMessageEventContent> {
+pub(super) async fn redact_event(&self, event_id: OwnedEventId) -> Result {
 	let Ok(event) = self
 		.services
 		.rooms
@@ -907,20 +836,18 @@ pub(super) async fn redact_event(
 		.get_non_outlier_pdu(&event_id)
 		.await
 	else {
-		return Ok(RoomMessageEventContent::text_plain("Event does not exist in our database."));
+		return Err!("Event does not exist in our database.");
 	};

 	if event.is_redacted() {
-		return Ok(RoomMessageEventContent::text_plain("Event is already redacted."));
+		return Err!("Event is already redacted.");
 	}

 	let room_id = event.room_id;
 	let sender_user = event.sender;

 	if !self.services.globals.user_is_local(&sender_user) {
-		return Ok(RoomMessageEventContent::text_plain(
-			"This command only works on local users.",
-		));
+		return Err!("This command only works on local users.");
 	}

 	let reason = format!(
@@ -949,9 +876,8 @@ pub(super) async fn redact_event(
 		.await?
 	};

-	let out = format!("Successfully redacted event. Redaction event ID: {redaction_event_id}");
-
-	self.write_str(out.as_str()).await?;
-
-	Ok(RoomMessageEventContent::text_plain(""))
+	self.write_str(&format!(
+		"Successfully redacted event. Redaction event ID: {redaction_event_id}"
+	))
+	.await
 }

View file

@@ -2,7 +2,7 @@ mod commands;

 use clap::Subcommand;
 use conduwuit::Result;
-use ruma::{EventId, OwnedRoomOrAliasId, RoomId};
+use ruma::{OwnedEventId, OwnedRoomId, OwnedRoomOrAliasId};

 use crate::admin_command_dispatch;
@@ -102,21 +102,21 @@ pub(super) enum UserCommand {
 	/// room's internal ID, and the tag name `m.server_notice`.
 	PutRoomTag {
 		user_id: String,
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 		tag: String,
 	},

 	/// - Deletes the room tag for the specified user and room ID
 	DeleteRoomTag {
 		user_id: String,
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 		tag: String,
 	},

 	/// - Gets all the room tags for the specified user and room ID
 	GetRoomTags {
 		user_id: String,
-		room_id: Box<RoomId>,
+		room_id: OwnedRoomId,
 	},

 	/// - Attempts to forcefully redact the specified event ID from the sender
@@ -124,7 +124,7 @@ pub(super) enum UserCommand {
 	///
 	/// This is only valid for local users
 	RedactEvent {
-		event_id: Box<EventId>,
+		event_id: OwnedEventId,
 	},

 	/// - Force joins a specified list of local users to join the specified
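Box<RoomId> and Box<EventId> arguments become ruma's OwnedRoomId and OwnedEventId, the owned counterparts of the borrowed ID types. A quick sketch of the relationship (the room ID literal is a placeholder):

    use ruma::{OwnedRoomId, RoomId};

    fn main() {
        // Owned* IDs relate to their borrowed forms like String to &str.
        let owned: OwnedRoomId = RoomId::parse("!abcdef:example.org").expect("valid room ID");
        let borrowed: &RoomId = &owned;
        assert_eq!(borrowed.as_str(), "!abcdef:example.org");
    }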

View file

@@ -1,3 +1,5 @@
+#![allow(dead_code)]
+
 use conduwuit_core::{Err, Result, err};
 use ruma::{OwnedRoomId, OwnedUserId, RoomId, UserId};
 use service::Services;

View file

@@ -17,21 +17,50 @@ crate-type = [
 ]

 [features]
-element_hacks = []
-release_max_log_level = [
-	"tracing/max_level_trace",
-	"tracing/release_max_level_info",
-	"log/max_level_trace",
-	"log/release_max_level_info",
-]
-zstd_compression = [
-	"reqwest/zstd",
-]
-gzip_compression = [
-	"reqwest/gzip",
-]
-brotli_compression = [
-	"reqwest/brotli",
-]
+brotli_compression = [
+	"conduwuit-core/brotli_compression",
+	"conduwuit-service/brotli_compression",
+	"reqwest/brotli",
+]
+element_hacks = [
+	"conduwuit-service/element_hacks",
+]
+gzip_compression = [
+	"conduwuit-core/gzip_compression",
+	"conduwuit-service/gzip_compression",
+	"reqwest/gzip",
+]
+io_uring = [
+	"conduwuit-service/io_uring",
+]
+jemalloc = [
+	"conduwuit-core/jemalloc",
+	"conduwuit-service/jemalloc",
+]
+jemalloc_conf = [
+	"conduwuit-core/jemalloc_conf",
+	"conduwuit-service/jemalloc_conf",
+]
+jemalloc_prof = [
+	"conduwuit-core/jemalloc_prof",
+	"conduwuit-service/jemalloc_prof",
+]
+jemalloc_stats = [
+	"conduwuit-core/jemalloc_stats",
+	"conduwuit-service/jemalloc_stats",
+]
+release_max_log_level = [
+	"conduwuit-core/release_max_log_level",
+	"conduwuit-service/release_max_log_level",
+	"log/max_level_trace",
+	"log/release_max_level_info",
+	"tracing/max_level_trace",
+	"tracing/release_max_level_info",
+]
+zstd_compression = [
+	"conduwuit-core/zstd_compression",
+	"conduwuit-service/zstd_compression",
+	"reqwest/zstd",
+]

 [dependencies]
@@ -42,7 +71,6 @@ axum.workspace = true
 base64.workspace = true
 bytes.workspace = true
 conduwuit-core.workspace = true
-conduwuit-database.workspace = true
 conduwuit-service.workspace = true
 const-str.workspace = true
 futures.workspace = true
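Every feature now forwards to the same-named feature in conduwuit-core and conduwuit-service, so enabling one flag on this crate switches the conditional code in the crates below it too. Roughly how such a flag is consumed on the Rust side (function names are illustrative):

    // Illustrative only: code downstream gating on one of the cascaded
    // cargo features declared above.
    #[cfg(feature = "jemalloc_stats")]
    fn allocator_report() -> Option<String> {
        Some("jemalloc statistics would be gathered here".to_owned())
    }

    #[cfg(not(feature = "jemalloc_stats"))]
    fn allocator_report() -> Option<String> {
        None
    }

    fn main() {
        // Prints Some(...) only when built with --features jemalloc_stats.
        println!("{:?}", allocator_report());
    }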

View file

@@ -1,6 +1,6 @@
 use std::{
 	borrow::Borrow,
-	collections::{BTreeMap, HashMap, HashSet},
+	collections::{HashMap, HashSet},
 	iter::once,
 	net::IpAddr,
 	sync::Arc,
@@ -9,7 +9,7 @@ use std::{
 use axum::extract::State;
 use axum_client_ip::InsecureClientIp;
 use conduwuit::{
-	Err, Result, at, debug, debug_info, debug_warn, err, error, info,
+	Err, Result, at, debug, debug_error, debug_info, debug_warn, err, error, info, is_matching,
 	matrix::{
 		StateKey,
 		pdu::{PduBuilder, PduEvent, gen_event_id, gen_event_id_canonical_json},
@@ -17,7 +17,12 @@ use conduwuit::{
 	},
 	result::{FlatOk, NotFound},
 	trace,
-	utils::{self, IterStream, ReadyExt, shuffle},
+	utils::{
+		self, FutureBoolExt,
+		future::ReadyEqExt,
+		shuffle,
+		stream::{BroadbandExt, IterStream, ReadyExt},
+	},
 	warn,
 };
 use conduwuit_service::{
@@ -28,7 +33,7 @@ use conduwuit_service::{
 		state_compressor::{CompressedState, HashSetCompressStateEvent},
 	},
 };
-use futures::{FutureExt, StreamExt, TryFutureExt, future::join4, join};
+use futures::{FutureExt, StreamExt, TryFutureExt, join, pin_mut};
 use ruma::{
 	CanonicalJsonObject, CanonicalJsonValue, OwnedEventId, OwnedRoomId, OwnedServerName,
 	OwnedUserId, RoomId, RoomVersionId, ServerName, UserId,
@@ -52,7 +57,6 @@ use ruma::{
 		room::{
 			join_rules::{AllowRule, JoinRule, RoomJoinRulesEventContent},
 			member::{MembershipState, RoomMemberEventContent},
-			message::RoomMessageEventContent,
 		},
 	},
 };
@@ -81,7 +85,7 @@ async fn banned_room_check(
 		|| services
 			.config
 			.forbidden_remote_server_names
-			.is_match(room_id.server_name().unwrap().host())
+			.is_match(room_id.server_name().expect("legacy room mxid").host())
 	{
 		warn!(
 			"User {user_id} who is not an admin attempted to send an invite for or \
@@ -96,12 +100,11 @@ async fn banned_room_check(
 			if services.server.config.admin_room_notices {
 				services
 					.admin
-					.send_message(RoomMessageEventContent::text_plain(format!(
+					.send_text(&format!(
 						"Automatically deactivating user {user_id} due to attempted banned \
 						 room join from IP {client_ip}"
-					)))
-					.await
-					.ok();
+					))
+					.await;
 			}

 			let all_joined_rooms: Vec<OwnedRoomId> = services
@@ -136,12 +139,11 @@ async fn banned_room_check(
 			if services.server.config.admin_room_notices {
 				services
 					.admin
-					.send_message(RoomMessageEventContent::text_plain(format!(
+					.send_text(&format!(
 						"Automatically deactivating user {user_id} due to attempted banned \
 						 room join from IP {client_ip}"
-					)))
-					.await
-					.ok();
+					))
+					.await;
 			}

 			let all_joined_rooms: Vec<OwnedRoomId> = services
@@ -366,10 +368,10 @@ pub(crate) async fn knock_room_route(
 	InsecureClientIp(client): InsecureClientIp,
 	body: Ruma<knock_room::v3::Request>,
 ) -> Result<knock_room::v3::Response> {
-	let sender_user = body.sender_user.as_ref().expect("user is authenticated");
-	let body = body.body;
+	let sender_user = body.sender_user();
+	let body = &body.body;

-	let (servers, room_id) = match OwnedRoomId::try_from(body.room_id_or_alias) {
+	let (servers, room_id) = match OwnedRoomId::try_from(body.room_id_or_alias.clone()) {
 		| Ok(room_id) => {
 			banned_room_check(
 				&services,
@@ -493,7 +495,7 @@ pub(crate) async fn invite_user_route(
 	let sender_user = body.sender_user();

 	if !services.users.is_admin(sender_user).await && services.config.block_non_admin_invites {
-		info!(
+		debug_error!(
 			"User {sender_user} is not an admin and attempted to send an invite to room {}",
 			&body.room_id
 		);
@@ -722,12 +724,10 @@ pub(crate) async fn forget_room_route(
 	let joined = services.rooms.state_cache.is_joined(user_id, room_id);
 	let knocked = services.rooms.state_cache.is_knocked(user_id, room_id);
-	let left = services.rooms.state_cache.is_left(user_id, room_id);
 	let invited = services.rooms.state_cache.is_invited(user_id, room_id);
-	let (joined, knocked, left, invited) = join4(joined, knocked, left, invited).await;

-	if joined || knocked || invited {
+	pin_mut!(joined, knocked, invited);
+	if joined.or(knocked).or(invited).await {
 		return Err!(Request(Unknown("You must leave the room before forgetting it")));
 	}
@@ -741,11 +741,11 @@ pub(crate) async fn forget_room_route(
 		return Err!(Request(Unknown("No membership event was found, room was never joined")));
 	}

-	if left
-		|| membership.is_ok_and(|member| {
-			member.membership == MembershipState::Leave
-				|| member.membership == MembershipState::Ban
-		}) {
+	let non_membership = membership
+		.map(|member| member.membership)
+		.is_ok_and(is_matching!(MembershipState::Leave | MembershipState::Ban));
+
+	if non_membership || services.rooms.state_cache.is_left(user_id, room_id).await {
 		services.rooms.state_cache.forget(room_id, user_id);
 	}
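forget_room_route previously awaited all four membership checks via join4 and then tested the booleans; the new code pins three futures and chains FutureBoolExt::or, so polling can stop at the first true. A rough model of what such an or combinator does, built only on the futures crate (the real trait lives in conduwuit's utils and may differ):

    use futures::future::{self, Either};
    use futures::pin_mut;

    // Hypothetical stand-in for FutureBoolExt::or: yield true as soon as
    // either future resolves true, awaiting the second only if needed.
    async fn or<A, B>(a: A, b: B) -> bool
    where
        A: std::future::Future<Output = bool>,
        B: std::future::Future<Output = bool>,
    {
        pin_mut!(a, b);
        match future::select(a, b).await {
            | Either::Left((true, _)) | Either::Right((true, _)) => true,
            | Either::Left((false, rest)) => rest.await,
            | Either::Right((false, rest)) => rest.await,
        }
    }

    fn main() {
        futures::executor::block_on(async {
            let joined = async { false };
            let knocked = async { true };
            assert!(or(joined, knocked).await);
        });
    }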
@@ -866,32 +866,32 @@ pub(crate) async fn joined_members_route(
 	State(services): State<crate::State>,
 	body: Ruma<joined_members::v3::Request>,
 ) -> Result<joined_members::v3::Response> {
-	let sender_user = body.sender_user();
-
 	if !services
 		.rooms
 		.state_accessor
-		.user_can_see_state_events(sender_user, &body.room_id)
+		.user_can_see_state_events(body.sender_user(), &body.room_id)
 		.await
 	{
 		return Err!(Request(Forbidden("You don't have permission to view this room.")));
 	}

-	let joined: BTreeMap<OwnedUserId, RoomMember> = services
-		.rooms
-		.state_cache
-		.room_members(&body.room_id)
-		.map(ToOwned::to_owned)
-		.then(|user| async move {
-			(user.clone(), RoomMember {
-				display_name: services.users.displayname(&user).await.ok(),
-				avatar_url: services.users.avatar_url(&user).await.ok(),
-			})
-		})
-		.collect()
-		.await;
-
-	Ok(joined_members::v3::Response { joined })
+	Ok(joined_members::v3::Response {
+		joined: services
+			.rooms
+			.state_cache
+			.room_members(&body.room_id)
+			.map(ToOwned::to_owned)
+			.broad_then(|user_id| async move {
+				let member = RoomMember {
+					display_name: services.users.displayname(&user_id).await.ok(),
+					avatar_url: services.users.avatar_url(&user_id).await.ok(),
+				};
+
+				(user_id, member)
+			})
+			.collect()
+			.await,
+	})
 }

 pub async fn join_room_by_id_helper(
@@ -1118,9 +1118,10 @@ async fn join_room_by_id_helper_remote(
 	})?;

 	if signed_event_id != event_id {
-		return Err!(Request(BadJson(
-			warn!(%signed_event_id, %event_id, "Server {remote_server} sent event with wrong event ID")
-		)));
+		return Err!(Request(BadJson(warn!(
+			%signed_event_id, %event_id,
+			"Server {remote_server} sent event with wrong event ID"
+		))));
 	}

 	match signed_value["signatures"]
@@ -1696,19 +1697,18 @@ pub(crate) async fn invite_helper(
 	})?;

 	if pdu.event_id != event_id {
-		return Err!(Request(BadJson(
-			warn!(%pdu.event_id, %event_id, "Server {} sent event with wrong event ID", user_id.server_name())
-		)));
+		return Err!(Request(BadJson(warn!(
+			%pdu.event_id, %event_id,
+			"Server {} sent event with wrong event ID",
+			user_id.server_name()
+		))));
 	}

-	let origin: OwnedServerName = serde_json::from_value(
-		serde_json::to_value(
-			value
-				.get("origin")
-				.ok_or_else(|| err!(Request(BadJson("Event missing origin field."))))?,
-		)
-		.expect("CanonicalJson is valid json value"),
-	)
+	let origin: OwnedServerName = serde_json::from_value(serde_json::to_value(
+		value
+			.get("origin")
+			.ok_or_else(|| err!(Request(BadJson("Event missing origin field."))))?,
+	)?)
 	.map_err(|e| {
 		err!(Request(BadJson(warn!("Origin field in event is not a valid server name: {e}"))))
 	})?;
@@ -1818,9 +1818,11 @@ pub async fn leave_room(
 		blurhash: None,
 	};

-	if services.rooms.metadata.is_banned(room_id).await
-		|| services.rooms.metadata.is_disabled(room_id).await
-	{
+	let is_banned = services.rooms.metadata.is_banned(room_id);
+	let is_disabled = services.rooms.metadata.is_disabled(room_id);
+
+	pin_mut!(is_banned, is_disabled);
+	if is_banned.or(is_disabled).await {
 		// the room is banned/disabled, the room must be rejected locally since we
 		// cant/dont want to federate with this server
 		services
@@ -1840,18 +1842,21 @@ pub async fn leave_room(
 		return Ok(());
 	}

-	// Ask a remote server if we don't have this room and are not knocking on it
-	if !services
+	let dont_have_room = services
 		.rooms
 		.state_cache
 		.server_in_room(services.globals.server_name(), room_id)
-		.await && !services
+		.eq(&false);
+
+	let not_knocked = services
 		.rooms
 		.state_cache
 		.is_knocked(user_id, room_id)
-		.await
-	{
-		if let Err(e) = remote_leave_room(services, user_id, room_id).await {
+		.eq(&false);
+
+	// Ask a remote server if we don't have this room and are not knocking on it
+	if dont_have_room.and(not_knocked).await {
+		if let Err(e) = remote_leave_room(services, user_id, room_id).boxed().await {
 			warn!(%user_id, "Failed to leave room {room_id} remotely: {e}");
 			// Don't tell the client about this error
 		}
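ReadyEqExt::eq(&false) appears to turn a Future<Output = bool> into a future of the comparison result, letting leave_room name each condition before combining them. With plain futures the same shape is a map:

    use futures::FutureExt;

    fn main() {
        futures::executor::block_on(async {
            // Stand-in for server_in_room(...): a future resolving to bool.
            let server_in_room = async { false };
            // ReadyEqExt::eq(&false) presumably compares the resolved value;
            // with plain `futures`, `map` expresses the same thing.
            let dont_have_room = server_in_room.map(|v| v.eq(&false));
            assert!(dont_have_room.await);
        });
    }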

View file

@@ -21,12 +21,15 @@ use conduwuit_service::{
 };
 use futures::{FutureExt, StreamExt, TryFutureExt, future::OptionFuture, pin_mut};
 use ruma::{
-	RoomId, UserId,
+	DeviceId, RoomId, UserId,
 	api::{
 		Direction,
 		client::{filter::RoomEventFilter, message::get_message_events},
 	},
-	events::{AnyStateEvent, StateEventType, TimelineEventType, TimelineEventType::*},
+	events::{
+		AnyStateEvent, StateEventType,
+		TimelineEventType::{self, *},
+	},
 	serde::Raw,
 };
@@ -67,8 +70,8 @@ pub(crate) async fn get_message_events_route(
 	body: Ruma<get_message_events::v3::Request>,
 ) -> Result<get_message_events::v3::Response> {
 	debug_assert!(IGNORED_MESSAGE_TYPES.is_sorted(), "IGNORED_MESSAGE_TYPES is not sorted");

-	let sender = body.sender();
-	let (sender_user, sender_device) = sender;
+	let sender_user = body.sender_user();
+	let sender_device = body.sender_device.as_ref();
 	let room_id = &body.room_id;
 	let filter = &body.filter;
@@ -129,10 +132,20 @@ pub(crate) async fn get_message_events_route(
 		.take(limit)
 		.collect()
 		.await;

+	// let appservice_id = body.appservice_info.map(|appservice|
+	// appservice.registration.id);
 	let lazy_loading_context = lazy_loading::Context {
 		user_id: sender_user,
-		device_id: sender_device,
+		device_id: match sender_device {
+			| Some(device_id) => device_id,
+			| None =>
+				if let Some(registration) = body.appservice_info.as_ref() {
+					<&DeviceId>::from(registration.registration.id.as_str())
+				} else {
+					<&DeviceId>::from("")
+				},
+		},
 		room_id,
 		token: Some(from.into_unsigned()),
 		options: Some(&filter.lazy_load_options),
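When an appservice drives the request there may be no sender_device, so the lazy-loading context now falls back to the appservice registration ID, then to an empty device ID, instead of assuming a device exists. The fallback condensed into a standalone function (inputs are hypothetical):

    use ruma::DeviceId;

    // Hypothetical helper mirroring the fallback above.
    fn device_for_lazy_loading<'a>(
        sender_device: Option<&'a DeviceId>,
        appservice_id: Option<&'a str>,
    ) -> &'a DeviceId {
        match sender_device {
            | Some(device_id) => device_id,
            // An appservice gets its registration ID as a pseudo device, so
            // two appservices reading the same user don't share lazy-load state.
            | None => <&DeviceId>::from(appservice_id.unwrap_or("")),
        }
    }

    fn main() {
        let device = device_for_lazy_loading(None, Some("bridge_1"));
        assert_eq!(device.as_str(), "bridge_1");
    }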

View file

@@ -121,7 +121,9 @@ where
 		.map(|(key, val)| (key, val.collect()))
 		.collect();

-	if !populate {
+	if populate {
+		rooms.push(summary_to_chunk(summary.clone()));
+	} else {
 		children = children
 			.iter()
 			.rev()
@@ -144,10 +146,8 @@ where
 			.collect();
 	}

-	if populate {
-		rooms.push(summary_to_chunk(summary.clone()));
-	} else if queue.is_empty() && children.is_empty() {
-		return Err!(Request(InvalidParam("Room IDs in token were not found.")));
+	if queue.is_empty() && children.is_empty() {
+		break;
 	}

 	parents.insert(current_room.clone());
@@ -179,7 +179,7 @@ where
 	(next_short_room_ids.iter().ne(short_room_ids) && !next_short_room_ids.is_empty())
 		.then_some(PaginationToken {
 			short_room_ids: next_short_room_ids,
-			limit: max_depth.try_into().ok()?,
+			limit: limit.try_into().ok()?,
 			max_depth: max_depth.try_into().ok()?,
 			suggested_only,
 		})
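The pagination token previously copied max_depth into its limit field, so the next page came back with the wrong size; the token now round-trips the caller's limit. A small demonstration of the corrected construction (field types simplified to u64):

    // Simplified pagination token: the hierarchy endpoint must hand back
    // the same limit the client paged with, or the next page changes size.
    struct PaginationToken {
        short_room_ids: Vec<u64>,
        limit: u64,
        max_depth: u64,
        suggested_only: bool,
    }

    fn next_token(short_room_ids: Vec<u64>, limit: usize, max_depth: usize) -> Option<PaginationToken> {
        Some(PaginationToken {
            short_room_ids,
            // was: max_depth.try_into().ok()? , which silently replaced the limit
            limit: limit.try_into().ok()?,
            max_depth: max_depth.try_into().ok()?,
            suggested_only: false,
        })
    }

    fn main() {
        let token = next_token(vec![1, 2], 10, 3).unwrap();
        assert_eq!((token.limit, token.max_depth), (10, 3));
    }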

View file

@@ -5,16 +5,12 @@
 use conduwuit::{
 	Error, PduCount, Result,
 	matrix::pdu::PduEvent,
-	utils::{
-		IterStream,
-		stream::{BroadbandExt, ReadyExt, TryIgnore},
-	},
+	utils::stream::{BroadbandExt, ReadyExt, TryIgnore},
 };
 use conduwuit_service::Services;
 use futures::{StreamExt, pin_mut};
 use ruma::{
 	RoomId, UserId,
-	directory::RoomTypeFilter,
 	events::TimelineEventType::{
 		self, Beacon, CallInvite, PollStart, RoomEncrypted, RoomMessage, Sticker,
 	},
@@ -87,33 +83,3 @@ async fn share_encrypted_room(
 	})
 	.await
 }
-
-pub(crate) async fn filter_rooms<'a>(
-	services: &Services,
-	rooms: &[&'a RoomId],
-	filter: &[RoomTypeFilter],
-	negate: bool,
-) -> Vec<&'a RoomId> {
-	rooms
-		.iter()
-		.stream()
-		.filter_map(|r| async move {
-			let room_type = services.rooms.state_accessor.get_room_type(r).await;
-
-			if room_type.as_ref().is_err_and(|e| !e.is_not_found()) {
-				return None;
-			}
-
-			let room_type_filter = RoomTypeFilter::from(room_type.ok());
-
-			let include = if negate {
-				!filter.contains(&room_type_filter)
-			} else {
-				filter.is_empty() || filter.contains(&room_type_filter)
-			};
-
-			include.then_some(r)
-		})
-		.collect()
-		.await
-}

View file

@@ -14,8 +14,8 @@ use conduwuit::{
 	pair_of, ref_at,
 	result::FlatOk,
 	utils::{
-		self, BoolExt, IterStream, ReadyExt, TryFutureExtExt,
-		future::OptionStream,
+		self, BoolExt, FutureBoolExt, IterStream, ReadyExt, TryFutureExtExt,
+		future::{OptionStream, ReadyEqExt},
 		math::ruma_from_u64,
 		stream::{BroadbandExt, Tools, TryExpect, WidebandExt},
 	},
@@ -32,6 +32,7 @@ use conduwuit_service::{
 use futures::{
 	FutureExt, StreamExt, TryFutureExt, TryStreamExt,
 	future::{OptionFuture, join, join3, join4, join5, try_join, try_join4},
+	pin_mut,
 };
 use ruma::{
 	DeviceId, EventId, OwnedEventId, OwnedRoomId, OwnedUserId, RoomId, UserId,
@@ -433,10 +434,14 @@ async fn handle_left_room(
 		return Ok(None);
 	}

-	if !services.rooms.metadata.exists(room_id).await
-		|| services.rooms.metadata.is_disabled(room_id).await
-		|| services.rooms.metadata.is_banned(room_id).await
-	{
+	let is_not_found = services.rooms.metadata.exists(room_id).eq(&false);
+	let is_disabled = services.rooms.metadata.is_disabled(room_id);
+	let is_banned = services.rooms.metadata.is_banned(room_id);
+
+	pin_mut!(is_not_found, is_disabled, is_banned);
+	if is_not_found.or(is_disabled).or(is_banned).await {
 		// This is just a rejected invite, not a room we know
 		// Insert a leave event anyways for the client
 		let event = PduEvent {

View file

@@ -6,23 +6,27 @@ use std::{
 use axum::extract::State;
 use conduwuit::{
-	Error, PduCount, PduEvent, Result, debug, error, extract_variant,
+	Err, Error, PduCount, PduEvent, Result, debug, error, extract_variant,
+	matrix::TypeStateKey,
 	utils::{
 		BoolExt, IterStream, ReadyExt, TryFutureExtExt,
 		math::{ruma_from_usize, usize_from_ruma, usize_from_u64_truncated},
 	},
 	warn,
 };
+use conduwuit_service::{
+	Services,
+	rooms::read_receipt::pack_receipts,
+	sync::{into_db_key, into_snake_key},
+};
 use futures::{FutureExt, StreamExt, TryFutureExt};
 use ruma::{
 	MilliSecondsSinceUnixEpoch, OwnedEventId, OwnedRoomId, RoomId, UInt, UserId,
-	api::client::{
-		error::ErrorKind,
-		sync::sync_events::{
-			self, DeviceLists, UnreadNotificationsCount,
-			v4::{SlidingOp, SlidingSyncRoomHero},
-		},
+	api::client::sync::sync_events::{
+		self, DeviceLists, UnreadNotificationsCount,
+		v4::{SlidingOp, SlidingSyncRoomHero},
 	},
+	directory::RoomTypeFilter,
 	events::{
 		AnyRawAccountDataEvent, AnySyncEphemeralRoomEvent, StateEventType,
 		TimelineEventType::*,
@@ -31,15 +35,15 @@ use ruma::{
 	serde::Raw,
 	uint,
 };
-use service::rooms::read_receipt::pack_receipts;

 use super::{load_timeline, share_encrypted_room};
 use crate::{
 	Ruma,
-	client::{DEFAULT_BUMP_TYPES, filter_rooms, ignored_filter, sync::v5::TodoRooms},
+	client::{DEFAULT_BUMP_TYPES, ignored_filter},
 };

-pub(crate) const SINGLE_CONNECTION_SYNC: &str = "single_connection_sync";
+type TodoRooms = BTreeMap<OwnedRoomId, (BTreeSet<TypeStateKey>, usize, u64)>;
+
+const SINGLE_CONNECTION_SYNC: &str = "single_connection_sync";

 /// POST `/_matrix/client/unstable/org.matrix.msc3575/sync`
 ///
@@ -50,10 +54,11 @@ pub(crate) async fn sync_events_v4_route(
 ) -> Result<sync_events::v4::Response> {
 	debug_assert!(DEFAULT_BUMP_TYPES.is_sorted(), "DEFAULT_BUMP_TYPES is not sorted");
 	let sender_user = body.sender_user.as_ref().expect("user is authenticated");
-	let sender_device = body.sender_device.expect("user is authenticated");
+	let sender_device = body.sender_device.as_ref().expect("user is authenticated");
 	let mut body = body.body;

 	// Setup watchers, so if there's no response, we can wait for them
-	let watcher = services.sync.watch(sender_user, &sender_device);
+	let watcher = services.sync.watch(sender_user, sender_device);

 	let next_batch = services.globals.next_count()?;
@@ -68,33 +73,21 @@ pub(crate) async fn sync_events_v4_route(
 		.and_then(|string| string.parse().ok())
 		.unwrap_or(0);

-	if globalsince != 0
-		&& !services
-			.sync
-			.remembered(sender_user.clone(), sender_device.clone(), conn_id.clone())
-	{
+	let db_key = into_db_key(sender_user, sender_device, conn_id.clone());
+	if globalsince != 0 && !services.sync.remembered(&db_key) {
 		debug!("Restarting sync stream because it was gone from the database");
-		return Err(Error::Request(
-			ErrorKind::UnknownPos,
-			"Connection data lost since last time".into(),
-			http::StatusCode::BAD_REQUEST,
-		));
+		return Err!(Request(UnknownPos("Connection data lost since last time")));
 	}

 	if globalsince == 0 {
-		services.sync.forget_sync_request_connection(
-			sender_user.clone(),
-			sender_device.clone(),
-			conn_id.clone(),
-		);
+		services.sync.forget_sync_request_connection(&db_key);
 	}

 	// Get sticky parameters from cache
-	let known_rooms = services.sync.update_sync_request_with_cache(
-		sender_user.clone(),
-		sender_device.clone(),
-		&mut body,
-	);
+	let snake_key = into_snake_key(sender_user, sender_device, conn_id.clone());
+	let known_rooms = services
+		.sync
+		.update_sync_request_with_cache(&snake_key, &mut body);

 	let all_joined_rooms: Vec<_> = services
 		.rooms
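into_db_key and into_snake_key are not shown in this diff; presumably they bundle user, device, and connection ID into a single key value that remembered and forget_sync_request_connection index on, replacing the three cloned arguments. A purely hypothetical sketch of that shape:

    // Hypothetical key builder; the real signatures live in
    // conduwuit-service's sync module and may differ.
    type DbKey = (String, String, Option<String>);

    fn into_db_key(user_id: &str, device_id: &str, conn_id: Option<String>) -> DbKey {
        (user_id.to_owned(), device_id.to_owned(), conn_id)
    }

    fn main() {
        let key = into_db_key("@alice:example.org", "DEVICE", Some("conn-1".into()));
        assert_eq!(key.2.as_deref(), Some("conn-1"));
    }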
@ -136,7 +129,7 @@ pub(crate) async fn sync_events_v4_route(
if body.extensions.to_device.enabled.unwrap_or(false) { if body.extensions.to_device.enabled.unwrap_or(false) {
services services
.users .users
.remove_to_device_events(sender_user, &sender_device, globalsince) .remove_to_device_events(sender_user, sender_device, globalsince)
.await; .await;
} }
@ -261,7 +254,7 @@ pub(crate) async fn sync_events_v4_route(
if let Some(Ok(user_id)) = if let Some(Ok(user_id)) =
pdu.state_key.as_deref().map(UserId::parse) pdu.state_key.as_deref().map(UserId::parse)
{ {
if user_id == *sender_user { if user_id == sender_user {
continue; continue;
} }
@ -299,7 +292,7 @@ pub(crate) async fn sync_events_v4_route(
.state_cache .state_cache
.room_members(room_id) .room_members(room_id)
// Don't send key updates from the sender to the sender // Don't send key updates from the sender to the sender
.ready_filter(|user_id| sender_user != user_id) .ready_filter(|&user_id| sender_user != user_id)
// Only send keys if the sender doesn't share an encrypted room with the target // Only send keys if the sender doesn't share an encrypted room with the target
// already // already
.filter_map(|user_id| { .filter_map(|user_id| {
@ -425,10 +418,9 @@ pub(crate) async fn sync_events_v4_route(
}); });
if let Some(conn_id) = &body.conn_id { if let Some(conn_id) = &body.conn_id {
let db_key = into_db_key(sender_user, sender_device, conn_id);
services.sync.update_sync_known_rooms( services.sync.update_sync_known_rooms(
sender_user, &db_key,
&sender_device,
conn_id.clone(),
list_id.clone(), list_id.clone(),
new_known_rooms, new_known_rooms,
globalsince, globalsince,
@ -478,23 +470,20 @@ pub(crate) async fn sync_events_v4_route(
} }
if let Some(conn_id) = &body.conn_id { if let Some(conn_id) = &body.conn_id {
let db_key = into_db_key(sender_user, sender_device, conn_id);
services.sync.update_sync_known_rooms( services.sync.update_sync_known_rooms(
sender_user, &db_key,
&sender_device,
conn_id.clone(),
"subscriptions".to_owned(), "subscriptions".to_owned(),
known_subscription_rooms, known_subscription_rooms,
globalsince, globalsince,
); );
} }
if let Some(conn_id) = &body.conn_id { if let Some(conn_id) = body.conn_id.clone() {
services.sync.update_sync_subscriptions( let db_key = into_db_key(sender_user, sender_device, conn_id);
sender_user.clone(), services
sender_device.clone(), .sync
conn_id.clone(), .update_sync_subscriptions(&db_key, body.room_subscriptions);
body.room_subscriptions,
);
} }
let mut rooms = BTreeMap::new(); let mut rooms = BTreeMap::new();
@ -648,7 +637,7 @@ pub(crate) async fn sync_events_v4_route(
.rooms .rooms
.state_cache .state_cache
.room_members(room_id) .room_members(room_id)
.ready_filter(|member| member != sender_user) .ready_filter(|&member| member != sender_user)
.filter_map(|user_id| { .filter_map(|user_id| {
services services
.rooms .rooms
@ -787,7 +776,7 @@ pub(crate) async fn sync_events_v4_route(
.users .users
.get_to_device_events( .get_to_device_events(
sender_user, sender_user,
&sender_device, sender_device,
Some(globalsince), Some(globalsince),
Some(next_batch), Some(next_batch),
) )
@ -805,7 +794,7 @@ pub(crate) async fn sync_events_v4_route(
}, },
device_one_time_keys_count: services device_one_time_keys_count: services
.users .users
.count_one_time_keys(sender_user, &sender_device) .count_one_time_keys(sender_user, sender_device)
.await, .await,
// Fallback keys are not yet supported // Fallback keys are not yet supported
device_unused_fallback_key_types: None, device_unused_fallback_key_types: None,
@ -817,3 +806,33 @@ pub(crate) async fn sync_events_v4_route(
delta_token: None, delta_token: None,
}) })
} }
async fn filter_rooms<'a>(
services: &Services,
rooms: &[&'a RoomId],
filter: &[RoomTypeFilter],
negate: bool,
) -> Vec<&'a RoomId> {
rooms
.iter()
.stream()
.filter_map(|r| async move {
let room_type = services.rooms.state_accessor.get_room_type(r).await;
if room_type.as_ref().is_err_and(|e| !e.is_not_found()) {
return None;
}
let room_type_filter = RoomTypeFilter::from(room_type.ok());
let include = if negate {
!filter.contains(&room_type_filter)
} else {
filter.is_empty() || filter.contains(&room_type_filter)
};
include.then_some(r)
})
.collect()
.await
}
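The v4 changes above fold the (user, device, conn_id) triple into a single cache key via `into_db_key` before touching the sync-connection cache. A minimal, standalone sketch of that keying pattern follows; `DbKey` and `ConnectionCache` here are illustrative stand-ins, not the real `conduwuit_service::sync` types:

use std::collections::HashMap;

// (user_id, device_id, conn_id) folded into one lookup key.
type DbKey = (String, String, Option<String>);

fn into_db_key(user: &str, device: &str, conn_id: Option<String>) -> DbKey {
    (user.to_owned(), device.to_owned(), conn_id)
}

struct ConnectionCache(HashMap<DbKey, u64 /* last acked since-token */>);

impl ConnectionCache {
    // With a single key type, `remembered`, `forget` and any update path
    // can never disagree about which connection they refer to.
    fn remembered(&self, key: &DbKey) -> bool { self.0.contains_key(key) }
    fn forget(&mut self, key: &DbKey) { self.0.remove(key); }
}

fn main() {
    let mut cache = ConnectionCache(HashMap::new());
    let key = into_db_key("@alice:example.org", "DEVICE", Some("conn".into()));
    assert!(!cache.remembered(&key)); // unknown connection: client must restart the stream
    cache.0.insert(key.clone(), 42);
    assert!(cache.remembered(&key));
    cache.forget(&key); // initial sync (since == 0) drops any stale state
}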

View file

@@ -1,31 +1,35 @@
 use std::{
 	cmp::{self, Ordering},
 	collections::{BTreeMap, BTreeSet, HashMap, HashSet},
+	ops::Deref,
 	time::Duration,
 };

 use axum::extract::State;
 use conduwuit::{
-	Error, Result, debug, error, extract_variant,
+	Err, Error, Result, error, extract_variant, is_equal_to,
 	matrix::{
 		TypeStateKey,
 		pdu::{PduCount, PduEvent},
 	},
 	trace,
 	utils::{
-		BoolExt, IterStream, ReadyExt, TryFutureExtExt,
+		BoolExt, FutureBoolExt, IterStream, ReadyExt, TryFutureExtExt,
+		future::ReadyEqExt,
 		math::{ruma_from_usize, usize_from_ruma},
 	},
 	warn,
 };
-use conduwuit_service::rooms::read_receipt::pack_receipts;
-use futures::{FutureExt, StreamExt, TryFutureExt};
+use conduwuit_service::{Services, rooms::read_receipt::pack_receipts, sync::into_snake_key};
+use futures::{
+	FutureExt, Stream, StreamExt, TryFutureExt,
+	future::{OptionFuture, join3, try_join4},
+	pin_mut,
+};
 use ruma::{
 	DeviceId, OwnedEventId, OwnedRoomId, RoomId, UInt, UserId,
-	api::client::{
-		error::ErrorKind,
-		sync::sync_events::{self, DeviceLists, UnreadNotificationsCount},
-	},
+	api::client::sync::sync_events::{self, DeviceLists, UnreadNotificationsCount},
+	directory::RoomTypeFilter,
 	events::{
 		AnyRawAccountDataEvent, AnySyncEphemeralRoomEvent, StateEventType, TimelineEventType,
 		room::member::{MembershipState, RoomMemberEventContent},
@@ -34,13 +38,15 @@ use ruma::{
 	uint,
 };

-use super::{filter_rooms, share_encrypted_room};
+use super::share_encrypted_room;
 use crate::{
 	Ruma,
 	client::{DEFAULT_BUMP_TYPES, ignored_filter, sync::load_timeline},
 };

 type SyncInfo<'a> = (&'a UserId, &'a DeviceId, u64, &'a sync_events::v5::Request);
+type TodoRooms = BTreeMap<OwnedRoomId, (BTreeSet<TypeStateKey>, usize, u64)>;
+type KnownRooms = BTreeMap<String, BTreeMap<OwnedRoomId, u64>>;

 /// `POST /_matrix/client/unstable/org.matrix.simplified_msc3575/sync`
 /// ([MSC4186])
@@ -53,7 +59,7 @@ type SyncInfo<'a> = (&'a UserId, &'a DeviceId, u64, &'a sync_events::v5::Request
 /// [MSC3575]: https://github.com/matrix-org/matrix-spec-proposals/pull/3575
 /// [MSC4186]: https://github.com/matrix-org/matrix-spec-proposals/pull/4186
 pub(crate) async fn sync_events_v5_route(
-	State(services): State<crate::State>,
+	State(ref services): State<crate::State>,
 	body: Ruma<sync_events::v5::Request>,
 ) -> Result<sync_events::v5::Response> {
 	debug_assert!(DEFAULT_BUMP_TYPES.is_sorted(), "DEFAULT_BUMP_TYPES is not sorted");
@@ -74,95 +80,95 @@ pub(crate) async fn sync_events_v5_route(
 		.and_then(|string| string.parse().ok())
 		.unwrap_or(0);

-	if globalsince != 0
-		&& !services.sync.snake_connection_cached(
-			sender_user.clone(),
-			sender_device.clone(),
-			conn_id.clone(),
-		) {
-		debug!("Restarting sync stream because it was gone from the database");
-		return Err(Error::Request(
-			ErrorKind::UnknownPos,
-			"Connection data lost since last time".into(),
-			http::StatusCode::BAD_REQUEST,
-		));
+	let snake_key = into_snake_key(sender_user, sender_device, conn_id);
+
+	if globalsince != 0 && !services.sync.snake_connection_cached(&snake_key) {
+		return Err!(Request(UnknownPos(
+			"Connection data unknown to server; restarting sync stream."
+		)));
 	}

 	// Client / User requested an initial sync
 	if globalsince == 0 {
-		services.sync.forget_snake_sync_connection(
-			sender_user.clone(),
-			sender_device.clone(),
-			conn_id.clone(),
-		);
+		services.sync.forget_snake_sync_connection(&snake_key);
 	}

 	// Get sticky parameters from cache
-	let known_rooms = services.sync.update_snake_sync_request_with_cache(
-		sender_user.clone(),
-		sender_device.clone(),
-		&mut body,
-	);
+	let known_rooms = services
+		.sync
+		.update_snake_sync_request_with_cache(&snake_key, &mut body);

-	let all_joined_rooms: Vec<_> = services
+	let all_joined_rooms = services
 		.rooms
 		.state_cache
 		.rooms_joined(sender_user)
 		.map(ToOwned::to_owned)
-		.collect()
-		.await;
+		.collect::<Vec<OwnedRoomId>>();

-	let all_invited_rooms: Vec<_> = services
+	let all_invited_rooms = services
 		.rooms
 		.state_cache
 		.rooms_invited(sender_user)
 		.map(|r| r.0)
-		.collect()
-		.await;
+		.collect::<Vec<OwnedRoomId>>();

-	let all_knocked_rooms: Vec<_> = services
+	let all_knocked_rooms = services
 		.rooms
 		.state_cache
 		.rooms_knocked(sender_user)
 		.map(|r| r.0)
-		.collect()
-		.await;
+		.collect::<Vec<OwnedRoomId>>();

-	let all_rooms: Vec<&RoomId> = all_joined_rooms
-		.iter()
-		.map(AsRef::as_ref)
-		.chain(all_invited_rooms.iter().map(AsRef::as_ref))
-		.chain(all_knocked_rooms.iter().map(AsRef::as_ref))
-		.collect();
+	let (all_joined_rooms, all_invited_rooms, all_knocked_rooms) =
+		join3(all_joined_rooms, all_invited_rooms, all_knocked_rooms).await;

-	let all_joined_rooms = all_joined_rooms.iter().map(AsRef::as_ref).collect();
-	let all_invited_rooms = all_invited_rooms.iter().map(AsRef::as_ref).collect();
+	let all_joined_rooms = all_joined_rooms.iter().map(AsRef::as_ref);
+	let all_invited_rooms = all_invited_rooms.iter().map(AsRef::as_ref);
+	let all_knocked_rooms = all_knocked_rooms.iter().map(AsRef::as_ref);
+	let all_rooms = all_joined_rooms
+		.clone()
+		.chain(all_invited_rooms.clone())
+		.chain(all_knocked_rooms.clone());

 	let pos = next_batch.clone().to_string();

 	let mut todo_rooms: TodoRooms = BTreeMap::new();

 	let sync_info: SyncInfo<'_> = (sender_user, sender_device, globalsince, &body);
+
+	let account_data = collect_account_data(services, sync_info).map(Ok);
+	let e2ee = collect_e2ee(services, sync_info, all_joined_rooms.clone());
+	let to_device = collect_to_device(services, sync_info, next_batch).map(Ok);
+	let receipts = collect_receipts(services).map(Ok);
+	let (account_data, e2ee, to_device, receipts) =
+		try_join4(account_data, e2ee, to_device, receipts).await?;
+
+	let extensions = sync_events::v5::response::Extensions {
+		account_data,
+		e2ee,
+		to_device,
+		receipts,
+		typing: sync_events::v5::response::Typing::default(),
+	};
+
 	let mut response = sync_events::v5::Response {
 		txn_id: body.txn_id.clone(),
 		pos,
 		lists: BTreeMap::new(),
 		rooms: BTreeMap::new(),
-		extensions: sync_events::v5::response::Extensions {
-			account_data: collect_account_data(services, sync_info).await,
-			e2ee: collect_e2ee(services, sync_info, &all_joined_rooms).await?,
-			to_device: collect_to_device(services, sync_info, next_batch).await,
-			receipts: collect_receipts(services).await,
-			typing: sync_events::v5::response::Typing::default(),
-		},
+		extensions,
 	};

 	handle_lists(
 		services,
 		sync_info,
-		&all_invited_rooms,
-		&all_joined_rooms,
-		&all_rooms,
+		all_invited_rooms.clone(),
+		all_joined_rooms.clone(),
+		all_rooms,
 		&mut todo_rooms,
 		&known_rooms,
 		&mut response,
@@ -175,7 +181,7 @@ pub(crate) async fn sync_events_v5_route(
 		services,
 		sender_user,
 		next_batch,
-		&all_invited_rooms,
+		all_invited_rooms.clone(),
 		&todo_rooms,
 		&mut response,
 		&body,
@@ -200,31 +206,33 @@ pub(crate) async fn sync_events_v5_route(
 	}

 	trace!(
-		rooms=?response.rooms.len(),
-		account_data=?response.extensions.account_data.rooms.len(),
-		receipts=?response.extensions.receipts.rooms.len(),
+		rooms = ?response.rooms.len(),
+		account_data = ?response.extensions.account_data.rooms.len(),
+		receipts = ?response.extensions.receipts.rooms.len(),
 		"responding to request with"
 	);

 	Ok(response)
 }

-type KnownRooms = BTreeMap<String, BTreeMap<OwnedRoomId, u64>>;
-pub(crate) type TodoRooms = BTreeMap<OwnedRoomId, (BTreeSet<TypeStateKey>, usize, u64)>;
-
 async fn fetch_subscriptions(
-	services: crate::State,
+	services: &Services,
 	(sender_user, sender_device, globalsince, body): SyncInfo<'_>,
 	known_rooms: &KnownRooms,
 	todo_rooms: &mut TodoRooms,
 ) {
 	let mut known_subscription_rooms = BTreeSet::new();
 	for (room_id, room) in &body.room_subscriptions {
-		if !services.rooms.metadata.exists(room_id).await
-			|| services.rooms.metadata.is_disabled(room_id).await
-			|| services.rooms.metadata.is_banned(room_id).await
-		{
+		let not_exists = services.rooms.metadata.exists(room_id).eq(&false);
+		let is_disabled = services.rooms.metadata.is_disabled(room_id);
+		let is_banned = services.rooms.metadata.is_banned(room_id);
+
+		pin_mut!(not_exists, is_disabled, is_banned);
+		if not_exists.or(is_disabled).or(is_banned).await {
 			continue;
 		}

 		let todo_room =
 			todo_rooms
 				.entry(room_id.clone())
@@ -254,11 +262,10 @@ async fn fetch_subscriptions(
 	//	body.room_subscriptions.remove(&r);
 	//}

-	if let Some(conn_id) = &body.conn_id {
+	if let Some(conn_id) = body.conn_id.clone() {
+		let snake_key = into_snake_key(sender_user, sender_device, conn_id);
 		services.sync.update_snake_sync_known_rooms(
-			sender_user,
-			sender_device,
-			conn_id.clone(),
+			&snake_key,
 			"subscriptions".to_owned(),
 			known_subscription_rooms,
 			globalsince,
@@ -267,27 +274,39 @@ async fn fetch_subscriptions(
 }

 #[allow(clippy::too_many_arguments)]
-async fn handle_lists<'a>(
-	services: crate::State,
+async fn handle_lists<'a, Rooms, AllRooms>(
+	services: &Services,
 	(sender_user, sender_device, globalsince, body): SyncInfo<'_>,
-	all_invited_rooms: &Vec<&'a RoomId>,
-	all_joined_rooms: &Vec<&'a RoomId>,
-	all_rooms: &Vec<&'a RoomId>,
+	all_invited_rooms: Rooms,
+	all_joined_rooms: Rooms,
+	all_rooms: AllRooms,
 	todo_rooms: &'a mut TodoRooms,
 	known_rooms: &'a KnownRooms,
 	response: &'_ mut sync_events::v5::Response,
-) -> KnownRooms {
+) -> KnownRooms
+where
+	Rooms: Iterator<Item = &'a RoomId> + Clone + Send + 'a,
+	AllRooms: Iterator<Item = &'a RoomId> + Clone + Send + 'a,
+{
 	for (list_id, list) in &body.lists {
-		let active_rooms = match list.filters.clone().and_then(|f| f.is_invite) {
-			| Some(true) => all_invited_rooms,
-			| Some(false) => all_joined_rooms,
-			| None => all_rooms,
+		let active_rooms: Vec<_> = match list.filters.as_ref().and_then(|f| f.is_invite) {
+			| None => all_rooms.clone().collect(),
+			| Some(true) => all_invited_rooms.clone().collect(),
+			| Some(false) => all_joined_rooms.clone().collect(),
 		};

-		let active_rooms = match list.filters.clone().map(|f| f.not_room_types) {
-			| Some(filter) if filter.is_empty() => active_rooms,
-			| Some(value) => &filter_rooms(&services, active_rooms, &value, true).await,
+		let active_rooms = match list.filters.as_ref().map(|f| &f.not_room_types) {
 			| None => active_rooms,
+			| Some(filter) if filter.is_empty() => active_rooms,
+			| Some(value) =>
+				filter_rooms(
+					services,
+					value,
+					&true,
+					active_rooms.iter().stream().map(Deref::deref),
+				)
+				.collect()
+				.await,
 		};

 		let mut new_known_rooms: BTreeSet<OwnedRoomId> = BTreeSet::new();
@@ -305,6 +324,7 @@ async fn handle_lists<'a>(
 			let new_rooms: BTreeSet<OwnedRoomId> =
 				room_ids.clone().into_iter().map(From::from).collect();
 			new_known_rooms.extend(new_rooms);
+
 			//new_known_rooms.extend(room_ids..cloned());
 			for room_id in room_ids {
@@ -340,29 +360,32 @@ async fn handle_lists<'a>(
 			count: ruma_from_usize(active_rooms.len()),
 		});

-		if let Some(conn_id) = &body.conn_id {
+		if let Some(conn_id) = body.conn_id.clone() {
+			let snake_key = into_snake_key(sender_user, sender_device, conn_id);
 			services.sync.update_snake_sync_known_rooms(
-				sender_user,
-				sender_device,
-				conn_id.clone(),
+				&snake_key,
 				list_id.clone(),
 				new_known_rooms,
 				globalsince,
 			);
 		}
 	}

 	BTreeMap::default()
 }

-async fn process_rooms(
-	services: crate::State,
+async fn process_rooms<'a, Rooms>(
+	services: &Services,
 	sender_user: &UserId,
 	next_batch: u64,
-	all_invited_rooms: &[&RoomId],
+	all_invited_rooms: Rooms,
 	todo_rooms: &TodoRooms,
 	response: &mut sync_events::v5::Response,
 	body: &sync_events::v5::Request,
-) -> Result<BTreeMap<OwnedRoomId, sync_events::v5::response::Room>> {
+) -> Result<BTreeMap<OwnedRoomId, sync_events::v5::response::Room>>
+where
+	Rooms: Iterator<Item = &'a RoomId> + Clone + Send + 'a,
+{
 	let mut rooms = BTreeMap::new();
 	for (room_id, (required_state_request, timeline_limit, roomsince)) in todo_rooms {
 		let roomsincecount = PduCount::Normal(*roomsince);
@@ -371,7 +394,7 @@ async fn process_rooms(
 		let mut invite_state = None;
 		let (timeline_pdus, limited);
 		let new_room_id: &RoomId = (*room_id).as_ref();
-		if all_invited_rooms.contains(&new_room_id) {
+		if all_invited_rooms.clone().any(is_equal_to!(new_room_id)) {
 			// TODO: figure out a timestamp we can use for remote invites
 			invite_state = services
 				.rooms
@@ -383,7 +406,7 @@ async fn process_rooms(
 			(timeline_pdus, limited) = (Vec::new(), true);
 		} else {
 			(timeline_pdus, limited) = match load_timeline(
-				&services,
+				services,
 				sender_user,
 				room_id,
 				roomsincecount,
@@ -416,18 +439,17 @@ async fn process_rooms(
 			.rooms
 			.read_receipt
 			.last_privateread_update(sender_user, room_id)
-			.await > *roomsince;
+			.await;

-		let private_read_event = if last_privateread_update {
-			services
-				.rooms
-				.read_receipt
-				.private_read_get(room_id, sender_user)
-				.await
-				.ok()
-		} else {
-			None
-		};
+		let private_read_event: OptionFuture<_> = (last_privateread_update > *roomsince)
+			.then(|| {
+				services
+					.rooms
+					.read_receipt
+					.private_read_get(room_id, sender_user)
+					.ok()
+			})
+			.into();

 		let mut receipts: Vec<Raw<AnySyncEphemeralRoomEvent>> = services
 			.rooms
@@ -443,7 +465,7 @@ async fn process_rooms(
 			.collect()
 			.await;

-		if let Some(private_read_event) = private_read_event {
+		if let Some(private_read_event) = private_read_event.await.flatten() {
 			receipts.push(private_read_event);
 		}
@@ -492,7 +514,7 @@ async fn process_rooms(
 		let room_events: Vec<_> = timeline_pdus
 			.iter()
 			.stream()
-			.filter_map(|item| ignored_filter(&services, item.clone(), sender_user))
+			.filter_map(|item| ignored_filter(services, item.clone(), sender_user))
 			.map(|(_, pdu)| pdu.to_sync_room_event())
 			.collect()
 			.await;
@@ -644,7 +666,7 @@ async fn process_rooms(
 	Ok(rooms)
 }

 async fn collect_account_data(
-	services: crate::State,
+	services: &Services,
 	(sender_user, _, globalsince, body): (&UserId, &DeviceId, u64, &sync_events::v5::Request),
 ) -> sync_events::v5::response::AccountData {
 	let mut account_data = sync_events::v5::response::AccountData {
@@ -680,16 +702,19 @@ async fn collect_account_data(
 	account_data
 }

-async fn collect_e2ee<'a>(
-	services: crate::State,
+async fn collect_e2ee<'a, Rooms>(
+	services: &Services,
 	(sender_user, sender_device, globalsince, body): (
 		&UserId,
 		&DeviceId,
 		u64,
 		&sync_events::v5::Request,
 	),
-	all_joined_rooms: &'a Vec<&'a RoomId>,
-) -> Result<sync_events::v5::response::E2EE> {
+	all_joined_rooms: Rooms,
+) -> Result<sync_events::v5::response::E2EE>
+where
+	Rooms: Iterator<Item = &'a RoomId> + Send + 'a,
+{
 	if !body.extensions.e2ee.enabled.unwrap_or(false) {
 		return Ok(sync_events::v5::response::E2EE::default());
 	}
@@ -790,7 +815,7 @@ async fn collect_e2ee<'a>(
 					| MembershipState::Join => {
 						// A new user joined an encrypted room
 						if !share_encrypted_room(
-							&services,
+							services,
 							sender_user,
 							user_id,
 							Some(room_id),
@@ -823,7 +848,7 @@ async fn collect_e2ee<'a>(
 				// Only send keys if the sender doesn't share an encrypted room with the target
 				// already
 				.filter_map(|user_id| {
-					share_encrypted_room(&services, sender_user, user_id, Some(room_id))
+					share_encrypted_room(services, sender_user, user_id, Some(room_id))
 						.map(|res| res.or_some(user_id.to_owned()))
 				})
 				.collect::<Vec<_>>()
@@ -846,7 +871,7 @@ async fn collect_e2ee<'a>(
 	for user_id in left_encrypted_users {
 		let dont_share_encrypted_room =
-			!share_encrypted_room(&services, sender_user, &user_id, None).await;
+			!share_encrypted_room(services, sender_user, &user_id, None).await;

 		// If the user doesn't share an encrypted room with the target anymore, we need
 		// to tell them
@@ -856,20 +881,22 @@ async fn collect_e2ee<'a>(
 	}

 	Ok(sync_events::v5::response::E2EE {
-		device_lists: DeviceLists {
-			changed: device_list_changes.into_iter().collect(),
-			left: device_list_left.into_iter().collect(),
-		},
+		device_unused_fallback_key_types: None,

 		device_one_time_keys_count: services
 			.users
 			.count_one_time_keys(sender_user, sender_device)
 			.await,

-		device_unused_fallback_key_types: None,
+		device_lists: DeviceLists {
+			changed: device_list_changes.into_iter().collect(),
+			left: device_list_left.into_iter().collect(),
+		},
 	})
 }

 async fn collect_to_device(
-	services: crate::State,
+	services: &Services,
 	(sender_user, sender_device, globalsince, body): SyncInfo<'_>,
 	next_batch: u64,
 ) -> Option<sync_events::v5::response::ToDevice> {
@@ -892,7 +919,35 @@ async fn collect_to_device(
 	})
 }

-async fn collect_receipts(_services: crate::State) -> sync_events::v5::response::Receipts {
+async fn collect_receipts(_services: &Services) -> sync_events::v5::response::Receipts {
 	sync_events::v5::response::Receipts { rooms: BTreeMap::new() }
 	// TODO: get explicitly requested read receipts
 }
+
+fn filter_rooms<'a, Rooms>(
+	services: &'a Services,
+	filter: &'a [RoomTypeFilter],
+	negate: &'a bool,
+	rooms: Rooms,
+) -> impl Stream<Item = &'a RoomId> + Send + 'a
+where
+	Rooms: Stream<Item = &'a RoomId> + Send + 'a,
+{
+	rooms.filter_map(async |room_id| {
+		let room_type = services.rooms.state_accessor.get_room_type(room_id).await;
+
+		if room_type.as_ref().is_err_and(|e| !e.is_not_found()) {
+			return None;
+		}
+
+		let room_type_filter = RoomTypeFilter::from(room_type.ok());
+
+		let include = if *negate {
+			!filter.contains(&room_type_filter)
+		} else {
+			filter.is_empty() || filter.contains(&room_type_filter)
+		};
+
+		include.then_some(room_id)
+	})
+}
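The v5 route now builds its room lists and extension futures up front and awaits them together with `join3`/`try_join4` instead of one `.await` at a time. A standalone sketch of that pattern, assuming only the `futures` crate; the async functions are placeholders:

use futures::{
    executor::block_on,
    future::{join3, try_join4},
};

async fn joined() -> Vec<&'static str> { vec!["!a:x"] }
async fn invited() -> Vec<&'static str> { vec!["!b:x"] }
async fn knocked() -> Vec<&'static str> { vec!["!c:x"] }
async fn ok(n: u8) -> Result<u8, ()> { Ok(n) }

fn main() {
    // All three lists are driven concurrently; the result arrives as a tuple.
    let (j, i, k) = block_on(join3(joined(), invited(), knocked()));
    assert_eq!((j.len(), i.len(), k.len()), (1, 1, 1));

    // try_join4 short-circuits on the first Err, mirroring the `?` after
    // `try_join4(...).await` in the route above.
    let all = block_on(try_join4(ok(1), ok(2), ok(3), ok(4)));
    assert_eq!(all, Ok((1, 2, 3, 4)));
}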

View file

@@ -1,7 +1,10 @@
 use axum::extract::State;
 use conduwuit::{
 	Result,
-	utils::{future::BoolExt, stream::BroadbandExt},
+	utils::{
+		future::BoolExt,
+		stream::{BroadbandExt, ReadyExt},
+	},
 };
 use futures::{FutureExt, StreamExt, pin_mut};
 use ruma::{
@@ -30,29 +33,21 @@ pub(crate) async fn search_users_route(
 		.map_or(LIMIT_DEFAULT, usize::from)
 		.min(LIMIT_MAX);

+	let search_term = body.search_term.to_lowercase();
 	let mut users = services
 		.users
 		.stream()
+		.ready_filter(|user_id| user_id.as_str().to_lowercase().contains(&search_term))
 		.map(ToOwned::to_owned)
 		.broad_filter_map(async |user_id| {
-			let user = search_users::v3::User {
-				user_id: user_id.clone(),
-				display_name: services.users.displayname(&user_id).await.ok(),
-				avatar_url: services.users.avatar_url(&user_id).await.ok(),
-			};
+			let display_name = services.users.displayname(&user_id).await.ok();

-			let user_id_matches = user
-				.user_id
-				.as_str()
-				.to_lowercase()
-				.contains(&body.search_term.to_lowercase());
+			let display_name_matches = display_name
+				.as_deref()
+				.map(str::to_lowercase)
+				.is_some_and(|display_name| display_name.contains(&search_term));

-			let user_displayname_matches = user.display_name.as_ref().is_some_and(|name| {
-				name.to_lowercase()
-					.contains(&body.search_term.to_lowercase())
-			});
-
-			if !user_id_matches && !user_displayname_matches {
+			if !display_name_matches {
 				return None;
 			}
@@ -61,11 +56,11 @@ pub(crate) async fn search_users_route(
 				.state_cache
 				.rooms_joined(&user_id)
 				.map(ToOwned::to_owned)
-				.any(|room| async move {
+				.broad_any(async |room_id| {
 					services
 						.rooms
 						.state_accessor
-						.get_join_rules(&room)
+						.get_join_rules(&room_id)
 						.map(|rule| matches!(rule, JoinRule::Public))
 						.await
 				});
@@ -76,8 +71,14 @@ pub(crate) async fn search_users_route(
 				.user_sees_user(sender_user, &user_id);

 			pin_mut!(user_in_public_room, user_sees_user);

-			user_in_public_room.or(user_sees_user).await.then_some(user)
+			user_in_public_room
+				.or(user_sees_user)
+				.await
+				.then_some(search_users::v3::User {
+					user_id: user_id.clone(),
+					display_name,
+					avatar_url: services.users.avatar_url(&user_id).await.ok(),
+				})
 		});

 	let results = users.by_ref().take(limit).collect().await;
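The rewrite hoists the `to_lowercase()` of the search term out of the per-user closure, so it runs once per request instead of once per candidate. A reduced sketch of the same optimization:

fn search<'a>(users: &'a [&'a str], term: &str) -> Vec<&'a str> {
    let term = term.to_lowercase(); // computed once, outside the filter
    users
        .iter()
        .copied()
        .filter(|u| u.to_lowercase().contains(&term))
        .collect()
}

fn main() {
    let users = ["@Alice:example.org", "@bob:example.org"];
    assert_eq!(search(&users, "ALICE"), ["@Alice:example.org"]);
}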

View file

@@ -17,17 +17,24 @@ crate-type = [
 ]

 [features]
-release_max_log_level = [
-	"tracing/max_level_trace",
-	"tracing/release_max_level_info",
-	"log/max_level_trace",
-	"log/release_max_level_info",
+brotli_compression = [
+	"reqwest/brotli",
+]
+conduwuit_mods = [
+	"dep:libloading"
+]
+gzip_compression = [
+	"reqwest/gzip",
+]
+hardened_malloc = [
+	"dep:hardened_malloc-rs"
 ]
 jemalloc = [
 	"dep:tikv-jemalloc-sys",
 	"dep:tikv-jemalloc-ctl",
 	"dep:tikv-jemallocator",
 ]
+jemalloc_conf = []
 jemalloc_prof = [
 	"tikv-jemalloc-sys/profiling",
 ]
@@ -36,24 +43,17 @@ jemalloc_stats = [
 	"tikv-jemalloc-ctl/stats",
 	"tikv-jemallocator/stats",
 ]
-jemalloc_conf = []
-hardened_malloc = [
-	"dep:hardened_malloc-rs"
-]
-gzip_compression = [
-	"reqwest/gzip",
-]
-brotli_compression = [
-	"reqwest/brotli",
-]
+perf_measurements = []
+release_max_log_level = [
+	"tracing/max_level_trace",
+	"tracing/release_max_level_info",
+	"log/max_level_trace",
+	"log/release_max_level_info",
+]
+sentry_telemetry = []
 zstd_compression = [
 	"reqwest/zstd",
 ]
-perf_measurements = []
-sentry_telemetry = []
-conduwuit_mods = [
-	"dep:libloading"
-]

 [dependencies]
 argon2.workspace = true
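The `[features]` table above only declares the (now alphabetized) flags; code opts in at compile time with `cfg`. A generic sketch using one feature name from the table; the function itself is illustrative, not from the crate:

#[cfg(feature = "jemalloc")]
fn allocator_name() -> &'static str { "jemalloc" }

#[cfg(not(feature = "jemalloc"))]
fn allocator_name() -> &'static str { "system" }

fn main() {
    // Selected at build time, e.g. `cargo build --features jemalloc`.
    println!("allocating with {}", allocator_name());
}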

View file

@@ -160,16 +160,6 @@ pub struct Config {
 	#[serde(default = "default_new_user_displayname_suffix")]
 	pub new_user_displayname_suffix: String,

-	/// If enabled, conduwuit will send a simple GET request periodically to
-	/// `https://pupbrain.dev/check-for-updates/stable` for any new
-	/// announcements made. Despite the name, this is not an update check
-	/// endpoint, it is simply an announcement check endpoint.
-	///
-	/// This is disabled by default as this is rarely used except for security
-	/// updates or major updates.
-	#[serde(default, alias = "allow_announcements_check")]
-	pub allow_check_for_updates: bool,
-
 	/// Set this to any float value to multiply conduwuit's in-memory LRU caches
 	/// with such as "auth_chain_cache_capacity".
 	///
@@ -1133,8 +1123,8 @@ pub struct Config {
 	#[serde(default = "true_fn")]
 	pub rocksdb_compaction_ioprio_idle: bool,

-	/// Disables RocksDB compaction. You should never ever have to set this
-	/// option to true. If you for some reason find yourself needing to use this
+	/// Enables RocksDB compaction. You should never ever have to set this
+	/// option to false. If you for some reason find yourself needing to use this
 	/// option as part of troubleshooting or a bug, please reach out to us in
 	/// the conduwuit Matrix room with information and details.
 	///
@@ -1636,7 +1626,7 @@ pub struct Config {
 	/// Sentry reporting URL, if a custom one is desired.
 	///
 	/// display: sensitive
-	/// default: "https://fe2eb4536aa04949e28eff3128d64757@o4506996327251968.ingest.us.sentry.io/4506996334657536"
+	/// default: ""
 	#[serde(default = "default_sentry_endpoint")]
 	pub sentry_endpoint: Option<Url>,
@@ -2207,9 +2197,7 @@ fn default_url_preview_max_spider_size() -> usize {
 fn default_new_user_displayname_suffix() -> String { "🏳️‍⚧️".to_owned() }

-fn default_sentry_endpoint() -> Option<Url> {
-	Url::parse("https://fe2eb4536aa04949e28eff3128d64757@o4506996327251968.ingest.us.sentry.io/4506996334657536").ok()
-}
+fn default_sentry_endpoint() -> Option<Url> { None }

 fn default_sentry_traces_sample_rate() -> f32 { 0.15 }
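`default_sentry_endpoint` is wired in through serde's `default = "..."` hook, so dropping the hard-coded DSN only changes what an omitted config key deserializes to. A minimal sketch of the mechanism, assuming `serde` (with derive) and `serde_json` are available:

use serde::Deserialize;

fn default_sentry_endpoint() -> Option<String> { None }

#[derive(Deserialize)]
struct Config {
    #[serde(default = "default_sentry_endpoint")]
    sentry_endpoint: Option<String>,
}

fn main() {
    // The key is absent, so serde calls the default fn instead of erroring.
    let cfg: Config = serde_json::from_str("{}").unwrap();
    assert!(cfg.sentry_endpoint.is_none());
}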

View file

@@ -12,6 +12,7 @@ pub use crate::{result::DebugInspect, utils::debug::*};
 /// Log event at given level in debug-mode (when debug-assertions are enabled).
 /// In release-mode it becomes DEBUG level, and possibly subject to elision.
 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! debug_event {
 	( $level:expr_2021, $($x:tt)+ ) => {
 		if $crate::debug::logging() {

View file

@@ -33,6 +33,7 @@
 //! option of replacing `error!` with `debug_error!`.

 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! Err {
 	($($args:tt)*) => {
 		Err($crate::err!($($args)*))
@@ -40,6 +41,7 @@ macro_rules! Err {
 }

 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! err {
 	(Request(Forbidden($level:ident!($($args:tt)+)))) => {{
 		let mut buf = String::new();
@@ -109,6 +111,7 @@ macro_rules! err {
 /// can share the same callsite metadata for the source of our Error and the
 /// associated logging and tracing event dispatches.
 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! err_log {
 	($out:ident, $level:ident, $($fields:tt)+) => {{
 		use $crate::tracing::{
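`#[collapse_debuginfo(yes)]`, stable for `macro_rules!` since Rust 1.79, attributes the expanded code's debug info to the call site, so backtraces and log locations from these macros point at the caller instead of at the macro definition. A self-contained sketch:

#[macro_export]
#[collapse_debuginfo(yes)]
macro_rules! my_log {
    ($($arg:tt)*) => {
        // With collapsed debuginfo, a backtrace through this expansion
        // attributes these lines to the caller of my_log!.
        println!($($arg)*)
    };
}

fn main() {
    my_log!("hello from the call site");
}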

View file

@@ -31,7 +31,7 @@ const ROUTER_MANIFEST: &'static str = ();
 #[cargo_manifest(crate = "main")]
 const MAIN_MANIFEST: &'static str = ();

-/// Processed list of features access all project crates. This is generated from
+/// Processed list of features across all project crates. This is generated from
 /// the data in the MANIFEST strings and contains all possible project features.
 /// For *enabled* features see the info::rustc module instead.
 static FEATURES: OnceLock<Vec<String>> = OnceLock::new();

View file

@@ -7,7 +7,7 @@
 use std::sync::OnceLock;

-static BRANDING: &str = "conduwuit";
+static BRANDING: &str = "continuwuity";
 static SEMANTIC: &str = env!("CARGO_PKG_VERSION");

 static VERSION: OnceLock<String> = OnceLock::new();

View file

@@ -33,6 +33,7 @@ pub struct Log {
 // the crate namespace like these.

 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! event {
 	( $level:expr_2021, $($x:tt)+ ) => { ::tracing::event!( $level, $($x)+ ) }
 }

View file

@@ -2,7 +2,6 @@ use std::{
 	borrow::Borrow,
 	fmt::{Debug, Display},
 	hash::Hash,
-	sync::Arc,
 };

 use ruma::{EventId, MilliSecondsSinceUnixEpoch, RoomId, UserId, events::TimelineEventType};
@@ -72,31 +71,3 @@ impl<T: Event> Event for &T {

 	fn redacts(&self) -> Option<&Self::Id> { (*self).redacts() }
 }
-
-impl<T: Event> Event for Arc<T> {
-	type Id = T::Id;
-
-	fn event_id(&self) -> &Self::Id { (**self).event_id() }
-
-	fn room_id(&self) -> &RoomId { (**self).room_id() }
-
-	fn sender(&self) -> &UserId { (**self).sender() }
-
-	fn origin_server_ts(&self) -> MilliSecondsSinceUnixEpoch { (**self).origin_server_ts() }
-
-	fn event_type(&self) -> &TimelineEventType { (**self).event_type() }
-
-	fn content(&self) -> &RawJsonValue { (**self).content() }
-
-	fn state_key(&self) -> Option<&str> { (**self).state_key() }
-
-	fn prev_events(&self) -> impl DoubleEndedIterator<Item = &Self::Id> + Send + '_ {
-		(**self).prev_events()
-	}
-
-	fn auth_events(&self) -> impl DoubleEndedIterator<Item = &Self::Id> + Send + '_ {
-		(**self).auth_events()
-	}
-
-	fn redacts(&self) -> Option<&Self::Id> { (**self).redacts() }
-}
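The dedicated `Arc<T>` impl can go because the blanket `impl<T: Event> Event for &T` already covers shared access once `PduEvent` is passed by value or by reference. A reduced sketch of that forwarding pattern; the trait here is a stand-in for the real `Event`:

trait Event {
    fn event_id(&self) -> &str;
}

struct Pdu { id: String }

impl Event for Pdu {
    fn event_id(&self) -> &str { &self.id }
}

// Forwarding impl: a reference to an Event is itself an Event.
impl<T: Event> Event for &T {
    fn event_id(&self) -> &str { (*self).event_id() }
}

fn take_event(ev: impl Event) -> String { ev.event_id().to_owned() }

fn main() {
    let pdu = Pdu { id: "$e".into() };
    // No Arc wrapper needed; a plain reference satisfies the bound.
    assert_eq!(take_event(&pdu), "$e");
}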

View file

@@ -4,10 +4,7 @@ extern crate test;
 use std::{
 	borrow::Borrow,
 	collections::{HashMap, HashSet},
-	sync::{
-		Arc,
-		atomic::{AtomicU64, Ordering::SeqCst},
-	},
+	sync::atomic::{AtomicU64, Ordering::SeqCst},
 };

 use futures::{future, future::ready};
@@ -64,7 +61,7 @@ fn resolution_shallow_auth_chain(c: &mut test::Bencher) {
 	c.iter(|| async {
 		let ev_map = store.0.clone();
 		let state_sets = [&state_at_bob, &state_at_charlie];
-		let fetch = |id: OwnedEventId| ready(ev_map.get(&id).map(Arc::clone));
+		let fetch = |id: OwnedEventId| ready(ev_map.get(&id).clone());
 		let exists = |id: OwnedEventId| ready(ev_map.get(&id).is_some());
 		let auth_chain_sets: Vec<HashSet<_>> = state_sets
 			.iter()
@@ -148,7 +145,7 @@ fn resolve_deeper_event_set(c: &mut test::Bencher) {
 			})
 			.collect();

-		let fetch = |id: OwnedEventId| ready(inner.get(&id).map(Arc::clone));
+		let fetch = |id: OwnedEventId| ready(inner.get(&id).clone());
 		let exists = |id: OwnedEventId| ready(inner.get(&id).is_some());
 		let _ = match state_res::resolve(
 			&RoomVersionId::V6,
@@ -171,20 +168,20 @@ fn resolve_deeper_event_set(c: &mut test::Bencher) {
 // IMPLEMENTATION DETAILS AHEAD
 //
 /////////////////////////////////////////////////////////////////////*/

-struct TestStore<E: Event>(HashMap<OwnedEventId, Arc<E>>);
+struct TestStore<E: Event>(HashMap<OwnedEventId, E>);

 #[allow(unused)]
-impl<E: Event> TestStore<E> {
-	fn get_event(&self, room_id: &RoomId, event_id: &EventId) -> Result<Arc<E>> {
+impl<E: Event + Clone> TestStore<E> {
+	fn get_event(&self, room_id: &RoomId, event_id: &EventId) -> Result<E> {
 		self.0
 			.get(event_id)
-			.map(Arc::clone)
+			.cloned()
 			.ok_or_else(|| Error::NotFound(format!("{} not found", event_id)))
 	}

 	/// Returns the events that correspond to the `event_ids` sorted in the same
 	/// order.
-	fn get_events(&self, room_id: &RoomId, event_ids: &[OwnedEventId]) -> Result<Vec<Arc<E>>> {
+	fn get_events(&self, room_id: &RoomId, event_ids: &[OwnedEventId]) -> Result<Vec<E>> {
 		let mut events = vec![];
 		for id in event_ids {
 			events.push(self.get_event(room_id, id)?);
@@ -264,7 +261,7 @@ impl TestStore<PduEvent> {
 			&[],
 		);
 		let cre = create_event.event_id().to_owned();
-		self.0.insert(cre.clone(), Arc::clone(&create_event));
+		self.0.insert(cre.clone(), create_event.clone());

 		let alice_mem = to_pdu_event(
 			"IMA",
@@ -276,7 +273,7 @@ impl TestStore<PduEvent> {
 			&[cre.clone()],
 		);
 		self.0
-			.insert(alice_mem.event_id().to_owned(), Arc::clone(&alice_mem));
+			.insert(alice_mem.event_id().to_owned(), alice_mem.clone());

 		let join_rules = to_pdu_event(
 			"IJR",
@@ -383,7 +380,7 @@ fn to_pdu_event<S>(
 	content: Box<RawJsonValue>,
 	auth_events: &[S],
 	prev_events: &[S],
-) -> Arc<PduEvent>
+) -> PduEvent
 where
 	S: AsRef<str>,
 {
@@ -407,7 +404,7 @@ where
 		.collect::<Vec<_>>();

 	let state_key = state_key.map(ToOwned::to_owned);
-	Arc::new(PduEvent {
+	PduEvent {
 		event_id: id.try_into().unwrap(),
 		rest: Pdu::RoomV3Pdu(RoomV3Pdu {
 			room_id: room_id().to_owned(),
@@ -424,12 +421,12 @@ where
 			hashes: EventHash::new(String::new()),
 			signatures: Signatures::new(),
 		}),
-	})
+	}
 }

 // all graphs start with these input events
 #[allow(non_snake_case)]
-fn INITIAL_EVENTS() -> HashMap<OwnedEventId, Arc<PduEvent>> {
+fn INITIAL_EVENTS() -> HashMap<OwnedEventId, PduEvent> {
 	vec![
 		to_pdu_event::<&EventId>(
 			"CREATE",
@@ -511,7 +508,7 @@ fn INITIAL_EVENTS() -> HashMap<OwnedEventId, PduEvent> {
 // all graphs start with these input events
 #[allow(non_snake_case)]
-fn BAN_STATE_SET() -> HashMap<OwnedEventId, Arc<PduEvent>> {
+fn BAN_STATE_SET() -> HashMap<OwnedEventId, PduEvent> {
 	vec![
 		to_pdu_event(
 			"PA",

View file

@@ -1112,8 +1112,6 @@ fn verify_third_party_invite(
 #[cfg(test)]
 mod tests {
-	use std::sync::Arc;
-
 	use ruma::events::{
 		StateEventType, TimelineEventType,
 		room::{
@@ -1143,7 +1141,7 @@ mod tests {
 		let auth_events = events
 			.values()
-			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), Arc::clone(ev)))
+			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), ev.clone()))
 			.collect::<StateMap<_>>();

 		let requester = to_pdu_event(
@@ -1188,7 +1186,7 @@ mod tests {
 		let auth_events = events
 			.values()
-			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), Arc::clone(ev)))
+			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), ev.clone()))
 			.collect::<StateMap<_>>();

 		let requester = to_pdu_event(
@@ -1233,7 +1231,7 @@ mod tests {
 		let auth_events = events
 			.values()
-			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), Arc::clone(ev)))
+			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), ev.clone()))
 			.collect::<StateMap<_>>();

 		let requester = to_pdu_event(
@@ -1278,7 +1276,7 @@ mod tests {
 		let auth_events = events
 			.values()
-			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), Arc::clone(ev)))
+			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), ev.clone()))
 			.collect::<StateMap<_>>();

 		let requester = to_pdu_event(
@@ -1340,7 +1338,7 @@ mod tests {
 		let auth_events = events
 			.values()
-			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), Arc::clone(ev)))
+			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), ev.clone()))
 			.collect::<StateMap<_>>();

 		let requester = to_pdu_event(
@@ -1412,7 +1410,7 @@ mod tests {
 		let auth_events = events
 			.values()
-			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), Arc::clone(ev)))
+			.map(|ev| (ev.event_type().with_state_key(ev.state_key().unwrap()), ev.clone()))
 			.collect::<StateMap<_>>();

 		let requester = to_pdu_event(

View file

@ -15,11 +15,10 @@ use std::{
borrow::Borrow, borrow::Borrow,
cmp::{Ordering, Reverse}, cmp::{Ordering, Reverse},
collections::{BinaryHeap, HashMap, HashSet}, collections::{BinaryHeap, HashMap, HashSet},
fmt::Debug,
hash::{BuildHasher, Hash}, hash::{BuildHasher, Hash},
}; };
use futures::{Future, FutureExt, StreamExt, TryFutureExt, TryStreamExt, future, stream}; use futures::{Future, FutureExt, Stream, StreamExt, TryFutureExt, TryStreamExt, future};
use ruma::{ use ruma::{
EventId, Int, MilliSecondsSinceUnixEpoch, RoomVersionId, EventId, Int, MilliSecondsSinceUnixEpoch, RoomVersionId,
events::{ events::{
@ -37,9 +36,13 @@ pub use self::{
room_version::RoomVersion, room_version::RoomVersion,
}; };
use crate::{ use crate::{
debug, debug, debug_error,
matrix::{event::Event, pdu::StateKey}, matrix::{event::Event, pdu::StateKey},
trace, warn, trace,
utils::stream::{
BroadbandExt, IterStream, ReadyExt, TryBroadbandExt, TryReadyExt, WidebandExt,
},
warn,
}; };
/// A mapping of event type and state_key to some value `T`, usually an /// A mapping of event type and state_key to some value `T`, usually an
@ -112,20 +115,16 @@ where
debug!(count = conflicting.len(), "conflicting events"); debug!(count = conflicting.len(), "conflicting events");
trace!(map = ?conflicting, "conflicting events"); trace!(map = ?conflicting, "conflicting events");
let auth_chain_diff = let conflicting_values = conflicting.into_values().flatten().stream();
get_auth_chain_diff(auth_chain_sets).chain(conflicting.into_values().flatten());
// `all_conflicted` contains unique items // `all_conflicted` contains unique items
// synapse says `full_set = {eid for eid in full_conflicted_set if eid in // synapse says `full_set = {eid for eid in full_conflicted_set if eid in
// event_map}` // event_map}`
let all_conflicted: HashSet<_> = stream::iter(auth_chain_diff) let all_conflicted: HashSet<_> = get_auth_chain_diff(auth_chain_sets)
// Don't honor events we cannot "verify" .chain(conflicting_values)
.map(|id| event_exists(id.clone()).map(move |exists| (id, exists))) .broad_filter_map(async |id| event_exists(id.clone()).await.then_some(id))
.buffer_unordered(parallel_fetches) .collect()
.filter_map(|(id, exists)| future::ready(exists.then_some(id))) .await;
.collect()
.boxed()
.await;
debug!(count = all_conflicted.len(), "full conflicted set"); debug!(count = all_conflicted.len(), "full conflicted set");
trace!(set = ?all_conflicted, "full conflicted set"); trace!(set = ?all_conflicted, "full conflicted set");
@ -135,12 +134,15 @@ where
// Get only the control events with a state_key: "" or ban/kick event (sender != // Get only the control events with a state_key: "" or ban/kick event (sender !=
// state_key) // state_key)
let control_events: Vec<_> = stream::iter(all_conflicted.iter()) let control_events: Vec<_> = all_conflicted
.map(|id| is_power_event_id(id, &event_fetch).map(move |is| (id, is))) .iter()
.buffer_unordered(parallel_fetches) .stream()
.filter_map(|(id, is)| future::ready(is.then_some(id.clone()))) .wide_filter_map(async |id| {
is_power_event_id(id, &event_fetch)
.await
.then_some(id.clone())
})
.collect() .collect()
.boxed()
.await; .await;
// Sort the control events based on power_level/clock/event_id and // Sort the control events based on power_level/clock/event_id and
@ -160,10 +162,9 @@ where
// Sequentially auth check each control event. // Sequentially auth check each control event.
let resolved_control = iterative_auth_check( let resolved_control = iterative_auth_check(
&room_version, &room_version,
sorted_control_levels.iter(), sorted_control_levels.iter().stream(),
clean.clone(), clean.clone(),
&event_fetch, &event_fetch,
parallel_fetches,
) )
.await?; .await?;
@ -172,36 +173,35 @@ where
// At this point the control_events have been resolved we now have to // At this point the control_events have been resolved we now have to
// sort the remaining events using the mainline of the resolved power level. // sort the remaining events using the mainline of the resolved power level.
let deduped_power_ev = sorted_control_levels.into_iter().collect::<HashSet<_>>(); let deduped_power_ev: HashSet<_> = sorted_control_levels.into_iter().collect();
// This removes the control events that passed auth and more importantly those // This removes the control events that passed auth and more importantly those
// that failed auth // that failed auth
let events_to_resolve = all_conflicted let events_to_resolve: Vec<_> = all_conflicted
.iter() .iter()
.filter(|&id| !deduped_power_ev.contains(id.borrow())) .filter(|&id| !deduped_power_ev.contains(id.borrow()))
.cloned() .cloned()
.collect::<Vec<_>>(); .collect();
debug!(count = events_to_resolve.len(), "events left to resolve"); debug!(count = events_to_resolve.len(), "events left to resolve");
trace!(list = ?events_to_resolve, "events left to resolve"); trace!(list = ?events_to_resolve, "events left to resolve");
// This "epochs" power level event // This "epochs" power level event
let power_event = resolved_control.get(&(StateEventType::RoomPowerLevels, StateKey::new())); let power_levels_ty_sk = (StateEventType::RoomPowerLevels, StateKey::new());
let power_event = resolved_control.get(&power_levels_ty_sk);
debug!(event_id = ?power_event, "power event"); debug!(event_id = ?power_event, "power event");
let sorted_left_events = let sorted_left_events =
mainline_sort(&events_to_resolve, power_event.cloned(), &event_fetch, parallel_fetches) mainline_sort(&events_to_resolve, power_event.cloned(), &event_fetch).await?;
.await?;
trace!(list = ?sorted_left_events, "events left, sorted"); trace!(list = ?sorted_left_events, "events left, sorted");
let mut resolved_state = iterative_auth_check( let mut resolved_state = iterative_auth_check(
&room_version, &room_version,
sorted_left_events.iter(), sorted_left_events.iter().stream(),
resolved_control, // The control events are added to the final resolved state resolved_control, // The control events are added to the final resolved state
&event_fetch, &event_fetch,
parallel_fetches,
) )
.await?; .await?;
@ -265,7 +265,7 @@ where
#[allow(clippy::arithmetic_side_effects)] #[allow(clippy::arithmetic_side_effects)]
fn get_auth_chain_diff<Id, Hasher>( fn get_auth_chain_diff<Id, Hasher>(
auth_chain_sets: &[HashSet<Id, Hasher>], auth_chain_sets: &[HashSet<Id, Hasher>],
) -> impl Iterator<Item = Id> + Send + use<Id, Hasher> ) -> impl Stream<Item = Id> + Send + use<Id, Hasher>
where where
Id: Clone + Eq + Hash + Send, Id: Clone + Eq + Hash + Send,
Hasher: BuildHasher + Send + Sync, Hasher: BuildHasher + Send + Sync,
@ -279,6 +279,7 @@ where
id_counts id_counts
.into_iter() .into_iter()
.filter_map(move |(id, count)| (count < num_sets).then_some(id)) .filter_map(move |(id, count)| (count < num_sets).then_some(id))
.stream()
} }
/// Events are sorted from "earliest" to "latest". /// Events are sorted from "earliest" to "latest".
@ -310,13 +311,15 @@ where
} }
// This is used in the `key_fn` passed to the lexico_topo_sort fn // This is used in the `key_fn` passed to the lexico_topo_sort fn
let event_to_pl = stream::iter(graph.keys()) let event_to_pl = graph
.keys()
.stream()
.map(|event_id| { .map(|event_id| {
get_power_level_for_sender(event_id.clone(), fetch_event, parallel_fetches) get_power_level_for_sender(event_id.clone(), fetch_event)
.map(move |res| res.map(|pl| (event_id, pl))) .map(move |res| res.map(|pl| (event_id, pl)))
}) })
.buffer_unordered(parallel_fetches) .buffer_unordered(parallel_fetches)
.try_fold(HashMap::new(), |mut event_to_pl, (event_id, pl)| { .ready_try_fold(HashMap::new(), |mut event_to_pl, (event_id, pl)| {
debug!( debug!(
event_id = event_id.borrow().as_str(), event_id = event_id.borrow().as_str(),
power_level = i64::from(pl), power_level = i64::from(pl),
@ -324,7 +327,7 @@ where
); );
event_to_pl.insert(event_id.clone(), pl); event_to_pl.insert(event_id.clone(), pl);
future::ok(event_to_pl) Ok(event_to_pl)
}) })
.boxed() .boxed()
.await?; .await?;
@ -475,7 +478,6 @@ where
async fn get_power_level_for_sender<E, F, Fut>( async fn get_power_level_for_sender<E, F, Fut>(
event_id: E::Id, event_id: E::Id,
fetch_event: &F, fetch_event: &F,
parallel_fetches: usize,
) -> serde_json::Result<Int> ) -> serde_json::Result<Int>
where where
F: Fn(E::Id) -> Fut + Sync, F: Fn(E::Id) -> Fut + Sync,
@ -485,19 +487,17 @@ where
{ {
debug!("fetch event ({event_id}) senders power level"); debug!("fetch event ({event_id}) senders power level");
let event = fetch_event(event_id.clone()).await; let event = fetch_event(event_id).await;
let auth_events = event.as_ref().map(Event::auth_events).into_iter().flatten(); let auth_events = event.as_ref().map(Event::auth_events);
let pl = stream::iter(auth_events) let pl = auth_events
.map(|aid| fetch_event(aid.clone()))
.buffer_unordered(parallel_fetches.min(5))
.filter_map(future::ready)
.collect::<Vec<_>>()
.boxed()
.await
.into_iter() .into_iter()
.find(|aev| is_type_and_key(aev, &TimelineEventType::RoomPowerLevels, "")); .flatten()
.stream()
.broadn_filter_map(5, |aid| fetch_event(aid.clone()))
.ready_find(|aev| is_type_and_key(aev, &TimelineEventType::RoomPowerLevels, ""))
.await;
let content: PowerLevelsContentFields = match pl { let content: PowerLevelsContentFields = match pl {
| None => return Ok(int!(0)), | None => return Ok(int!(0)),
@ -525,34 +525,28 @@ where
/// For each `events_to_check` event we gather the events needed to auth it from /// For each `events_to_check` event we gather the events needed to auth it from
/// the the `fetch_event` closure and verify each event using the /// the the `fetch_event` closure and verify each event using the
/// `event_auth::auth_check` function. /// `event_auth::auth_check` function.
async fn iterative_auth_check<'a, E, F, Fut, I>( async fn iterative_auth_check<'a, E, F, Fut, S>(
room_version: &RoomVersion, room_version: &RoomVersion,
events_to_check: I, events_to_check: S,
unconflicted_state: StateMap<E::Id>, unconflicted_state: StateMap<E::Id>,
fetch_event: &F, fetch_event: &F,
parallel_fetches: usize,
) -> Result<StateMap<E::Id>> ) -> Result<StateMap<E::Id>>
where where
F: Fn(E::Id) -> Fut + Sync, F: Fn(E::Id) -> Fut + Sync,
Fut: Future<Output = Option<E>> + Send, Fut: Future<Output = Option<E>> + Send,
E::Id: Borrow<EventId> + Clone + Eq + Ord + Send + Sync + 'a, E::Id: Borrow<EventId> + Clone + Eq + Ord + Send + Sync + 'a,
I: Iterator<Item = &'a E::Id> + Debug + Send + 'a, S: Stream<Item = &'a E::Id> + Send + 'a,
E: Event + Clone + Send + Sync, E: Event + Clone + Send + Sync,
{ {
debug!("starting iterative auth check"); debug!("starting iterative auth check");
trace!(
list = ?events_to_check,
"events to check"
);
let events_to_check: Vec<_> = stream::iter(events_to_check) let events_to_check: Vec<_> = events_to_check
.map(Result::Ok) .map(Result::Ok)
.map_ok(|event_id| { .broad_and_then(async |event_id| {
fetch_event(event_id.clone()).map(move |result| { fetch_event(event_id.clone())
result.ok_or_else(|| Error::NotFound(format!("Failed to find {event_id}"))) .await
}) .ok_or_else(|| Error::NotFound(format!("Failed to find {event_id}")))
}) })
.try_buffer_unordered(parallel_fetches)
.try_collect() .try_collect()
.boxed() .boxed()
.await?; .await?;
@ -562,10 +556,10 @@ where
.flat_map(|event: &E| event.auth_events().map(Clone::clone)) .flat_map(|event: &E| event.auth_events().map(Clone::clone))
.collect(); .collect();
let auth_events: HashMap<E::Id, E> = stream::iter(auth_event_ids.into_iter()) let auth_events: HashMap<E::Id, E> = auth_event_ids
.map(fetch_event) .into_iter()
.buffer_unordered(parallel_fetches) .stream()
.filter_map(future::ready) .broad_filter_map(fetch_event)
.map(|auth_event| (auth_event.event_id().clone(), auth_event)) .map(|auth_event| (auth_event.event_id().clone(), auth_event))
.collect() .collect()
.boxed() .boxed()
@ -574,7 +568,6 @@ where
let auth_events = &auth_events; let auth_events = &auth_events;
let mut resolved_state = unconflicted_state; let mut resolved_state = unconflicted_state;
for event in &events_to_check { for event in &events_to_check {
let event_id = event.event_id();
let state_key = event let state_key = event
.state_key() .state_key()
.ok_or_else(|| Error::InvalidPdu("State event had no state key".to_owned()))?; .ok_or_else(|| Error::InvalidPdu("State event had no state key".to_owned()))?;
@ -603,24 +596,22 @@ where
} }
} }
stream::iter( auth_types
auth_types .iter()
.iter() .stream()
.filter_map(|key| Some((key, resolved_state.get(key)?))), .ready_filter_map(|key| Some((key, resolved_state.get(key)?)))
) .filter_map(|(key, ev_id)| async move {
.filter_map(|(key, ev_id)| async move { if let Some(event) = auth_events.get(ev_id.borrow()) {
if let Some(event) = auth_events.get(ev_id.borrow()) { Some((key, event.clone()))
Some((key, event.clone())) } else {
} else { Some((key, fetch_event(ev_id.clone()).await?))
Some((key, fetch_event(ev_id.clone()).await?)) }
} })
}) .ready_for_each(|(key, event)| {
.for_each(|(key, event)| { //TODO: synapse checks "rejected_reason" is None here
//TODO: synapse checks "rejected_reason" is None here auth_state.insert(key.to_owned(), event);
auth_state.insert(key.to_owned(), event); })
future::ready(()) .await;
})
.await;
debug!("event to check {:?}", event.event_id()); debug!("event to check {:?}", event.event_id());
@@ -634,12 +625,25 @@ where
             future::ready(auth_state.get(&ty.with_state_key(key)))
         };

-        if auth_check(room_version, &event, current_third_party.as_ref(), fetch_state).await? {
-            // add event to resolved state map
-            resolved_state.insert(event.event_type().with_state_key(state_key), event_id.clone());
-        } else {
-            // synapse passes here on AuthError. We do not add this event to resolved_state.
-            warn!("event {event_id} failed the authentication check");
+        let auth_result =
+            auth_check(room_version, &event, current_third_party.as_ref(), fetch_state).await;
+
+        match auth_result {
+            | Ok(true) => {
+                // add event to resolved state map
+                resolved_state.insert(
+                    event.event_type().with_state_key(state_key),
+                    event.event_id().clone(),
+                );
+            },
+            | Ok(false) => {
+                // synapse passes here on AuthError. We do not add this event to resolved_state.
+                warn!("event {} failed the authentication check", event.event_id());
+            },
+            | Err(e) => {
+                debug_error!("event {} failed the authentication check: {e}", event.event_id());
+                return Err(e);
+            },
         }
     }
@@ -659,7 +663,6 @@ async fn mainline_sort<E, F, Fut>(
     to_sort: &[E::Id],
     resolved_power_level: Option<E::Id>,
     fetch_event: &F,
-    parallel_fetches: usize,
 ) -> Result<Vec<E::Id>>
 where
     F: Fn(E::Id) -> Fut + Sync,

@@ -682,11 +685,13 @@ where
         let event = fetch_event(p.clone())
             .await
             .ok_or_else(|| Error::NotFound(format!("Failed to find {p}")))?;
+
         pl = None;
         for aid in event.auth_events() {
             let ev = fetch_event(aid.clone())
                 .await
                 .ok_or_else(|| Error::NotFound(format!("Failed to find {aid}")))?;
+
             if is_type_and_key(&ev, &TimelineEventType::RoomPowerLevels, "") {
                 pl = Some(aid.to_owned());
                 break;
@@ -694,36 +699,32 @@ where
         }
     }

-    let mainline_map = mainline
+    let mainline_map: HashMap<_, _> = mainline
         .iter()
         .rev()
         .enumerate()
         .map(|(idx, eid)| ((*eid).clone(), idx))
-        .collect::<HashMap<_, _>>();
+        .collect();

-    let order_map = stream::iter(to_sort.iter())
-        .map(|ev_id| {
-            fetch_event(ev_id.clone()).map(move |event| event.map(|event| (event, ev_id)))
-        })
-        .buffer_unordered(parallel_fetches)
-        .filter_map(future::ready)
-        .map(|(event, ev_id)| {
+    let order_map: HashMap<_, _> = to_sort
+        .iter()
+        .stream()
+        .broad_filter_map(async |ev_id| {
+            fetch_event(ev_id.clone()).await.map(|event| (event, ev_id))
+        })
+        .broad_filter_map(|(event, ev_id)| {
             get_mainline_depth(Some(event.clone()), &mainline_map, fetch_event)
-                .map_ok(move |depth| (depth, event, ev_id))
+                .map_ok(move |depth| (ev_id, (depth, event.origin_server_ts(), ev_id)))
                 .map(Result::ok)
         })
-        .buffer_unordered(parallel_fetches)
-        .filter_map(future::ready)
-        .fold(HashMap::new(), |mut order_map, (depth, event, ev_id)| {
-            order_map.insert(ev_id, (depth, event.origin_server_ts(), ev_id));
-            future::ready(order_map)
-        })
+        .collect()
         .boxed()
         .await;

     // Sort the event_ids by their depth, timestamp and EventId
     // unwrap is OK order map and sort_event_ids are from to_sort (the same Vec)
-    let mut sort_event_ids = order_map.keys().map(|&k| k.clone()).collect::<Vec<_>>();
+    let mut sort_event_ids: Vec<_> = order_map.keys().map(|&k| k.clone()).collect();

     sort_event_ids.sort_by_key(|sort_id| &order_map[sort_id]);

     Ok(sort_event_ids)
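The reworked `order_map` now maps each event ID directly to a `(mainline depth, origin_server_ts, event_id)` tuple, so the `sort_by_key` above orders by depth first, then timestamp, then event ID. A self-contained sketch of that comparison shape with toy values:

    use std::collections::HashMap;

    fn main() {
        // key -> (mainline depth, origin_server_ts, event_id)
        let order: HashMap<&str, (usize, u64, &str)> = HashMap::from([
            ("$b", (1, 10, "$b")),
            ("$a", (1, 5, "$a")),
            ("$c", (0, 99, "$c")),
        ]);
        let mut ids: Vec<_> = order.keys().copied().collect();
        // Tuples compare lexicographically: depth, then timestamp, then ID.
        ids.sort_by_key(|id| &order[id]);
        assert_eq!(ids, ["$c", "$a", "$b"]);
    }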
@@ -744,6 +745,7 @@ where
 {
     while let Some(sort_ev) = event {
         debug!(event_id = sort_ev.event_id().borrow().as_str(), "mainline");
+
         let id = sort_ev.event_id();
         if let Some(depth) = mainline_map.get(id.borrow()) {
             return Ok(*depth);

@@ -754,6 +756,7 @@ where
             let aev = fetch_event(aid.clone())
                 .await
                 .ok_or_else(|| Error::NotFound(format!("Failed to find {aid}")))?;
+
             if is_type_and_key(&aev, &TimelineEventType::RoomPowerLevels, "") {
                 event = Some(aev);
                 break;
@@ -858,10 +861,7 @@ where
 #[cfg(test)]
 mod tests {
-    use std::{
-        collections::{HashMap, HashSet},
-        sync::Arc,
-    };
+    use std::collections::{HashMap, HashSet};

     use maplit::{hashmap, hashset};
     use rand::seq::SliceRandom;

@@ -884,7 +884,7 @@ mod tests {
             zara,
         },
     };
-    use crate::debug;
+    use crate::{debug, utils::stream::IterStream};

     async fn test_event_sort() {
         use futures::future::ready;

@@ -903,7 +903,7 @@ mod tests {
         let power_events = event_map
             .values()
-            .filter(|&pdu| is_power_event(&**pdu))
+            .filter(|&pdu| is_power_event(&*pdu))
             .map(|pdu| pdu.event_id.clone())
             .collect::<Vec<_>>();
@@ -915,10 +915,9 @@ mod tests {
         let resolved_power = super::iterative_auth_check(
             &RoomVersion::V6,
-            sorted_power_events.iter(),
+            sorted_power_events.iter().stream(),
             HashMap::new(), // unconflicted events
             &fetcher,
-            1,
         )
         .await
         .expect("iterative auth check failed on resolved events");

@@ -932,7 +931,7 @@ mod tests {
             .get(&(StateEventType::RoomPowerLevels, "".into()))
             .cloned();

-        let sorted_event_ids = super::mainline_sort(&events_to_sort, power_level, &fetcher, 1)
+        let sorted_event_ids = super::mainline_sort(&events_to_sort, power_level, &fetcher)
             .await
             .unwrap();
@@ -1487,7 +1486,7 @@ mod tests {
     }

     #[allow(non_snake_case)]
-    fn BAN_STATE_SET() -> HashMap<OwnedEventId, Arc<PduEvent>> {
+    fn BAN_STATE_SET() -> HashMap<OwnedEventId, PduEvent> {
         vec![
             to_pdu_event(
                 "PA",

@@ -1532,7 +1531,7 @@ mod tests {
     }

     #[allow(non_snake_case)]
-    fn JOIN_RULE() -> HashMap<OwnedEventId, Arc<PduEvent>> {
+    fn JOIN_RULE() -> HashMap<OwnedEventId, PduEvent> {
         vec![
             to_pdu_event(
                 "JR",


@@ -1,10 +1,7 @@
 use std::{
     borrow::Borrow,
     collections::{BTreeMap, HashMap, HashSet},
-    sync::{
-        Arc,
-        atomic::{AtomicU64, Ordering::SeqCst},
-    },
+    sync::atomic::{AtomicU64, Ordering::SeqCst},
 };

 use futures::future::ready;
@@ -36,7 +33,7 @@ use crate::{
 static SERVER_TIMESTAMP: AtomicU64 = AtomicU64::new(0);

 pub(crate) async fn do_check(
-    events: &[Arc<PduEvent>],
+    events: &[PduEvent],
     edges: Vec<Vec<OwnedEventId>>,
     expected_state_ids: Vec<OwnedEventId>,
 ) {
@@ -85,7 +82,7 @@ pub(crate) async fn do_check(
     }

     // event_id -> PduEvent
-    let mut event_map: HashMap<OwnedEventId, Arc<PduEvent>> = HashMap::new();
+    let mut event_map: HashMap<OwnedEventId, PduEvent> = HashMap::new();
     // event_id -> StateMap<OwnedEventId>
     let mut state_at_event: HashMap<OwnedEventId, StateMap<OwnedEventId>> = HashMap::new();

@@ -194,7 +191,7 @@ pub(crate) async fn do_check(
         store.0.insert(ev_id.to_owned(), event.clone());

         state_at_event.insert(node, state_after);
-        event_map.insert(event_id.to_owned(), Arc::clone(store.0.get(ev_id).unwrap()));
+        event_map.insert(event_id.to_owned(), store.0.get(ev_id).unwrap().clone());
     }

     let mut expected_state = StateMap::new();
@@ -235,10 +232,10 @@ pub(crate) async fn do_check(
 }

 #[allow(clippy::exhaustive_structs)]
-pub(crate) struct TestStore<E: Event>(pub(crate) HashMap<OwnedEventId, Arc<E>>);
+pub(crate) struct TestStore<E: Event>(pub(crate) HashMap<OwnedEventId, E>);

-impl<E: Event> TestStore<E> {
-    pub(crate) fn get_event(&self, _: &RoomId, event_id: &EventId) -> Result<Arc<E>> {
+impl<E: Event + Clone> TestStore<E> {
+    pub(crate) fn get_event(&self, _: &RoomId, event_id: &EventId) -> Result<E> {
         self.0
             .get(event_id)
             .cloned()
@@ -288,7 +285,7 @@ impl TestStore<PduEvent> {
             &[],
         );
         let cre = create_event.event_id().to_owned();
-        self.0.insert(cre.clone(), Arc::clone(&create_event));
+        self.0.insert(cre.clone(), create_event.clone());

         let alice_mem = to_pdu_event(
             "IMA",

@@ -300,7 +297,7 @@ impl TestStore<PduEvent> {
             &[cre.clone()],
         );
         self.0
-            .insert(alice_mem.event_id().to_owned(), Arc::clone(&alice_mem));
+            .insert(alice_mem.event_id().to_owned(), alice_mem.clone());

         let join_rules = to_pdu_event(
             "IJR",
@@ -399,7 +396,7 @@ pub(crate) fn to_init_pdu_event(
     ev_type: TimelineEventType,
     state_key: Option<&str>,
     content: Box<RawJsonValue>,
-) -> Arc<PduEvent> {
+) -> PduEvent {
     let ts = SERVER_TIMESTAMP.fetch_add(1, SeqCst);
     let id = if id.contains('$') {
         id.to_owned()

@@ -408,7 +405,7 @@ pub(crate) fn to_init_pdu_event(
     };

     let state_key = state_key.map(ToOwned::to_owned);
-    Arc::new(PduEvent {
+    PduEvent {
         event_id: id.try_into().unwrap(),
         rest: Pdu::RoomV3Pdu(RoomV3Pdu {
             room_id: room_id().to_owned(),

@@ -425,7 +422,7 @@ pub(crate) fn to_init_pdu_event(
             hashes: EventHash::new("".to_owned()),
             signatures: ServerSignatures::default(),
         }),
-    })
+    }
 }

 pub(crate) fn to_pdu_event<S>(
@@ -436,7 +433,7 @@ pub(crate) fn to_pdu_event<S>(
     content: Box<RawJsonValue>,
     auth_events: &[S],
     prev_events: &[S],
-) -> Arc<PduEvent>
+) -> PduEvent
 where
     S: AsRef<str>,
 {

@@ -458,7 +455,7 @@ where
         .collect::<Vec<_>>();

     let state_key = state_key.map(ToOwned::to_owned);
-    Arc::new(PduEvent {
+    PduEvent {
         event_id: id.try_into().unwrap(),
         rest: Pdu::RoomV3Pdu(RoomV3Pdu {
             room_id: room_id().to_owned(),

@@ -475,12 +472,12 @@ where
             hashes: EventHash::new("".to_owned()),
             signatures: ServerSignatures::default(),
         }),
-    })
+    }
 }
 // all graphs start with these input events
 #[allow(non_snake_case)]
-pub(crate) fn INITIAL_EVENTS() -> HashMap<OwnedEventId, Arc<PduEvent>> {
+pub(crate) fn INITIAL_EVENTS() -> HashMap<OwnedEventId, PduEvent> {
     vec![
         to_pdu_event::<&EventId>(
             "CREATE",

@@ -562,7 +559,7 @@ pub(crate) fn INITIAL_EVENTS() -> HashMap<OwnedEventId, Arc<PduEvent>> {
 // all graphs start with these input events
 #[allow(non_snake_case)]
-pub(crate) fn INITIAL_EVENTS_CREATE_ROOM() -> HashMap<OwnedEventId, Arc<PduEvent>> {
+pub(crate) fn INITIAL_EVENTS_CREATE_ROOM() -> HashMap<OwnedEventId, PduEvent> {
     vec![to_pdu_event::<&EventId>(
         "CREATE",
         alice(),


@@ -22,30 +22,6 @@ where
         Self: Sized + Unpin;
 }

-pub async fn and<I, F>(args: I) -> impl Future<Output = bool> + Send
-where
-    I: Iterator<Item = F> + Send,
-    F: Future<Output = bool> + Send,
-{
-    type Result = crate::Result<(), ()>;
-    let args = args.map(|a| a.map(|a| a.then_some(()).ok_or(Result::Err(()))));
-    try_join_all(args).map(|result| result.is_ok())
-}
-
-pub async fn or<I, F>(args: I) -> impl Future<Output = bool> + Send
-where
-    I: Iterator<Item = F> + Send,
-    F: Future<Output = bool> + Send + Unpin,
-{
-    type Result = crate::Result<(), ()>;
-    let args = args.map(|a| a.map(|a| a.then_some(()).ok_or(Result::Err(()))));
-    select_ok(args).map(|result| result.is_ok())
-}
-
 impl<Fut> BoolExt for Fut
 where
     Fut: Future<Output = bool> + Send,

@@ -80,3 +56,27 @@ where
         try_select(a, b).map(|result| result.is_ok())
     }
 }
+
+pub async fn and<I, F>(args: I) -> impl Future<Output = bool> + Send
+where
+    I: Iterator<Item = F> + Send,
+    F: Future<Output = bool> + Send,
+{
+    type Result = crate::Result<(), ()>;
+    let args = args.map(|a| a.map(|a| a.then_some(()).ok_or(Result::Err(()))));
+    try_join_all(args).map(|result| result.is_ok())
+}
+
+pub async fn or<I, F>(args: I) -> impl Future<Output = bool> + Send
+where
+    I: Iterator<Item = F> + Send,
+    F: Future<Output = bool> + Send + Unpin,
+{
+    type Result = crate::Result<(), ()>;
+    let args = args.map(|a| a.map(|a| a.then_some(()).ok_or(Result::Err(()))));
+    select_ok(args).map(|result| result.is_ok())
+}
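The relocated `and`/`or` helpers turn a collection of boolean futures into a single boolean by mapping `false` to `Err(())`, letting `try_join_all` (for `and`) or `select_ok` (for `or`) short-circuit. A standalone equivalent of the `and` case, written against plain `futures` and assuming fail-fast on the first `false` is the intent:

    use futures::future::{FutureExt, try_join_all};

    // A false future becomes Err(()), so try_join_all fails fast on it.
    async fn all_true<I, F>(args: I) -> bool
    where
        I: Iterator<Item = F>,
        F: std::future::Future<Output = bool>,
    {
        let args = args.map(|a| a.map(|ok| ok.then_some(()).ok_or(())));
        try_join_all(args).await.is_ok()
    }

    fn main() {
        let futs = (1..=4).map(|n| async move { n > 0 });
        assert!(futures::executor::block_on(all_true(futs)));
    }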


@@ -2,10 +2,12 @@ mod bool_ext;
 mod ext_ext;
 mod option_ext;
 mod option_stream;
+mod ready_eq_ext;
 mod try_ext_ext;

 pub use bool_ext::{BoolExt, and, or};
 pub use ext_ext::ExtExt;
 pub use option_ext::OptionExt;
 pub use option_stream::OptionStream;
+pub use ready_eq_ext::ReadyEqExt;
 pub use try_ext_ext::TryExtExt;


@@ -0,0 +1,25 @@
+//! Future extension for Partial Equality against present value
+
+use futures::{Future, FutureExt};
+
+pub trait ReadyEqExt<T>
+where
+    Self: Future<Output = T> + Send + Sized,
+    T: PartialEq + Send + Sync,
+{
+    fn eq(self, t: &T) -> impl Future<Output = bool> + Send;
+
+    fn ne(self, t: &T) -> impl Future<Output = bool> + Send;
+}
+
+impl<Fut, T> ReadyEqExt<T> for Fut
+where
+    Fut: Future<Output = T> + Send + Sized,
+    T: PartialEq + Send + Sync,
+{
+    #[inline]
+    fn eq(self, t: &T) -> impl Future<Output = bool> + Send { self.map(move |r| r.eq(t)) }
+
+    #[inline]
+    fn ne(self, t: &T) -> impl Future<Output = bool> + Send { self.map(move |r| r.ne(t)) }
+}
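With `ReadyEqExt` in scope, a future's output can be compared against a reference without spelling out a closure, e.g. `fut.eq(&expected).await`. The sketch below shows the desugaring the trait performs, using plain `FutureExt::map` so it runs without the trait itself:

    use futures::{FutureExt, executor::block_on};

    fn main() {
        let version = async { 3_u8 };
        // `version.eq(&3)` with ReadyEqExt in scope is equivalent to:
        assert!(block_on(version.map(|v| v.eq(&3))));
    }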


@@ -10,6 +10,7 @@ use crate::{Err, Error, Result, debug::type_name, err};

 /// Checked arithmetic expression. Returns a Result<R, Error::Arithmetic>
 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! checked {
     ($($input:tt)+) => {
         $crate::utils::math::checked_ops!($($input)+)

@@ -22,6 +23,7 @@ macro_rules! checked {
 /// has no realistic expectation for error and no interest in cluttering the
 /// callsite with result handling from checked!.
 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! expected {
     ($msg:literal, $($input:tt)+) => {
         $crate::checked!($($input)+).expect($msg)

@@ -37,6 +39,7 @@ macro_rules! expected {
 /// regression analysis.
 #[cfg(not(debug_assertions))]
 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! validated {
     ($($input:tt)+) => {
         //#[allow(clippy::arithmetic_side_effects)] {

@@ -53,6 +56,7 @@ macro_rules! validated {
 /// the expression is obviously safe. The check is elided in release-mode.
 #[cfg(debug_assertions)]
 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! validated {
     ($($input:tt)+) => { $crate::expected!($($input)+) }
 }


@@ -28,7 +28,7 @@ pub use self::{
     bool::BoolExt,
     bytes::{increment, u64_from_bytes, u64_from_u8, u64_from_u8x8},
     debug::slice_truncated as debug_slice_truncated,
-    future::TryExtExt as TryFutureExtExt,
+    future::{BoolExt as FutureBoolExt, OptionStream, TryExtExt as TryFutureExtExt},
     hash::sha256::delimited as calculate_hash,
     html::Escape as HtmlEscape,
     json::{deserialize_from_str, to_canonical_object},

@@ -173,7 +173,6 @@ macro_rules! is_equal {

 /// Functor for |x| *x.$i
 #[macro_export]
-#[collapse_debuginfo(yes)]
 macro_rules! deref_at {
     ($idx:tt) => {
         |t| *t.$idx

@@ -182,7 +181,6 @@ macro_rules! deref_at {

 /// Functor for |ref x| x.$i
 #[macro_export]
-#[collapse_debuginfo(yes)]
 macro_rules! ref_at {
     ($idx:tt) => {
         |ref t| &t.$idx

@@ -191,7 +189,6 @@ macro_rules! ref_at {

 /// Functor for |&x| x.$i
 #[macro_export]
-#[collapse_debuginfo(yes)]
 macro_rules! val_at {
     ($idx:tt) => {
         |&t| t.$idx

@@ -200,7 +197,6 @@ macro_rules! val_at {

 /// Functor for |x| x.$i
 #[macro_export]
-#[collapse_debuginfo(yes)]
 macro_rules! at {
     ($idx:tt) => {
         |t| t.$idx


@@ -10,7 +10,7 @@ pub trait TryExpect<'a, Item> {
 impl<'a, T, Item> TryExpect<'a, Item> for T
 where
-    T: Stream<Item = Result<Item>> + TryStream + Send + 'a,
+    T: Stream<Item = Result<Item>> + Send + TryStream + 'a,
     Item: 'a,
 {
     #[inline]


@@ -2,7 +2,7 @@
 #![allow(clippy::type_complexity)]

 use futures::{
-    future::{Ready, ready},
+    future::{FutureExt, Ready, ready},
     stream::{
         All, Any, Filter, FilterMap, Fold, ForEach, Scan, SkipWhile, Stream, StreamExt, TakeWhile,
     },

@@ -16,7 +16,7 @@ use futures::{
 /// This interface is not necessarily complete; feel free to add as-needed.
 pub trait ReadyExt<Item>
 where
-    Self: Stream<Item = Item> + Send + Sized,
+    Self: Stream<Item = Item> + Sized,
 {
     fn ready_all<F>(self, f: F) -> All<Self, Ready<bool>, impl FnMut(Item) -> Ready<bool>>
     where

@@ -26,6 +26,12 @@ where
     where
         F: Fn(Item) -> bool;

+    fn ready_find<'a, F>(self, f: F) -> impl Future<Output = Option<Item>> + Send
+    where
+        Self: Send + Unpin + 'a,
+        F: Fn(&Item) -> bool + Send + 'a,
+        Item: Send;
+
     fn ready_filter<'a, F>(
         self,
         f: F,

@@ -93,7 +99,7 @@ where
 impl<Item, S> ReadyExt<Item> for S
 where
-    S: Stream<Item = Item> + Send + Sized,
+    S: Stream<Item = Item> + Sized,
 {
     #[inline]
     fn ready_all<F>(self, f: F) -> All<Self, Ready<bool>, impl FnMut(Item) -> Ready<bool>>

@@ -111,6 +117,19 @@ where
         self.any(move |t| ready(f(t)))
     }

+    #[inline]
+    fn ready_find<'a, F>(self, f: F) -> impl Future<Output = Option<Item>> + Send
+    where
+        Self: Send + Unpin + 'a,
+        F: Fn(&Item) -> bool + Send + 'a,
+        Item: Send,
+    {
+        self.ready_filter(f)
+            .take(1)
+            .into_future()
+            .map(|(curr, _next)| curr)
+    }
+
     #[inline]
     fn ready_filter<'a, F>(
         self,
         f: F,
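The new `ready_find` returns the first stream item matching a synchronous predicate, built from `ready_filter`, `take(1)`, and `into_future`. An equivalent expressed with plain `futures` adapters:

    use futures::{StreamExt, executor::block_on, stream};

    fn main() {
        let first_even = block_on(async {
            stream::iter([1, 3, 4, 5])
                // ready_find(f) ~ filter with an already-ready predicate...
                .filter(|n| std::future::ready(n % 2 == 0))
                // ...then take the first hit.
                .next()
                .await
        });
        assert_eq!(first_even, Some(4));
    }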


@@ -13,8 +13,8 @@ use crate::Result;
 /// This interface is not necessarily complete; feel free to add as-needed.
 pub trait TryReadyExt<T, E, S>
 where
-    S: TryStream<Ok = T, Error = E, Item = Result<T, E>> + Send + ?Sized,
-    Self: TryStream + Send + Sized,
+    S: TryStream<Ok = T, Error = E, Item = Result<T, E>> + ?Sized,
+    Self: TryStream + Sized,
 {
     fn ready_and_then<U, F>(
         self,

@@ -67,8 +67,8 @@ where
 impl<T, E, S> TryReadyExt<T, E, S> for S
 where
-    S: TryStream<Ok = T, Error = E, Item = Result<T, E>> + Send + ?Sized,
-    Self: TryStream + Send + Sized,
+    S: TryStream<Ok = T, Error = E, Item = Result<T, E>> + ?Sized,
+    Self: TryStream + Sized,
 {
     #[inline]
     fn ready_and_then<U, F>(


@@ -8,8 +8,8 @@ use crate::Result;
 /// TryStreamTools
 pub trait TryTools<T, E, S>
 where
-    S: TryStream<Ok = T, Error = E, Item = Result<T, E>> + Send + ?Sized,
-    Self: TryStream + Send + Sized,
+    S: TryStream<Ok = T, Error = E, Item = Result<T, E>> + ?Sized,
+    Self: TryStream + Sized,
 {
     fn try_take(
         self,

@@ -23,8 +23,8 @@ where
 impl<T, E, S> TryTools<T, E, S> for S
 where
-    S: TryStream<Ok = T, Error = E, Item = Result<T, E>> + Send + ?Sized,
-    Self: TryStream + Send + Sized,
+    S: TryStream<Ok = T, Error = E, Item = Result<T, E>> + ?Sized,
+    Self: TryStream + Sized,
 {
     #[inline]
     fn try_take(


@@ -14,6 +14,7 @@ pub const EMPTY: &str = "";
 /// returned otherwise the input (i.e. &'static str) is returned. If multiple
 /// arguments are provided the first is assumed to be a format string.
 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! format_maybe {
     ($s:literal $(,)?) => {
         if $crate::is_format!($s) { std::format!($s).into() } else { $s.into() }

@@ -27,6 +28,7 @@ macro_rules! format_maybe {
 /// Constant expression to decide if a literal is a format string. Note: could
 /// use some improvement.
 #[macro_export]
+#[collapse_debuginfo(yes)]
 macro_rules! is_format {
     ($s:literal) => {
         ::const_str::contains!($s, "{") && ::const_str::contains!($s, "}")


@@ -117,7 +117,7 @@ pub fn name_from_path(path: &Path) -> Result<String> {

 /// Get the (major, minor) of the block device on which Path is mounted.
 #[allow(clippy::useless_conversion, clippy::unnecessary_fallible_conversions)]
-pub fn dev_from_path(path: &Path) -> Result<(dev_t, dev_t)> {
+fn dev_from_path(path: &Path) -> Result<(dev_t, dev_t)> {
     #[cfg(target_family = "unix")]
     use std::os::unix::fs::MetadataExt;


@@ -17,19 +17,31 @@ crate-type = [
 ]

 [features]
-release_max_log_level = [
-    "tracing/max_level_trace",
-    "tracing/release_max_level_info",
-    "log/max_level_trace",
-    "log/release_max_level_info",
-]
-jemalloc = [
-    "rust-rocksdb/jemalloc",
-]
 io_uring = [
     "rust-rocksdb/io-uring",
 ]
+jemalloc = [
+    "conduwuit-core/jemalloc",
+    "rust-rocksdb/jemalloc",
+]
+jemalloc_conf = [
+    "conduwuit-core/jemalloc_conf",
+]
+jemalloc_prof = [
+    "conduwuit-core/jemalloc_prof",
+]
+jemalloc_stats = [
+    "conduwuit-core/jemalloc_stats",
+]
+release_max_log_level = [
+    "conduwuit-core/release_max_log_level",
+    "log/max_level_trace",
+    "log/release_max_level_info",
+    "tracing/max_level_trace",
+    "tracing/release_max_level_info",
+]
 zstd_compression = [
+    "conduwuit-core/zstd_compression",
     "rust-rocksdb/zstd",
 ]


@@ -1,24 +1,16 @@
-use std::fmt::Write;
+use std::{ffi::OsString, path::PathBuf};

-use conduwuit::{Result, error, implement, info, utils::time::rfc2822_from_seconds, warn};
+use conduwuit::{Err, Result, error, implement, info, utils::time::rfc2822_from_seconds, warn};
 use rocksdb::backup::{BackupEngine, BackupEngineOptions};

 use super::Engine;
-use crate::{or_else, util::map_err};
+use crate::util::map_err;

 #[implement(Engine)]
 #[tracing::instrument(skip(self))]
 pub fn backup(&self) -> Result {
-    let server = &self.ctx.server;
-    let config = &server.config;
-    let path = config.database_backup_path.as_ref();
-    if path.is_none() || path.is_some_and(|path| path.as_os_str().is_empty()) {
-        return Ok(());
-    }
-
-    let options =
-        BackupEngineOptions::new(path.expect("valid database backup path")).map_err(map_err)?;
-    let mut engine = BackupEngine::open(&options, &*self.ctx.env.lock()?).map_err(map_err)?;
+    let mut engine = self.backup_engine()?;
+    let config = &self.ctx.server.config;
     if config.database_backups_to_keep > 0 {
         let flush = !self.is_read_only();
         engine

@@ -40,34 +32,62 @@ pub fn backup(&self) -> Result {
         }
     }

+    if config.database_backups_to_keep == 0 {
+        warn!("Configuration item `database_backups_to_keep` is set to 0.");
+    }
+
     Ok(())
 }

 #[implement(Engine)]
-pub fn backup_list(&self) -> Result<String> {
-    let server = &self.ctx.server;
-    let config = &server.config;
-    let path = config.database_backup_path.as_ref();
-    if path.is_none() || path.is_some_and(|path| path.as_os_str().is_empty()) {
-        return Ok("Configure database_backup_path to enable backups, or the path specified is \
-                   not valid"
-            .to_owned());
-    }
+pub fn backup_list(&self) -> Result<impl Iterator<Item = String> + Send> {
+    let info = self.backup_engine()?.get_backup_info();

-    let mut res = String::new();
-    let options =
-        BackupEngineOptions::new(path.expect("valid database backup path")).or_else(or_else)?;
-    let engine = BackupEngine::open(&options, &*self.ctx.env.lock()?).or_else(or_else)?;
-    for info in engine.get_backup_info() {
-        writeln!(
-            res,
+    if info.is_empty() {
+        return Err!("No backups found.");
+    }
+
+    let list = info.into_iter().map(|info| {
+        format!(
             "#{} {}: {} bytes, {} files",
             info.backup_id,
             rfc2822_from_seconds(info.timestamp),
             info.size,
             info.num_files,
-        )?;
-    }
+        )
+    });
+
+    Ok(list)
+}
+
+#[implement(Engine)]
+pub fn backup_count(&self) -> Result<usize> {
+    let info = self.backup_engine()?.get_backup_info();
+
+    Ok(info.len())
+}
+
+#[implement(Engine)]
+fn backup_engine(&self) -> Result<BackupEngine> {
+    let path = self.backup_path()?;
+    let options = BackupEngineOptions::new(path).map_err(map_err)?;
+    BackupEngine::open(&options, &*self.ctx.env.lock()?).map_err(map_err)
+}
+
+#[implement(Engine)]
+fn backup_path(&self) -> Result<OsString> {
+    let path = self
+        .ctx
+        .server
+        .config
+        .database_backup_path
+        .clone()
+        .map(PathBuf::into_os_string)
+        .unwrap_or_default();
+
+    if path.is_empty() {
+        return Err!(Config("database_backup_path", "Configure path to enable backups"));
+    }

-    Ok(res)
+    Ok(path)
 }
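`backup_list` now returns a lazy iterator of formatted lines instead of one accumulated `String`, with engine construction factored out into `backup_engine`/`backup_path`. A free-standing sketch of the same list shape over rust-rocksdb's `BackupEngineInfo` (an assumption that the type is importable as below; the raw timestamp stands in for the crate's `rfc2822_from_seconds`):

    use rocksdb::backup::BackupEngineInfo;

    // Hypothetical helper mirroring the new backup_list body.
    fn format_backups(info: Vec<BackupEngineInfo>) -> impl Iterator<Item = String> {
        info.into_iter().map(|b| {
            format!("#{} @{}: {} bytes, {} files", b.backup_id, b.timestamp, b.size, b.num_files)
        })
    }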


@@ -8,7 +8,7 @@ use crate::{Result, utils::camel_to_snake_string};
 pub(super) fn command(mut item: ItemFn, _args: &[Meta]) -> Result<TokenStream> {
     let attr: Attribute = parse_quote! {
-        #[conduwuit_macros::implement(crate::Command, params = "<'_>")]
+        #[conduwuit_macros::implement(crate::Context, params = "<'_>")]
     };

     item.attrs.push(attr);

@@ -19,15 +19,16 @@ pub(super) fn command_dispatch(item: ItemEnum, _args: &[Meta]) -> Result<TokenSt
     let name = &item.ident;
     let arm: Vec<TokenStream2> = item.variants.iter().map(dispatch_arm).try_collect()?;
     let switch = quote! {
+        #[allow(clippy::large_stack_frames)] //TODO: fixme
         pub(super) async fn process(
             command: #name,
-            context: &crate::Command<'_>
+            context: &crate::Context<'_>
         ) -> Result {
             use #name::*;
             #[allow(non_snake_case)]
-            Ok(match command {
+            match command {
                 #( #arm )*
-            })
+            }
         }
     };

@@ -47,8 +48,7 @@ fn dispatch_arm(v: &Variant) -> Result<TokenStream2> {
                 let arg = field.clone();
                 quote! {
                     #name { #( #field ),* } => {
-                        let c = Box::pin(context.#handler(#( #arg ),*)).await?;
-                        Box::pin(context.write_str(c.body())).await?;
+                        Box::pin(context.#handler(#( #arg ),*)).await
                     },
                 }
             },

@@ -58,15 +58,14 @@ fn dispatch_arm(v: &Variant) -> Result<TokenStream2> {
             };
             quote! {
                 #name ( #field ) => {
-                    Box::pin(#handler::process(#field, context)).await?;
+                    Box::pin(#handler::process(#field, context)).await
                 }
             }
         },
         | Fields::Unit => {
             quote! {
                 #name => {
-                    let c = Box::pin(context.#handler()).await?;
-                    Box::pin(context.write_str(c.body())).await?;
+                    Box::pin(context.#handler()).await
                 },
             }
         },


@@ -70,6 +70,7 @@ element_hacks = [
 ]
 gzip_compression = [
     "conduwuit-api/gzip_compression",
+    "conduwuit-core/gzip_compression",
     "conduwuit-router/gzip_compression",
     "conduwuit-service/gzip_compression",
 ]

@@ -141,6 +142,7 @@ zstd_compression = [
     "conduwuit-core/zstd_compression",
     "conduwuit-database/zstd_compression",
     "conduwuit-router/zstd_compression",
+    "conduwuit-service/zstd_compression",
 ]
 conduwuit_mods = [
     "conduwuit-core/conduwuit_mods",


@@ -17,34 +17,79 @@ crate-type = [
 ]

 [features]
+brotli_compression = [
+    "conduwuit-admin/brotli_compression",
+    "conduwuit-api/brotli_compression",
+    "conduwuit-core/brotli_compression",
+    "conduwuit-service/brotli_compression",
+    "tower-http/compression-br",
+]
+direct_tls = [
+    "axum-server/tls-rustls",
+    "dep:rustls",
+    "dep:axum-server-dual-protocol",
+]
+gzip_compression = [
+    "conduwuit-admin/gzip_compression",
+    "conduwuit-api/gzip_compression",
+    "conduwuit-core/gzip_compression",
+    "conduwuit-service/gzip_compression",
+    "tower-http/compression-gzip",
+]
+io_uring = [
+    "conduwuit-admin/io_uring",
+    "conduwuit-api/io_uring",
+    "conduwuit-service/io_uring",
+    "conduwuit-api/io_uring",
+]
+jemalloc = [
+    "conduwuit-admin/jemalloc",
+    "conduwuit-api/jemalloc",
+    "conduwuit-core/jemalloc",
+    "conduwuit-service/jemalloc",
+]
+jemalloc_conf = [
+    "conduwuit-admin/jemalloc_conf",
+    "conduwuit-api/jemalloc_conf",
+    "conduwuit-core/jemalloc_conf",
+    "conduwuit-service/jemalloc_conf",
+]
+jemalloc_prof = [
+    "conduwuit-admin/jemalloc_prof",
+    "conduwuit-api/jemalloc_prof",
+    "conduwuit-core/jemalloc_prof",
+    "conduwuit-service/jemalloc_prof",
+]
+jemalloc_stats = [
+    "conduwuit-admin/jemalloc_stats",
+    "conduwuit-api/jemalloc_stats",
+    "conduwuit-core/jemalloc_stats",
+    "conduwuit-service/jemalloc_stats",
+]
 release_max_log_level = [
+    "conduwuit-admin/release_max_log_level",
+    "conduwuit-api/release_max_log_level",
+    "conduwuit-core/release_max_log_level",
+    "conduwuit-service/release_max_log_level",
     "tracing/max_level_trace",
     "tracing/release_max_level_info",
     "log/max_level_trace",
     "log/release_max_level_info",
 ]
 sentry_telemetry = [
+    "conduwuit-core/sentry_telemetry",
     "dep:sentry",
     "dep:sentry-tracing",
     "dep:sentry-tower",
 ]
-zstd_compression = [
-    "tower-http/compression-zstd",
-]
-gzip_compression = [
-    "tower-http/compression-gzip",
-]
-brotli_compression = [
-    "tower-http/compression-br",
-]
 systemd = [
     "dep:sd-notify",
 ]
-
-direct_tls = [
-    "axum-server/tls-rustls",
-    "dep:rustls",
-    "dep:axum-server-dual-protocol",
-]
+zstd_compression = [
+    "conduwuit-api/zstd_compression",
+    "conduwuit-core/zstd_compression",
+    "conduwuit-service/zstd_compression",
+    "tower-http/compression-zstd",
+]

 [dependencies]

View file

@@ -31,12 +31,14 @@ pub(super) async fn serve(
         .install_default()
         .expect("failed to initialise aws-lc-rs rustls crypto provider");

-    debug!("Using direct TLS. Certificate path {certs} and certificate private key path {key}",);
     info!(
         "Note: It is strongly recommended that you use a reverse proxy instead of running \
         conduwuit directly with TLS."
     );
+    debug!("Using direct TLS. Certificate path {certs} and certificate private key path {key}",);

-    let conf = RustlsConfig::from_pem_file(certs, key).await?;
+    let conf = RustlsConfig::from_pem_file(certs, key)
+        .await
+        .map_err(|e| err!(Config("tls", "Failed to load certificates or key: {e}")))?;

     let mut join_set = JoinSet::new();
     let app = app.into_make_service_with_connect_info::<SocketAddr>();


@@ -17,7 +17,12 @@ crate-type = [
 ]

 [features]
+blurhashing = [
+    "dep:image",
+    "dep:blurhash",
+]
 brotli_compression = [
+    "conduwuit-core/brotli_compression",
     "reqwest/brotli",
 ]
 console = [

@@ -26,25 +31,48 @@ console = [
 ]
 element_hacks = []
 gzip_compression = [
+    "conduwuit-core/gzip_compression",
     "reqwest/gzip",
 ]
+io_uring = [
+    "conduwuit-database/io_uring",
+]
+jemalloc = [
+    "conduwuit-core/jemalloc",
+    "conduwuit-database/jemalloc",
+]
+jemalloc_conf = [
+    "conduwuit-core/jemalloc_conf",
+    "conduwuit-database/jemalloc_conf",
+]
+jemalloc_prof = [
+    "conduwuit-core/jemalloc_prof",
+    "conduwuit-database/jemalloc_prof",
+]
+jemalloc_stats = [
+    "conduwuit-core/jemalloc_stats",
+    "conduwuit-database/jemalloc_stats",
+]
 media_thumbnail = [
     "dep:image",
 ]
 release_max_log_level = [
-    "tracing/max_level_trace",
-    "tracing/release_max_level_info",
+    "conduwuit-core/release_max_log_level",
+    "conduwuit-database/release_max_log_level",
     "log/max_level_trace",
     "log/release_max_level_info",
+    "tracing/max_level_trace",
+    "tracing/release_max_level_info",
 ]
 url_preview = [
     "dep:image",
     "dep:webpage",
 ]
 zstd_compression = [
+    "conduwuit-core/zstd_compression",
+    "conduwuit-database/zstd_compression",
     "reqwest/zstd",
 ]
-blurhashing = ["dep:image","dep:blurhash"]

 [dependencies]
 async-trait.workspace = true


@@ -1,6 +1,7 @@
 use std::collections::BTreeMap;

 use conduwuit::{Result, pdu::PduBuilder};
+use futures::FutureExt;
 use ruma::{
     RoomId, RoomVersionId,
     events::room::{

@@ -63,6 +64,7 @@ pub async fn create_admin_room(services: &Services) -> Result {
             &room_id,
             &state_lock,
         )
+        .boxed()
         .await?;

     // 2. Make server user/bot join

@@ -78,6 +80,7 @@ pub async fn create_admin_room(services: &Services) -> Result {
             &room_id,
             &state_lock,
         )
+        .boxed()
         .await?;

     // 3. Power levels

@@ -95,6 +98,7 @@ pub async fn create_admin_room(services: &Services) -> Result {
             &room_id,
             &state_lock,
         )
+        .boxed()
         .await?;

     // 4.1 Join Rules

@@ -107,6 +111,7 @@ pub async fn create_admin_room(services: &Services) -> Result {
             &room_id,
             &state_lock,
         )
+        .boxed()
         .await?;

     // 4.2 History Visibility

@@ -122,6 +127,7 @@ pub async fn create_admin_room(services: &Services) -> Result {
             &room_id,
             &state_lock,
         )
+        .boxed()
         .await?;

     // 4.3 Guest Access

@@ -137,6 +143,7 @@ pub async fn create_admin_room(services: &Services) -> Result {
             &room_id,
             &state_lock,
         )
+        .boxed()
         .await?;

     // 5. Events implied by name and topic

@@ -150,6 +157,7 @@ pub async fn create_admin_room(services: &Services) -> Result {
             &room_id,
             &state_lock,
         )
+        .boxed()
         .await?;

     services

@@ -163,6 +171,7 @@ pub async fn create_admin_room(services: &Services) -> Result {
             &room_id,
             &state_lock,
         )
+        .boxed()
         .await?;

     // 6. Room alias

@@ -180,6 +189,7 @@ pub async fn create_admin_room(services: &Services) -> Result {
             &room_id,
             &state_lock,
         )
+        .boxed()
         .await?;

     services

@@ -197,6 +207,7 @@ pub async fn create_admin_room(services: &Services) -> Result {
             &room_id,
             &state_lock,
         )
+        .boxed()
         .await?;

     Ok(())


@@ -1,20 +1,20 @@
 mod namespace_regex;
 mod registration_info;

-use std::{collections::BTreeMap, sync::Arc};
+use std::{collections::BTreeMap, iter::IntoIterator, sync::Arc};

 use async_trait::async_trait;
-use conduwuit::{Result, err, utils::stream::TryIgnore};
+use conduwuit::{Result, err, utils::stream::IterStream};
 use database::Map;
-use futures::{Future, StreamExt, TryStreamExt};
+use futures::{Future, FutureExt, Stream, TryStreamExt};
 use ruma::{RoomAliasId, RoomId, UserId, api::appservice::Registration};
-use tokio::sync::RwLock;
+use tokio::sync::{RwLock, RwLockReadGuard};

 pub use self::{namespace_regex::NamespaceRegex, registration_info::RegistrationInfo};
 use crate::{Dep, sending};

 pub struct Service {
-    registration_info: RwLock<BTreeMap<String, RegistrationInfo>>,
+    registration_info: RwLock<Registrations>,
     services: Services,
     db: Data,
 }

@@ -27,6 +27,8 @@ struct Data {
     id_appserviceregistrations: Arc<Map>,
 }

+type Registrations = BTreeMap<String, RegistrationInfo>;
+
 #[async_trait]
 impl crate::Service for Service {
     fn build(args: crate::Args<'_>) -> Result<Arc<Self>> {

@@ -41,19 +43,18 @@ impl crate::Service for Service {
         }))
     }

-    async fn worker(self: Arc<Self>) -> Result<()> {
+    async fn worker(self: Arc<Self>) -> Result {
         // Inserting registrations into cache
-        for appservice in self.iter_db_ids().await? {
-            self.registration_info.write().await.insert(
-                appservice.0,
-                appservice
-                    .1
-                    .try_into()
-                    .expect("Should be validated on registration"),
-            );
-        }
+        self.iter_db_ids()
+            .try_for_each(async |appservice| {
+                self.registration_info
+                    .write()
+                    .await
+                    .insert(appservice.0, appservice.1.try_into()?);

-        Ok(())
+                Ok(())
+            })
+            .await
     }

     fn name(&self) -> &str { crate::service::make_name(std::module_path!()) }

@@ -84,7 +85,7 @@ impl Service {
     /// # Arguments
     ///
     /// * `service_name` - the registration ID of the appservice
-    pub async fn unregister_appservice(&self, appservice_id: &str) -> Result<()> {
+    pub async fn unregister_appservice(&self, appservice_id: &str) -> Result {
         // removes the appservice registration info
         self.registration_info
             .write()

@@ -112,15 +113,6 @@ impl Service {
             .map(|info| info.registration)
     }

-    pub async fn iter_ids(&self) -> Vec<String> {
-        self.registration_info
-            .read()
-            .await
-            .keys()
-            .cloned()
-            .collect()
-    }
-
     pub async fn find_from_token(&self, token: &str) -> Option<RegistrationInfo> {
         self.read()
             .await

@@ -156,15 +148,22 @@ impl Service {
             .any(|info| info.rooms.is_exclusive_match(room_id.as_str()))
     }

-    pub fn read(
-        &self,
-    ) -> impl Future<Output = tokio::sync::RwLockReadGuard<'_, BTreeMap<String, RegistrationInfo>>>
-    {
-        self.registration_info.read()
+    pub fn iter_ids(&self) -> impl Stream<Item = String> + Send {
+        self.read()
+            .map(|info| info.keys().cloned().collect::<Vec<_>>())
+            .map(IntoIterator::into_iter)
+            .map(IterStream::stream)
+            .flatten_stream()
     }

-    #[inline]
-    pub async fn all(&self) -> Result<Vec<(String, Registration)>> { self.iter_db_ids().await }
+    pub fn iter_db_ids(&self) -> impl Stream<Item = Result<(String, Registration)>> + Send {
+        self.db
+            .id_appserviceregistrations
+            .keys()
+            .and_then(move |id: &str| async move {
+                Ok((id.to_owned(), self.get_db_registration(id).await?))
+            })
+    }

     pub async fn get_db_registration(&self, id: &str) -> Result<Registration> {
         self.db

@@ -175,16 +174,7 @@ impl Service {
             .map_err(|e| err!(Database("Invalid appservice {id:?} registration: {e:?}")))
     }

-    async fn iter_db_ids(&self) -> Result<Vec<(String, Registration)>> {
-        self.db
-            .id_appserviceregistrations
-            .keys()
-            .ignore_err()
-            .then(|id: String| async move {
-                let reg = self.get_db_registration(&id).await?;
-                Ok((id, reg))
-            })
-            .try_collect()
-            .await
+    pub fn read(&self) -> impl Future<Output = RwLockReadGuard<'_, Registrations>> + Send {
+        self.registration_info.read()
     }
 }
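`iter_ids` now yields a `Stream` of IDs: it snapshots the keys under the read lock into a `Vec`, then streams the collected list, so callers consume it with stream adapters instead of awaiting a `Vec`. A toy consumer under that assumption:

    use futures::{StreamExt, executor::block_on, stream};

    fn main() {
        // Stand-in for services.appservice.iter_ids():
        let ids = stream::iter(vec!["bridge_a".to_owned(), "bridge_b".to_owned()]);
        let all: Vec<String> = block_on(ids.collect());
        assert_eq!(all, ["bridge_a", "bridge_b"]);
    }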


@@ -72,10 +72,4 @@ impl Data {
     pub fn bump_database_version(&self, new_version: u64) {
         self.global.raw_put(b"version", new_version);
     }
-
-    #[inline]
-    pub fn backup(&self) -> Result { self.db.db.backup() }
-
-    #[inline]
-    pub fn backup_list(&self) -> Result<String> { self.db.db.backup_list() }
 }


@@ -127,8 +127,6 @@ impl Service {
         &self.server.config.new_user_displayname_suffix
     }

-    pub fn allow_check_for_updates(&self) -> bool { self.server.config.allow_check_for_updates }
-
     pub fn trusted_servers(&self) -> &[OwnedServerName] { &self.server.config.trusted_servers }

     pub fn turn_password(&self) -> &String { &self.server.config.turn_password }

Some files were not shown because too many files have changed in this diff.