delete unnecessary real_users_cache, fix overwriting push_target iter, add proper function for getting local active users in room

this `real_users_cache` seems weird, and i have no idea what
prompted its creation upstream. perhaps they did this because
sqlite was very slow and their rocksdb setup was very poorly
tuned, so a "solution" was to keep each room's set of local
users in memory.
slow iterators, scanning, etc. do not apply to conduwuit, where
our rocksdb is extremely tuned, and i seriously doubt a cache
like this has any real-world net-positive performance impact.
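
for illustration only, here is a minimal self-contained sketch of the
read-through pattern a cache like this implies (simplified types and
hypothetical names, not the actual conduwuit code). the hazard it tries
to show: if nothing reliably evicts the entry when membership changes,
readers keep getting a stale set back.

    use std::{
        collections::{HashMap, HashSet},
        sync::{Arc, RwLock},
    };

    // simplified stand-ins for OwnedRoomId / OwnedUserId
    type RoomId = String;
    type UserId = String;

    /// read-through cache of "our real users" per room, roughly the shape
    /// of the removed `our_real_users_cache` field (hypothetical sketch).
    struct RealUsersCache {
        inner: RwLock<HashMap<RoomId, Arc<HashSet<UserId>>>>,
    }

    impl RealUsersCache {
        fn new() -> Self {
            Self { inner: RwLock::new(HashMap::new()) }
        }

        /// return the cached set, falling back to `load` (the database
        /// scan) on a miss. nothing here evicts the entry when a local
        /// user leaves the room, so later reads can be stale.
        fn get_or_load<F>(&self, room: &RoomId, load: F) -> Arc<HashSet<UserId>>
        where
            F: FnOnce() -> HashSet<UserId>,
        {
            if let Some(set) = self.inner.read().unwrap().get(room) {
                return Arc::clone(set);
            }
            let set = Arc::new(load());
            self.inner.write().unwrap().insert(room.clone(), Arc::clone(&set));
            set
        }
    }

    fn main() {
        let cache = RealUsersCache::new();
        let room = "!room:example.org".to_owned();

        // first read populates the cache from the (mock) database
        let before = cache.get_or_load(&room, || {
            HashSet::from(["@alice:example.org".to_owned()])
        });
        assert!(before.contains("@alice:example.org"));

        // @alice has since left the room in the database, but the entry
        // was never invalidated, so the stale set is still returned
        let after = cache.get_or_load(&room, HashSet::new);
        assert!(after.contains("@alice:example.org")); // stale hit
    }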

also for some reason, there is suspicious logic where we
overwrite the entire push target collection.
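
the exact call site isn't quoted here, but the general shape of an
overwrite-instead-of-extend bug looks like this hypothetical sketch
(simplified types, not the real push code):

    use std::collections::HashSet;

    // simplified stand-in for OwnedUserId
    type UserId = String;

    /// collect the users we should push for from several sources of
    /// targets. the suspicious shape replaces the whole collection on
    /// each iteration, so only the last source survives; accumulating
    /// into the existing set is the fix.
    fn collect_push_targets(sources: &[Vec<UserId>]) -> HashSet<UserId> {
        let mut push_targets: HashSet<UserId> = HashSet::new();
        for source in sources {
            // buggy: push_targets = source.iter().cloned().collect();
            // fixed: extend the existing set instead of overwriting it
            push_targets.extend(source.iter().cloned());
        }
        push_targets
    }

    fn main() {
        let sources = vec![
            vec!["@alice:example.org".to_owned()],
            vec!["@bob:example.org".to_owned()],
        ];
        // with the overwrite, only @bob would remain and @alice would
        // never receive a push
        assert_eq!(collect_push_targets(&sources).len(), 2);
    }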

both of these could be a cause of receiving
notifications in rooms we've left.
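
the replacement "proper function for getting local active users in room"
boils down to filtering the membership list directly instead of going
through a cache. a self-contained sketch with simplified types and
hypothetical names (the real implementation would use ruma's RoomId/UserId
and the state-cache service's member iterator):

    // simplified stand-in for a room member record
    struct Member {
        user_id: String,
        server_name: String,
        deactivated: bool,
    }

    /// return the local, active (non-deactivated) members of a room by
    /// filtering the membership list directly, with no cache in between.
    fn local_active_users_in_room<'a>(
        our_server_name: &'a str,
        members: impl Iterator<Item = &'a Member> + 'a,
    ) -> impl Iterator<Item = &'a Member> + 'a {
        members.filter(move |m| m.server_name == our_server_name && !m.deactivated)
    }

    fn main() {
        let members = vec![
            Member { user_id: "@alice:example.org".into(), server_name: "example.org".into(), deactivated: false },
            Member { user_id: "@bob:remote.host".into(), server_name: "remote.host".into(), deactivated: false },
            Member { user_id: "@gone:example.org".into(), server_name: "example.org".into(), deactivated: true },
        ];

        let local: Vec<_> = local_active_users_in_room("example.org", members.iter())
            .map(|m| m.user_id.as_str())
            .collect();
        assert_eq!(local, vec!["@alice:example.org"]);
    }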

Signed-off-by: strawberry <strawberry@puppygock.gay>
strawberry 2024-06-04 01:13:43 -04:00
parent c1227340b3
commit c738c119f8
6 changed files with 30 additions and 58 deletions


@@ -1,5 +1,5 @@
 use std::{
-    collections::{BTreeMap, HashMap, HashSet},
+    collections::{BTreeMap, HashMap},
     path::Path,
     sync::{Arc, Mutex, RwLock},
 };
@@ -150,7 +150,6 @@ pub struct KeyValueDatabase {
     pub senderkey_pusher: Arc<dyn KvTree>,

     pub auth_chain_cache: Mutex<LruCache<Vec<u64>, Arc<[u64]>>>,
-    pub our_real_users_cache: RwLock<HashMap<OwnedRoomId, Arc<HashSet<OwnedUserId>>>>,
     pub appservice_in_room_cache: RwLock<HashMap<OwnedRoomId, HashMap<String, bool>>>,
     pub lasttimelinecount_cache: Mutex<HashMap<OwnedRoomId, PduCount>>,
 }
@@ -265,7 +264,6 @@ impl KeyValueDatabase {
     auth_chain_cache: Mutex::new(LruCache::new(
         (f64::from(config.auth_chain_cache_capacity) * config.conduit_cache_capacity_modifier) as usize,
     )),
-    our_real_users_cache: RwLock::new(HashMap::new()),
     appservice_in_room_cache: RwLock::new(HashMap::new()),
     lasttimelinecount_cache: Mutex::new(HashMap::new()),
 })