Server Side Caching not working as intended #483

Open
opened 2 months ago by danner26 · 9 comments

I found that when in-memory caching is enabled on the server, task labels become duplicated. When you create a label, navigate away, and then come back, the label is duplicated. See these two screenshots:

![image](/attachments/485426f8-6f14-4205-956f-404ea8c0af0f)
![image](/attachments/e62a9dcb-ccaa-4454-9ad0-79b9859920da)

Here is my config.yml:

```yaml
service:
    JWTSecret: REDACTED_512_SECRET
    frontendurl: REDACTED
    motd: REDACTED
    timezone: EST
    enableregistration: false
cache:
    enabled: true
    type: keyvalue
cors:
    enable: true
    origins: REDACTED
    maxage: 300
mailer:
    enabled: true
    host: REDACTED
    port: REDACTED
    username: REDACTED
    password: REDACTED
    fromemail: REDACTED
    forcessl: yes
keyvalue:
    type: memory
metrics:
    enabled: true
    username: REDACTED
    password: REDACTED
```

The first response from the server on the list page (before duplication) looks like the following code snippet. This is the request that shows a valid list/label group.

```
[{"id":7,"title":"test","description":"","done":false,"done_at":"0001-01-01T00:00:00Z","due_date":"0001-01-01T00:00:00Z","reminder_dates":null,"list_id":2,"repeat_after":0,"repeat_mode":0,"priority":0,"start_date":"0001-01-01T00:00:00Z","end_date":"0001-01-01T00:00:00Z","assignees":null,"labels":[{"id":4,"title":"test","description":"","hex_color":"e8e8e8","created_by":{"id":1,"name":"REDACTED_NAME","username":"REDACTED_USERNAME","created":"2021-04-15T13:27:38-05:00","updated":"2021-04-15T13:55:58-05:00"},"created":"2021-04-15T15:54:19-05:00","updated":"2021-04-15T15:54:19-05:00"}],"hex_color":"198cff","percent_done":0,"identifier":"-1","index":1,"related_tasks":{},"attachments":null,"is_favorite":false,"created":"2021-04-15T15:54:10-05:00","updated":"2021-04-15T15:54:20-05:00","bucket_id":2,"position":65536,"created_by":{"id":1,"name":"REDACTED_NAME","username":"REDACTED_USERNAME","created":"2021-04-15T13:27:38-05:00","updated":"2021-04-15T13:55:58-05:00"}}]
```

On the second request (after duplication), this is what we receive:

```
[{"id":7,"title":"test","description":"","done":false,"done_at":"0001-01-01T00:00:00Z","due_date":"0001-01-01T00:00:00Z","reminder_dates":null,"list_id":2,"repeat_after":0,"repeat_mode":0,"priority":0,"start_date":"0001-01-01T00:00:00Z","end_date":"0001-01-01T00:00:00Z","assignees":null,"labels":[{"id":4,"title":"test","description":"","hex_color":"e8e8e8","created_by":{"id":1,"name":"REDACTED_NAME","username":"REDACTED_USERNAME","created":"2021-04-15T13:27:38-05:00","updated":"2021-04-15T13:55:58-05:00"},"created":"2021-04-15T15:54:19-05:00","updated":"2021-04-15T15:54:19-05:00"},{"id":4,"title":"test","description":"","hex_color":"e8e8e8","created_by":{"id":1,"name":"REDACTED_NAME","username":"REDACTED_USERNAME","created":"2021-04-15T13:27:38-05:00","updated":"2021-04-15T13:55:58-05:00"},"created":"2021-04-15T15:54:19-05:00","updated":"2021-04-15T15:54:19-05:00"},{"id":4,"title":"test","description":"","hex_color":"e8e8e8","created_by":{"id":1,"name":"REDACTED_NAME","username":"REDACTED_USERNAME","created":"2021-04-15T13:27:38-05:00","updated":"2021-04-15T13:55:58-05:00"},"created":"2021-04-15T15:54:19-05:00","updated":"2021-04-15T15:54:19-05:00"}],"hex_color":"198cff","percent_done":0,"identifier":"-1","index":1,"related_tasks":{},"attachments":null,"is_favorite":false,"created":"2021-04-15T15:54:10-05:00","updated":"2021-04-15T15:54:20-05:00","bucket_id":2,"position":65536,"created_by":{"id":1,"name":"REDACTED_NAME","username":"REDACTED_USERNAME","created":"2021-04-15T13:27:38-05:00","updated":"2021-04-15T13:55:58-05:00"}}]
```

Disabling the cache in config.yml resolves the issue completely, even after multiple page changes/reloads (as expected):

```
[{"id":7,"title":"test","description":"","done":false,"done_at":"0001-01-01T00:00:00Z","due_date":"0001-01-01T00:00:00Z","reminder_dates":null,"list_id":2,"repeat_after":0,"repeat_mode":0,"priority":0,"start_date":"0001-01-01T00:00:00Z","end_date":"0001-01-01T00:00:00Z","assignees":null,"labels":[{"id":4,"title":"test","description":"","hex_color":"e8e8e8","created_by":{"id":1,"name":"REDACTED_NAME","username":"REDACTED_USERNAME","created":"2021-04-15T13:27:38-05:00","updated":"2021-04-15T13:55:58-05:00"},"created":"2021-04-15T15:54:19-05:00","updated":"2021-04-15T15:54:19-05:00"}],"hex_color":"198cff","percent_done":0,"identifier":"-1","index":1,"related_tasks":{},"attachments":null,"is_favorite":false,"created":"2021-04-15T15:54:10-05:00","updated":"2021-04-15T15:54:20-05:00","bucket_id":2,"position":65536,"created_by":{"id":1,"name":"REDACTED_NAME","username":"REDACTED_USERNAME","created":"2021-04-15T13:27:38-05:00","updated":"2021-04-15T13:55:58-05:00"}}]
```

It appears that the server-side cache is keeping multiple records of the same data, appending instead of updating or replacing them. I am not sure whether this is a misconfiguration on my part. I am running the backend and frontend with docker, behind a reverse proxy with the proper nginx rules set up.
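The symptom (the same label added once per page load) would be consistent with a cache that hands back the same task struct on every hit while the label-loading step appends to it. A minimal sketch of that pattern — all type, field, and function names here are illustrative, not Vikunja's actual code:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the real models.
type Label struct{ ID int64 }
type Task struct {
	ID     int64
	Labels []Label
}

// A naive in-memory cache that stores and returns the same *Task pointer.
var cache = map[int64]*Task{}

func getTask(id int64) *Task {
	if t, ok := cache[id]; ok {
		return t // same pointer on every cache hit
	}
	t := &Task{ID: id}
	cache[id] = t
	return t
}

// addLabels simulates a read path that enriches a task with its labels
// by appending. On a cached pointer this appends again on every request.
func addLabels(t *Task) {
	t.Labels = append(t.Labels, Label{ID: 4})
}

func main() {
	for i := 0; i < 3; i++ {
		addLabels(getTask(7))
	}
	// The cached task accumulates one copy of the label per request.
	fmt.Println(len(cache[7].Labels)) // 3
}
```

If something like this is happening inside the ORM cache layer, the fix would be to either deep-copy beans on cache reads or reset the enriched fields before appending.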

danner26 changed title from Service Worker Caching & Config Caching Enabled to Server Side Caching not working as intended 2 months ago
Owner

The caching is provided by the ORM layer directly, so this may actually be a bug in there somewhere.

Could you try redis to see if it makes any difference?
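For reference, switching the keyvalue backend to redis should only require a small config change along these lines — this is a sketch based on the config docs, so double-check the exact keys and fill in the host/password for your setup:

```yaml
cache:
    enabled: true
    type: keyvalue
keyvalue:
    type: redis
redis:
    enabled: true
    # hostname:port of the redis container, reachable from the api container
    host: "redis:6379"
    password: ""
    db: 0
```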

In general, I found the caching nice to have, but the difference in speed is usually not noticeable at all. I'm running my instance and the demo one without it enabled and still see < 10ms response times for most requests.

Poster

I agree that caching this type of content at the ORM layer probably won't reduce load times much, especially since the assets are already cached by the service worker. Either way, the in-memory option is offered and not working as intended, so I wanted to create a ticket.

At the end of the day caching won't really be required for me, even though it would be nice. I will give redis a try and get back to you.

konrad added the
kind/bug
label 2 months ago
Poster

Looks like redis may have its own issues going on:

```
api_1       | 2021-04-15T21:37:52.570317971Z: ERROR     ▶ [DATABASE] 394 [redis_cacher] decode failed: gob: wrong type (models.RelatedTaskMap) for received field .RelatedTasks
frontend_1  | REDACTED_IP - - [15/Apr/2021:21:37:52 +0000] "GET /lists/2/list HTTP/1.0" 200 3650 "https://REDACTED_URL/lists/2/list" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36" "REDACTED_IP"
proxy_1     | REDACTED_IP - - [15/Apr/2021:21:37:52 +0000] "GET /lists/2/list HTTP/1.1" 200 3650 "https://REDACTED_URL/lists/2/list" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36" "REDACTED_IP"
api_1       | 2021-04-15T21:37:52.580304765Z: ERROR     ▶ [DATABASE] 3b6 [redis_cacher] decode failed: gob: wrong type (models.RelatedTaskMap) for received field .RelatedTasks
api_1       | 2021-04-15T21:37:52.580783348Z: WEB       ▶ REDACTED_IP  GET 200 /api/v1/notifications?page=1 6.764758ms - Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36
api_1       | 2021-04-15T21:37:52.582621163Z: ERROR     ▶ [DATABASE] 3be [redis_cacher] decode failed: gob: wrong type (models.RelatedTaskMap) for received field .RelatedTasks
proxy_1     | REDACTED_IP - - [15/Apr/2021:21:37:52 +0000] "GET /api/v1/notifications?page=1 HTTP/1.1" 200 565 "https://REDACTED_URL/lists/2/list" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36" "REDACTED_IP"
api_1       | 2021-04-15T21:37:52.592035285Z: ERROR     ▶ [DATABASE] 3e6 [redis_cacher] decode failed: gob: wrong type (models.RelatedTaskMap) for received field .RelatedTasks
api_1       | 2021-04-15T21:37:52.593809131Z: ERROR     ▶ [DATABASE] 3ee [redis_cacher] decode failed: gob: wrong type (models.RelatedTaskMap) for received field .RelatedTasks
api_1       | 2021-04-15T21:37:52.598745274Z: WEB       ▶ REDACTED_IP  GET 200 /api/v1/lists/2 21.18648ms - Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36
proxy_1     | REDACTED_IP - - [15/Apr/2021:21:37:52 +0000] "GET /api/v1/lists/2 HTTP/1.1" 200 375 "https://REDACTED_URL/lists/2/list" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36" "REDACTED_IP"
api_1       | 2021-04-15T21:37:52.604908091Z: ERROR     ▶ [DATABASE] 417 [redis_cacher] decode failed: gob: wrong type (models.RelatedTaskMap) for received field .RelatedTasks
```

It could be my implementation, but I don't think so: there is no authentication on this redis server, and it is a local docker container as well. Any ideas on that?

To make it a bit easier to read, this is the only error in that dump:

```
[redis_cacher] decode failed: gob: wrong type (models.RelatedTaskMap) for received field .RelatedTasks
```
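For context, that is the error `encoding/gob` reports when a value was encoded with one concrete type for a field and is later decoded into a struct whose same-named field has an incompatible type. A minimal, self-contained sketch of the failure mode — the types here are illustrative, not Vikunja's actual models:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

// The "cached" shape stores RelatedTasks as a map, while the decoder
// expects a slice; gob refuses to decode across that mismatch.
type taskAsCached struct {
	Title        string
	RelatedTasks map[string][]string
}

type taskAsDecoded struct {
	Title        string
	RelatedTasks []string
}

// decodeMismatch encodes the cached shape and decodes it into the other
// shape, producing a "wrong type ... for received field" error.
func decodeMismatch() error {
	var buf bytes.Buffer
	cached := taskAsCached{
		Title:        "test",
		RelatedTasks: map[string][]string{"subtask": {"a"}},
	}
	if err := gob.NewEncoder(&buf).Encode(cached); err != nil {
		return err
	}
	var decoded taskAsDecoded
	return gob.NewDecoder(&buf).Decode(&decoded)
}

func main() {
	fmt.Println(decodeMismatch())
}
```

So the log suggests the bytes stored in redis for `.RelatedTasks` don't match the type the cacher decodes into, rather than anything being wrong with the redis setup itself.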
Owner

AFAIU, even though this is an error, it should still cache things. I asked the xorm guys about this a while back, but they didn't have a good idea either. Not quite sure where to look.

Does the issue you originally reported still happen?

In the (far away) future I'd like to get rid of the ORM caching and cache things in Vikunja directly, but there's a lot of other stuff that is more important, so that won't happen any time in the next months.

Poster

This is a fresh install of redis; nothing else is using the service. Some more details from the redis side of things, starting with `keys *`:

```
127.0.0.1:6379> keys *
 1) "xorm:sql:lists:3027824172"
 2) "xorm:sql:saved_filters:3922608160"
 3) "xorm:bean:lists:\x10\xff\x81\x02\x01\x01\x02PK\x01\xff\x82\x00\x01\x10\x00\x00\x0e\xff\x82\x00\x01\x05int64\x04\x02\x00\x02"
 4) "xorm:bean:notifications:\x10\xff\x81\x02\x01\x01\x02PK\x01\xff\x82\x00\x01\x10\x00\x00\x0e\xff\x82\x00\x01\x05int64\x04\x02\x00\x02"
 5) "xorm:sql:task_reminders:2706106389"
 6) "xorm:sql:tasks:3795459004"
 7) "xorm:sql:subscriptions:1606035003"
 8) "xorm:sql:users:29866484"
 9) "xorm:sql:lists:2635114230"
10) "xorm:sql:users:2843061681"
11) "xorm:sql:task_relations:4191207804"
12) "xorm:sql:lists:1232550842"
13) "xorm:sql:users:516001826"
14) "xorm:sql:unsplash_photos:2702400236"
15) "xorm:bean:lists:\x10\xff\x81\x02\x01\x01\x02PK\x01\xff\x82\x00\x01\x10\x00\x00\x0e\xff\x82\x00\x01\x05int64\x04\x02\x00\x04"
16) "xorm:bean:users:\x10\xff\x81\x02\x01\x01\x02PK\x01\xff\x82\x00\x01\x10\x00\x00\x0e\xff\x82\x00\x01\x05int64\x04\x02\x00\x02"
17) "xorm:sql:users:4186005587"
18) "xorm:bean:tasks:\x10\xff\x81\x02\x01\x01\x02PK\x01\xff\x82\x00\x01\x10\x00\x00\x0e\xff\x82\x00\x01\x05int64\x04\x02\x00\x0e"
19) "xorm:sql:task_attachments:24396998"
20) "xorm:sql:notifications:2571868094"
```

and

```
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
(empty array)
127.0.0.1:6379[1]> select 3
OK
127.0.0.1:6379[3]> keys *
(empty array)
127.0.0.1:6379[3]> select 6
OK
127.0.0.1:6379[6]> keys *
(empty array)
127.0.0.1:6379[6]> select 9
OK
127.0.0.1:6379[9]> keys *
(empty array)
127.0.0.1:6379[9]> select 11
OK
127.0.0.1:6379[11]> keys *
(empty array)
127.0.0.1:6379[11]> select 12
OK
127.0.0.1:6379[12]> keys *
(empty array)
127.0.0.1:6379[12]> select 15
OK
127.0.0.1:6379[15]> keys *
(empty array)
```
Poster

The issue is still present with in-memory caching, and it appears that redis caching does not work, at least not in a dockerized setup like this. For now, disabling caching is the best route.
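For anyone landing here with the same symptom, the workaround is the single change to the config.yml posted above:

```yaml
cache:
    enabled: false
```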

I agree caching on the Vikunja side would be optimal, but at the end of the day there are of course more important tasks. I suppose this will be one of those issues that stays open for a while lol

Owner

Yeah I guess this will be open for a while (that is, unless you want to dive in and send a PR 🙂 )

In a best-case scenario this would get fixed in xorm without any work on our side.

Poster

If I get time to look into redis/the in-memory caching I will submit a PR... if I get anywhere. Between work and my master's program, development time has been severely reduced :(

That would be nice. I find it rarely happens though lol

Owner

> I find it rarely happens though lol

Well at least you were able to reproduce it, that's something.
