Remove frontend container for new merged docker #29

Merged
konrad merged 14 commits from xeruf/helm-chart:merged-docker into main 2025-01-29 12:35:33 +00:00
Contributor

Currently testing on my cluster

Fixes #28

xeruf added 2 commits 2024-05-27 05:26:10 +00:00
Inspired by d9664199ab/bitnami/argo-workflows/templates/_helpers.tpl (L138)
Should still default to global values if set, so it is backwards compatible.

Fixes #26
Remove frontend container for new merged docker
Some checks failed
continuous-integration/drone/pr Build is failing
aa6d27adfb
Fixes #28
xeruf added 1 commit 2024-05-27 06:32:58 +00:00
Fix accidentally disabled api
All checks were successful
continuous-integration/drone/pr Build is passing
ddc098616b
Author
Contributor

It works! Doing further validation now...

Author
Contributor

Oddly, importing our data from 0.22.1 succeeds, but if I look inside Vikunja I cannot find it

Owner

> Oddly, importing our data from 0.22.1 succeeds, but if I look inside Vikunja I cannot find it

Does it exist in the database?
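One way to check (a sketch; the pod name and credentials are placeholders for your release) is to query the chart's PostgreSQL directly:

```sh
# Count rows in one of the imported tables straight from the database pod.
kubectl exec -it -n stackspout <postgresql-pod> -- \
  env PGPASSWORD=<password> psql -U vikunja -d vikunja -c "SELECT count(*) FROM tasks;"
```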

konrad reviewed 2024-05-31 13:30:03 +00:00
README.md Outdated
@@ -140,3 +145,3 @@
Anything you see [in bjw-s' `common` library](https://github.com/bjw-s/helm-charts/blob/a081de53024d8328d1ae9ff7e4f6bc500b0f3a29/charts/library/common/values.yaml),
including the top-level keys, can be added and subtracted from this chart's `values.yaml`,
underneath the `api`, `frontend`, and (optionally) `typesense` key.
underneath the `api` and (optionally) `typesense` key.
Owner

What do you think about renaming the `api` key? Since it's no longer only the api.

Author
Contributor

planning to rename it to `vikunja`, wanted to verify everything runs fine first

xeruf marked this conversation as resolved
Author
Contributor

> > Oddly, importing our data from 0.22.1 succeeds, but if I look inside Vikunja I cannot find it
>
> Does it exist in the database?

This is the output from my helper-script:

❯ stack vikunja-test restore vikunja-dump_2024-05-31.zip
...
> stack kube exec vikunja-test-api -it -- ./vikunja restore vikunja-dump_2024-05-31.zip

2024-06-01T12:17:18.547760877Z: INFO	▶ config/InitConfig 001 Using config file: /etc/vikunja/config.yml
2024-06-01T12:17:18.554872055Z: INFO	▶ [DATABASE] 002 [SQL] SELECT tablename FROM pg_tables WHERE schemaname = $1 [public] - 6.856814ms
2024-06-01T12:17:18.563904658Z: INFO	▶ [DATABASE] 003 [SQL] SELECT column_name, column_default, is_nullable, data_type, character_maximum_length, description,
    CASE WHEN p.contype = 'p' THEN true ELSE false END AS primarykey,
    CASE WHEN p.contype = 'u' THEN true ELSE false END AS uniquekey
FROM pg_attribute f
    JOIN pg_class c ON c.oid = f.attrelid JOIN pg_type t ON t.oid = f.atttypid
    LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum
    LEFT JOIN pg_description de ON f.attrelid=de.objoid AND f.attnum=de.objsubid
    LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
    LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
    LEFT JOIN pg_class AS g ON p.confrelid = g.oid
    LEFT JOIN INFORMATION_SCHEMA.COLUMNS s ON s.column_name=f.attname AND c.relname=s.table_name
WHERE n.nspname= s.table_schema AND c.relkind = 'r' AND c.relname = $1 AND s.table_schema = $2 AND f.attnum > 0 ORDER BY f.attnum; [migration public] - 8.95756ms
2024-06-01T12:17:18.565125881Z: INFO	▶ [DATABASE] 004 [SQL] SELECT indexname, indexdef FROM pg_indexes WHERE tablename=$1 AND schemaname=$2 [migration public] - 1.157462ms
2024-06-01T12:17:18.565822676Z: INFO	▶ [DATABASE] 005 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [SCHEMA_INIT] - 440.338µs
2024-06-01T12:17:18.566325964Z: INFO	▶ [DATABASE] 006 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190324205606] - 273.922µs
2024-06-01T12:17:18.56659185Z: INFO	▶ [DATABASE] 007 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190328074430] - 214.528µs
2024-06-01T12:17:18.566841574Z: INFO	▶ [DATABASE] 008 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190430111111] - 223.305µs
2024-06-01T12:17:18.567055802Z: INFO	▶ [DATABASE] 009 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190511202210] - 194.52µs
2024-06-01T12:17:18.567248369Z: INFO	▶ [DATABASE] 00a [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190514192749] - 172.959µs
2024-06-01T12:17:18.567435114Z: INFO	▶ [DATABASE] 00b [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190524205441] - 166.307µs
2024-06-01T12:17:18.567621718Z: INFO	▶ [DATABASE] 00c [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190718200716] - 168.891µs
2024-06-01T12:17:18.567852318Z: INFO	▶ [DATABASE] 00d [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190818210133] - 214.789µs
2024-06-01T12:17:18.568064952Z: INFO	▶ [DATABASE] 00e [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190920185205] - 193.938µs
2024-06-01T12:17:18.568335577Z: INFO	▶ [DATABASE] 00f [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190922205826] - 253.022µs
2024-06-01T12:17:18.568548813Z: INFO	▶ [DATABASE] 010 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191008194238] - 171.595µs
2024-06-01T12:17:18.568822303Z: INFO	▶ [DATABASE] 011 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191010131430] - 234.366µs
2024-06-01T12:17:18.569066176Z: INFO	▶ [DATABASE] 012 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191207204427] - 216.422µs
2024-06-01T12:17:18.569382198Z: INFO	▶ [DATABASE] 013 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191207220736] - 286.695µs
2024-06-01T12:17:18.56965129Z: INFO	▶ [DATABASE] 014 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200120201756] - 227.964µs
2024-06-01T12:17:18.569867531Z: INFO	▶ [DATABASE] 015 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200219183248] - 187.758µs
2024-06-01T12:17:18.570131814Z: INFO	▶ [DATABASE] 016 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200308205855] - 236.851µs
2024-06-01T12:17:18.57041914Z: INFO	▶ [DATABASE] 017 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200308210130] - 258.542µs
2024-06-01T12:17:18.570667603Z: INFO	▶ [DATABASE] 018 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200322214440] - 217.303µs
2024-06-01T12:17:18.570942526Z: INFO	▶ [DATABASE] 019 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200322214624] - 244.535µs
2024-06-01T12:17:18.57121273Z: INFO	▶ [DATABASE] 01a [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200417175201] - 229.957µs
2024-06-01T12:17:18.571436476Z: INFO	▶ [DATABASE] 01b [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200418230432] - 192.537µs
2024-06-01T12:17:18.571646926Z: INFO	▶ [DATABASE] 01c [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200418230605] - 182.287µs
2024-06-01T12:17:18.571920958Z: INFO	▶ [DATABASE] 01d [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200420215928] - 235.949µs
2024-06-01T12:17:18.572205188Z: INFO	▶ [DATABASE] 01e [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200425182634] - 253.282µs
2024-06-01T12:17:18.572431047Z: INFO	▶ [DATABASE] 01f [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200509103709] - 190.692µs
2024-06-01T12:17:18.572671656Z: INFO	▶ [DATABASE] 020 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200515172220] - 213.356µs
2024-06-01T12:17:18.572880032Z: INFO	▶ [DATABASE] 021 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200515195546] - 178.429µs
2024-06-01T12:17:18.573142161Z: INFO	▶ [DATABASE] 022 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200516123847] - 214.679µs
2024-06-01T12:17:18.573423215Z: INFO	▶ [DATABASE] 023 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200524221534] - 253.241µs
2024-06-01T12:17:18.573681116Z: INFO	▶ [DATABASE] 024 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200524224611] - 226.591µs
2024-06-01T12:17:18.573862581Z: INFO	▶ [DATABASE] 025 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200614113230] - 142.781µs
2024-06-01T12:17:18.57415625Z: INFO	▶ [DATABASE] 026 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200621214452] - 244.885µs

The problem is that I have no decent way of inspecting the pod, because somehow the new Docker container does not even have a shell or `ls` or anything?

❯ stack exec vikunja-test-api -it -- /bin/ls
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "a707a7ab0aca5c244ff50ebf671b44205f4c441fb50f0f32b53be296705bb43f": OCI runtime exec failed: exec failed: unable to start container process: exec: "/bin/ls": stat /bin/ls: no such file or directory: unknown

❯ stack shell vikunja-test-api
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "20a209c0117077c3429766fcb5806817dc8d57cdf67da80f68fec8e8ccd863ca": OCI runtime exec failed: exec failed: unable to start container process: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
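
A possible workaround for a scratch-style image like this (a sketch, assuming the cluster has ephemeral containers enabled; pod and container names are placeholders) is an ephemeral debug container:

```sh
# Attach a throwaway busybox shell that shares the process namespace of the
# Vikunja container, without changing the image itself.
kubectl debug -it -n stackspout <vikunja-api-pod> \
  --image=busybox:1.36 --target=<vikunja-container> -- sh

# Inside that shell, the Vikunja container's filesystem is reachable through
# the proc tree of its process:
ls /proc/$(pidof vikunja)/root/
```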

The import definitely did not work; we have multiple teams:

❯ stack vikunja-test psql
psql (15.1)
Type "help" for help.

vikunja=> \l
                                                 List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    | ICU Locale | Locale Provider |   Access privileges   
-----------+----------+----------+-------------+-------------+------------+-----------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | =c/postgres          +
           |          |          |             |             |            |                 | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | =c/postgres          +
           |          |          |             |             |            |                 | postgres=CTc/postgres
 vikunja   | vikunja  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | =Tc/vikunja          +
           |          |          |             |             |            |                 | vikunja=CTc/vikunja
(4 rows)

vikunja=> \dt
              List of relations
 Schema |       Name       | Type  |  Owner  
--------+------------------+-------+---------
 public | api_tokens       | table | vikunja
 public | buckets          | table | vikunja
 public | favorites        | table | vikunja
 public | files            | table | vikunja
 public | label_tasks      | table | vikunja
 public | labels           | table | vikunja
 public | link_shares      | table | vikunja
 public | migration        | table | vikunja
 public | migration_status | table | vikunja
 public | notifications    | table | vikunja
 public | project_views    | table | vikunja
 public | projects         | table | vikunja
 public | reactions        | table | vikunja
 public | saved_filters    | table | vikunja
 public | subscriptions    | table | vikunja
 public | task_assignees   | table | vikunja
 public | task_attachments | table | vikunja
 public | task_buckets     | table | vikunja
 public | task_comments    | table | vikunja
 public | task_positions   | table | vikunja
 public | task_relations   | table | vikunja
 public | task_reminders   | table | vikunja
 public | tasks            | table | vikunja
 public | team_members     | table | vikunja
 public | team_projects    | table | vikunja
 public | teams            | table | vikunja
 public | totp             | table | vikunja
 public | typesense_sync   | table | vikunja
 public | unsplash_photos  | table | vikunja
 public | user_tokens      | table | vikunja
 public | users            | table | vikunja
 public | users_projects   | table | vikunja
 public | webhooks         | table | vikunja
(33 rows)

vikunja=> SELECT * FROM teams;
 id | name | description | created_by_id | oidc_id | issuer | created | updated | is_public 
----+------+-------------+---------------+---------+--------+---------+---------+-----------
(0 rows)
Owner

> This is the output from my helper-script:

Nothing after that? Are you sure it crashed and not that it just continued importing without printing anything?

> The problem is that I have no decent way of inspecting the pod, because somehow the new Docker container does not even have a shell or `ls` or anything?

Yes, it only contains the Vikunja binary. That reduces the attack surface and makes it more maintainable because we don't need to "maintain" the OS in the image. There is nothing else in that container, so what do you need the shell for anyways?

Author
Contributor

> > This is the output from my helper-script:
>
> Nothing after that? Are you sure it crashed and not that it just continued importing without printing anything?

The script terminates at that point, with not even an indication of it having crashed:

❯ kubectl exec -n stackspout vikunja-test-api-77748766bf-mgzxl -it -- ./vikunja restore vikunja-dump_2024-05-31.zip
2024-06-02T11:37:33.302998967Z: INFO	▶ config/InitConfig 001 Using config file: /etc/vikunja/config.yml
2024-06-02T11:37:33.310554629Z: INFO	▶ [DATABASE] 002 [SQL] SELECT tablename FROM pg_tables WHERE schemaname = $1 [public] - 7.221374ms
2024-06-02T11:37:33.319537803Z: INFO	▶ [DATABASE] 003 [SQL] SELECT column_name, column_default, is_nullable, data_type, character_maximum_length, description,
    CASE WHEN p.contype = 'p' THEN true ELSE false END AS primarykey,
    CASE WHEN p.contype = 'u' THEN true ELSE false END AS uniquekey
FROM pg_attribute f
    JOIN pg_class c ON c.oid = f.attrelid JOIN pg_type t ON t.oid = f.atttypid
    LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum
    LEFT JOIN pg_description de ON f.attrelid=de.objoid AND f.attnum=de.objsubid
    LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
    LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
    LEFT JOIN pg_class AS g ON p.confrelid = g.oid
    LEFT JOIN INFORMATION_SCHEMA.COLUMNS s ON s.column_name=f.attname AND c.relname=s.table_name
WHERE n.nspname= s.table_schema AND c.relkind = 'r' AND c.relname = $1 AND s.table_schema = $2 AND f.attnum > 0 ORDER BY f.attnum; [migration public] - 8.873545ms
2024-06-02T11:37:33.320845878Z: INFO	▶ [DATABASE] 004 [SQL] SELECT indexname, indexdef FROM pg_indexes WHERE tablename=$1 AND schemaname=$2 [migration public] - 1.234184ms
2024-06-02T11:37:33.321314532Z: INFO	▶ [DATABASE] 005 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [SCHEMA_INIT] - 404.671µs
2024-06-02T11:37:33.321588705Z: INFO	▶ [DATABASE] 006 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190324205606] - 236.28µs
2024-06-02T11:37:33.321867997Z: INFO	▶ [DATABASE] 007 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190328074430] - 238.313µs
2024-06-02T11:37:33.322071656Z: INFO	▶ [DATABASE] 008 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190430111111] - 169.894µs
2024-06-02T11:37:33.322458544Z: INFO	▶ [DATABASE] 009 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190511202210] - 358.674µs
2024-06-02T11:37:33.322738378Z: INFO	▶ [DATABASE] 00a [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190514192749] - 241.42µs
2024-06-02T11:37:33.322996891Z: INFO	▶ [DATABASE] 00b [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190524205441] - 227.053µs
2024-06-02T11:37:33.323308325Z: INFO	▶ [DATABASE] 00c [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190718200716] - 269.445µs
2024-06-02T11:37:33.323550767Z: INFO	▶ [DATABASE] 00d [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190818210133] - 206.153µs
2024-06-02T11:37:33.323779043Z: INFO	▶ [DATABASE] 00e [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190920185205] - 198.069µs
2024-06-02T11:37:33.324027236Z: INFO	▶ [DATABASE] 00f [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190922205826] - 211.594µs
2024-06-02T11:37:33.324301019Z: INFO	▶ [DATABASE] 010 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191008194238] - 237.965µs
2024-06-02T11:37:33.324567506Z: INFO	▶ [DATABASE] 011 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191010131430] - 233.084µs
2024-06-02T11:37:33.324827702Z: INFO	▶ [DATABASE] 012 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191207204427] - 230.129µs
2024-06-02T11:37:33.325127916Z: INFO	▶ [DATABASE] 013 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191207220736] - 238.005µs
2024-06-02T11:37:33.32536648Z: INFO	▶ [DATABASE] 014 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200120201756] - 210.04µs
2024-06-02T11:37:33.325627018Z: INFO	▶ [DATABASE] 015 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200219183248] - 229.058µs
❯ echo $?
0

> Yes, it only contains the Vikunja binary. That reduces the attack surface and makes it more maintainable because we don't need to "maintain" the OS in the image. There is nothing else in that container, so what do you need the shell for anyways?

I want to check that the backup file has been uploaded correctly.

Owner

> The script terminates at that point, with not even an indication of it having crashed:

Does `kubectl` forward the exit code from the command it ran?

> I want to check that the backup file has been uploaded correctly.

Can't you check that in the volume?
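
For reference, `kubectl exec` does propagate the exit status of the remote command, so the `echo $?` above reflects what the binary returned; a quick sanity check (pod name is a placeholder):

```sh
# A deliberately failing invocation (restore without a filename) should make
# kubectl exec itself exit non-zero, showing that the status is forwarded.
kubectl exec -n stackspout <vikunja-api-pod> -- ./vikunja restore
echo $?   # expected: 1
```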

Author
Contributor

The file is not even read

❯ kubectl exec -n stackspout vikunja-test-api-77748766bf-mgzxl -it -- ./vikunja restore
Error: accepts 1 arg(s), received 0
Usage:
  vikunja restore [filename] [flags]

Flags:
  -h, --help   help for restore

accepts 1 arg(s), received 0
command terminated with exit code 1

❯ kubectl exec -n stackspout vikunja-test-api-77748766bf-mgzxl -it -- ./vikunja restore gibberish
2024-06-03T10:28:36.780618434Z: INFO	▶ config/InitConfig 001 Using config file: /etc/vikunja/config.yml
2024-06-03T10:28:36.78674198Z: INFO	▶ [DATABASE] 002 [SQL] SELECT tablename FROM pg_tables WHERE schemaname = $1 [public] - 5.97463ms
2024-06-03T10:28:36.792222791Z: INFO	▶ [DATABASE] 003 [SQL] SELECT column_name, column_default, is_nullable, data_type, character_maximum_length, description,
    CASE WHEN p.contype = 'p' THEN true ELSE false END AS primarykey,
    CASE WHEN p.contype = 'u' THEN true ELSE false END AS uniquekey
FROM pg_attribute f
    JOIN pg_class c ON c.oid = f.attrelid JOIN pg_type t ON t.oid = f.atttypid
    LEFT JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = f.attnum
    LEFT JOIN pg_description de ON f.attrelid=de.objoid AND f.attnum=de.objsubid
    LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
    LEFT JOIN pg_constraint p ON p.conrelid = c.oid AND f.attnum = ANY (p.conkey)
    LEFT JOIN pg_class AS g ON p.confrelid = g.oid
    LEFT JOIN INFORMATION_SCHEMA.COLUMNS s ON s.column_name=f.attname AND c.relname=s.table_name
WHERE n.nspname= s.table_schema AND c.relkind = 'r' AND c.relname = $1 AND s.table_schema = $2 AND f.attnum > 0 ORDER BY f.attnum; [migration public] - 5.411517ms
2024-06-03T10:28:36.793119295Z: INFO	▶ [DATABASE] 004 [SQL] SELECT indexname, indexdef FROM pg_indexes WHERE tablename=$1 AND schemaname=$2 [migration public] - 808.395µs
2024-06-03T10:28:36.793533391Z: INFO	▶ [DATABASE] 005 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [SCHEMA_INIT] - 292.633µs
2024-06-03T10:28:36.793786297Z: INFO	▶ [DATABASE] 006 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190324205606] - 158.635µs
2024-06-03T10:28:36.794021119Z: INFO	▶ [DATABASE] 007 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190328074430] - 150.209µs
2024-06-03T10:28:36.79427635Z: INFO	▶ [DATABASE] 008 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190430111111] - 175.247µs
2024-06-03T10:28:36.794509268Z: INFO	▶ [DATABASE] 009 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190511202210] - 148.677µs
2024-06-03T10:28:36.794771201Z: INFO	▶ [DATABASE] 00a [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190514192749] - 109.129µs
2024-06-03T10:28:36.795116134Z: INFO	▶ [DATABASE] 00b [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190524205441] - 186.258µs
2024-06-03T10:28:36.79550867Z: INFO	▶ [DATABASE] 00c [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190718200716] - 221.116µs
2024-06-03T10:28:36.795801442Z: INFO	▶ [DATABASE] 00d [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190818210133] - 226.045µs
2024-06-03T10:28:36.796100248Z: INFO	▶ [DATABASE] 00e [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190920185205] - 207.248µs
2024-06-03T10:28:36.796418689Z: INFO	▶ [DATABASE] 00f [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20190922205826] - 215.965µs
2024-06-03T10:28:36.796689731Z: INFO	▶ [DATABASE] 010 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191008194238] - 185.307µs
2024-06-03T10:28:36.796944941Z: INFO	▶ [DATABASE] 011 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191010131430] - 172.391µs
2024-06-03T10:28:36.797236542Z: INFO	▶ [DATABASE] 012 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191207204427] - 211.496µs
2024-06-03T10:28:36.797502394Z: INFO	▶ [DATABASE] 013 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20191207220736] - 178.082µs
2024-06-03T10:28:36.797750231Z: INFO	▶ [DATABASE] 014 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200120201756] - 166.891µs
2024-06-03T10:28:36.798106866Z: INFO	▶ [DATABASE] 015 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200219183248] - 197.96µs
2024-06-03T10:28:36.798397706Z: INFO	▶ [DATABASE] 016 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200308205855] - 197.74µs
2024-06-03T10:28:36.798610154Z: INFO	▶ [DATABASE] 017 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200308210130] - 186.208µs
2024-06-03T10:28:36.798817943Z: INFO	▶ [DATABASE] 018 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200322214440] - 184.725µs
2024-06-03T10:28:36.799027317Z: INFO	▶ [DATABASE] 019 [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200322214624] - 168.224µs
2024-06-03T10:28:36.799220749Z: INFO	▶ [DATABASE] 01a [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200417175201] - 172.342µs
2024-06-03T10:28:36.799404161Z: INFO	▶ [DATABASE] 01b [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200418230432] - 162.031µs
2024-06-03T10:28:36.799573216Z: INFO	▶ [DATABASE] 01c [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200418230605] - 149.677µs
2024-06-03T10:28:36.79974127Z: INFO	▶ [DATABASE] 01d [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200420215928] - 148.957µs
2024-06-03T10:28:36.799910284Z: INFO	▶ [DATABASE] 01e [SQL] SELECT count(*) FROM "migration" WHERE "id" IN ($1) [20200425182634] - 148.626µs
Owner

Can you update this to 0.24.1?

Author
Contributor

Working on it finally again ;)

Author
Contributor

Update pushed; I don't know how to test it locally, so I would suggest we publish and then I test it
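
For the record, a change like this can usually be exercised without publishing the chart (a sketch, assuming a local clone and access to a test cluster):

```sh
# Pull subchart dependencies, lint, and render the templates locally.
helm dependency update .
helm lint .
helm template vikunja-test . > /tmp/vikunja-rendered.yaml

# Or simulate an install against the test cluster.
helm install vikunja-test . -n vikunja-test --create-namespace --dry-run --debug
```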

xeruf changed title from WIP: Remove frontend container for new merged docker to Remove frontend container for new merged docker 2024-12-11 17:23:25 +00:00
Owner

The last commit is still 7 months old, can you push again?

Author
Contributor

After pulling latest unstable it says `Vikunja version v0.24.1-566-b3040b8466` even though unstable is 0.24.5, huh? https://hub.docker.com/r/vikunja/vikunja/tags

And then I get `2024-12-11T21:42:51Z: CRITICAL ▶ 0f2 It looks like your openid configuration is in the wrong format. Please check the docs for the correct format.`

Format is the same as before, did anything change?
![image.png](/attachments/d6777173-6e48-489b-b7ce-529db060a64d)

Author
Contributor

https://kolaente.dev/xeruf/helm-chart/src/branch/merged-docker
I pushed, but something interesting is going on: ![image.png](/attachments/306fcfc4-0e13-4ffb-98ce-6958e04790c1)

Author
Contributor

and I cannot upload the backup to test the restore properly either ^^

❯ kubectl cp vikunja-2024.zip -n stackspout vikunja-test-api-6fb5c5f7b5-bh9r9:vikunja-2024.zip
error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "1e193b3a673cfdded48ef973014d5970502da81735a345fc72dbc244a3ecbc20": OCI runtime exec failed: exec failed: unable to start container process: exec: "tar": executable file not found in $PATH: unknown
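
Since `kubectl cp` needs `tar` inside the target container, one alternative (a sketch with placeholder names, assuming the dump should land on the pod's data volume) is a short-lived helper pod that mounts the same PVC:

```sh
# A throwaway pod that mounts the same PVC as Vikunja (the claim name is a placeholder).
kubectl apply -n stackspout -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: copy-helper
spec:
  restartPolicy: Never
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sleep", "3600"]
      volumeMounts:
        - name: files
          mountPath: /files
  volumes:
    - name: files
      persistentVolumeClaim:
        claimName: <vikunja-files-pvc>
EOF

# busybox ships tar, so kubectl cp works against the helper pod.
kubectl wait -n stackspout --for=condition=Ready pod/copy-helper
kubectl cp vikunja-2024.zip stackspout/copy-helper:/files/vikunja-2024.zip
kubectl delete pod -n stackspout copy-helper

# The file then appears under whatever path the Vikunja pod mounts that volume at.
```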
Owner

> I pushed, but something interesting is going on: image.png

I've re-run the action in the admin dashboard, can you push again?

Owner

> and I cannot upload the backup to test the restore properly either ^^

What does the command do? The Vikunja container only contains the Vikunja binary and nothing else.

Author
Contributor

managed to copy it to the folder and initiate the restore, but now I get:

2024-12-12T11:02:51Z: CRITICAL ▶ 070 export was created with version 0.20.4 but this is 0.24.1-566-b3040b8466 - please make sure you are running the same Vikunja version before restoring

How is that supposed to work?

Author
Contributor

still get the message on the repo: `Git hooks of this repository seem to be broken. Please follow the documentation to fix them, then push some commits to refresh the status.`

Owner

> How is that supposed to work?

As the message says, you can't import a dump created on an older version of Vikunja.

Author
Contributor

Alright, downgraded to 0.20.4, restored and then upgraded - and ended up with `2024-12-12T16:40:38Z: CRITICAL ▶ 065 Migration failed: migration 20221228112131 failed: pq: duplicate key value violates unique constraint "lists_pkey"` again

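In case it helps, one pattern that avoids such migration conflicts (a sketch; the values key, pod name and dump path are placeholders, and it assumes the target database starts out empty) is to restore with the matching image tag first and only then upgrade:

```sh
# 1. Deploy the chart pinned to the version the dump was created with.
helm upgrade --install vikunja-test . -n stackspout --set vikunja.image.tag=0.20.4

# 2. Restore the dump with that same version.
kubectl exec -n stackspout <vikunja-api-pod> -- ./vikunja restore /path/to/vikunja-dump.zip

# 3. Switch back to the current tag; the newer version runs its migrations on startup.
helm upgrade vikunja-test . -n stackspout --set vikunja.image.tag=0.24.6
```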
Author
Contributor

what about the openid config?

Author
Contributor

dumping and importing our latest data with version 0.22.1 also produces an error: vikunja/vikunja#2934

xeruf added 1 commit 2024-12-12 18:08:34 +00:00
Update dependencies in Chart.lock
All checks were successful
continuous-integration/drone/pr Build is passing
c05383b628
Author
Contributor

> Update pushed; I don't know how to test it locally, so I would suggest we publish and then I test it

nevermind that one, remembered how to test it now ^^

xeruf added 1 commit 2024-12-12 18:38:17 +00:00
Disable Bitnami Postgres networkPolicy by default
All checks were successful
continuous-integration/drone/pr Build is passing
417de30db1
xeruf added 2 commits 2024-12-12 20:52:19 +00:00
https://github.com/bjw-s/helm-charts/pull/284
with help of 3981facca6
Add bjw controller definition
All checks were successful
continuous-integration/drone/pr Build is passing
4abc90f129
xeruf force-pushed merged-docker from 4abc90f129 to 95fb307ff3 2024-12-13 10:43:14 +00:00
xeruf force-pushed merged-docker from 95fb307ff3 to 1a1496bcaf 2024-12-13 11:14:10 +00:00
xeruf changed title from Remove frontend container for new merged docker to WIP: Remove frontend container for new merged docker 2024-12-13 11:19:03 +00:00
xeruf force-pushed merged-docker from 1a1496bcaf to 8e130ffc68 2024-12-13 11:49:28 +00:00
xeruf added 2 commits 2024-12-13 17:55:15 +00:00
Do not override default tag
All checks were successful
continuous-integration/drone/pr Build is passing
72114c4222
Should default to appVersion then
Author
Contributor

phew, seems I finally have a working version

Author
Contributor

openid works now, the only change I did was to add the `scope` field but removing it again also does not seem to make a difference ^^

Author
Contributor

my fork is also in order again, thanks :)

Owner

> what about the openid config?

The config format has been changed recently, but that's not yet released. Are you using a stable release or an unstable build?

> my fork is also in order again, thanks :)

Is this ready to review now? (still marked WIP)

konrad requested changes 2024-12-13 20:26:28 +00:00
Chart.yaml Outdated
@@ -38,4 +36,1 @@
url: https://vikunja.io
- name: Yurii Vlasov
email: yuriy@vlasov.pro
url: https://vlasov.pro
Owner

Why did you remove Yurii?

Author
Contributor

He does not seem active anymore. I understand this not as credits but as a list of active maintainers, so instead of adding myself and the others I thought keeping it minimal is best. Other charts I saw handle it similarly.

Owner

That makes sense.

konrad marked this conversation as resolved
@@ -50,4 +50,1 @@
# The configuration for Vikunja's api.
# https://vikunja.io/docs/config-options/
VIKUNJA_SERVICE_FRONTENDURL: "http://{{ index .Values.frontend.ingress.main.hosts 0 "host" }}{{ index .Values.frontend.ingress.main.hosts 0 "path" }}"
{{ end }}
Owner

Isn't this `{{ end }}` one too many now?

Author
Contributor

nope, it ends the hardcoded values definition

Owner

Ahhh, I was looking for an opening `{{ if }}`

konrad marked this conversation as resolved
values.yaml Outdated
@@ -85,4 +62,0 @@
type: ClusterIP
# https://github.com/bjw-s/helm-charts/blob/a081de53024d8328d1ae9ff7e4f6bc500b0f3a29/charts/library/common/values.yaml#L393-L436
ingress:
Owner

Don't we still need an ingress?

Author
Contributor

still present, this is just the frontend

Owner

gotcha.

konrad marked this conversation as resolved
values.yaml Outdated
@@ -18,2 +13,2 @@
tag: 0.21.0
pullPolicy: IfNotPresent
repository: vikunja/vikunja
#tag: "stable"
Owner

This tag does not exist.

Author
Contributor

my bad, correction coming up

xeruf marked this conversation as resolved
xeruf added 1 commit 2024-12-13 21:43:05 +00:00
Reset chart version to 1.0.0
All checks were successful
continuous-integration/drone/pr Build is passing
9569a6a595
Author
Contributor

> > what about the openid config?
>
> The config format has been changed recently, but that's not yet released. Are you using a stable release or an unstable build?

ah, I was intermittently using unstable, that must have been it

> > my fork is also in order again, thanks :)
>
> Is this ready to review now? (still marked WIP)

sort of - I am inclined to merge it as is and then make the more major updates later, such as when updating the bjw subchart, as I might need some help there and there is no urgency to it: https://github.com/bjw-s/helm-charts/issues/301#issuecomment-2541109674

konrad reviewed 2024-12-14 10:55:02 +00:00
@@ -12,3 +10,2 @@
category: TaskTracker
version: 0.4.3
appVersion: 0.21.0
version: 1.0.0
Owner

Why 1.0.0 instead of 0.5.0?

Author
Contributor

Because this update is a breaking change, and I don't find dragging out a 1.0 productive anyway, also for Vikunja itself; it just makes the first number meaningless (see Java, where they dropped it at some point).
For example, for Vikunja I believe the upgrade from 0.22 to 0.23 should have been a major bump, since it changes the structure majorly.
For me, 0.x is for software that is not properly usable, which Vikunja and this helm chart are long past.

Owner

I see your point, but you also said you want to merge this as is and then fix it later. That's not a definition of the very first 1.0 release 🙂

xeruf marked this conversation as resolved
konrad reviewed 2024-12-14 10:57:19 +00:00
values.yaml Outdated
@ -11,4 +8,4 @@
# VIKUNJA COMPONENTS #
######################
# You can find the default values that this `values.yaml` overrides, in the comment at the top of this file.
api:
Owner

This might need renaming as it is not only the api anymore?
Author
Contributor

I was deliberating whether to put that into v2.0, but maybe now is better, yeah
xeruf marked this conversation as resolved
Owner

> sort of - I am inclined to merge it as is and then make more major updates later, such as updating the bjw-s subchart, as I might need some help there and there is no urgency to it: https://github.com/bjw-s/helm-charts/issues/301#issuecomment-2541109674

I'm fine to merge this as long as it works.
xeruf added 1 commit 2024-12-25 20:33:38 +00:00
Rename api key to vikunja
All checks were successful
continuous-integration/drone/pr Build is passing
4eb6bdb02e
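For anyone tracking the rename: existing value overrides simply move from the old `api` key to the new `vikunja` key. A minimal before/after sketch (the `ingress` sub-keys below are only illustrative; whatever overrides you have carry over unchanged):

```
# before this PR
api:
  ingress:
    main:
      enabled: true

# after this PR, the same overrides live under `vikunja`
vikunja:
  ingress:
    main:
      enabled: true
```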
Owner

What's the status here?
xeruf added 2 commits 2025-01-14 00:40:49 +00:00
Add upgrade instructions to README
All checks were successful
continuous-integration/drone/pr Build is passing
ccf8a91a9f
xeruf added 1 commit 2025-01-14 01:35:06 +00:00
Update Vikunja version to 0.24.6
All checks were successful
continuous-integration/drone/pr Build is passing
3eef5c4ff1
xeruf changed title from WIP: Remove frontend container for new merged docker to Remove frontend container for new merged docker 2025-01-14 01:35:49 +00:00
Author
Contributor

I am now testing this on two instances, one old and one new, but have yet to bring our old production instance over.
I've polished and documented things, so I think this is pretty much ready.
konrad reviewed 2025-01-14 08:23:27 +00:00
@ -5,3 +5,3 @@
- name: postgresql
repository: https://charts.bitnami.com/bitnami
version: 12.1.9
version: 16.3.0
Owner

Is this postgres version 16? Because if we're upgrading this, why not upgrade directly to postgres 17?
Author
Contributor

It is postgres 17, this is the chart version ;)
Owner

Gotcha!

Looks like 16.4.5 is out btw
xeruf marked this conversation as resolved
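To spell that out: the number in `Chart.yaml` is the Bitnami *chart* version, and the PostgreSQL major version is whatever that chart release ships as its app version. A sketch of the dependency entry under discussion, using the values from the diff above:

```
# Chart.yaml
dependencies:
  - name: postgresql
    repository: https://charts.bitnami.com/bitnami
    version: 16.3.0  # Bitnami chart version; this release bundles PostgreSQL 17
```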
@ -19,1 +13,3 @@
pullPolicy: IfNotPresent
repository: vikunja/vikunja
#tag: "latest"
#pullPolicy: Always
Owner

Why comment this?
Author
Contributor

Because a specific chart version should always deploy a specific app version by default, which comes from the `Chart.yaml`. The commented sections are there as a reference for users.
Owner

If it's not used, it should be removed. The reference and possible options should be documented elsewhere.
Owner

Also, right now it's not specified what it pulls; IMHO this should include the tag (but maybe not `latest`)
Author
Contributor

It is specified in `Chart.yaml`; I would not want to duplicate it.

There are other options which are also commented out; having the `values.yaml` double as a reference is common practice in Helm charts.
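For reference, the pattern being argued for here is the common Helm convention of falling back to the chart's `appVersion` when no tag is set. A rough sketch of that convention (this chart resolves images through the bjw-s common library, so the actual template and key paths may differ):

```
# values.yaml
vikunja:
  image:
    repository: vikunja/vikunja
    pullPolicy: IfNotPresent
    # tag: "latest"        # uncomment only to override the chart default

# in a template (illustrative only):
# image: "{{ .Values.vikunja.image.repository }}:{{ .Values.vikunja.image.tag | default .Chart.AppVersion }}"
```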
First-time contributor

Just to add a datapoint, I was able to get this working in my environment with very little effort, nice work! I did find one minor annoyance in that the Typesense URL needs to be prefixed with `http://`, e.g.

```
VIKUNJA_TYPESENSE_URL: "http://vikunja-typesense:8108"
```

Otherwise it worked well with no modification, just some configuration of the values.
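In other words, an override along these lines should do the trick, assuming extra environment variables are passed through under the chart's env map (the key path here is an assumption, not the chart's documented API):

```
vikunja:
  env:
    # the scheme prefix is the important part; a bare host:port was not enough
    VIKUNJA_TYPESENSE_URL: "http://vikunja-typesense:8108"
```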
sailorbob134280 reviewed 2025-01-19 21:58:59 +00:00
@ -0,0 +41,4 @@
{{ if .Values.redis.enabled }}
VIKUNJA_REDIS_ENABLED: "true"
{{ end }}
{{ if .Values.typesense.enabled }}
First-time contributor

Would be nice to add the following to this file to prevent permissions problems with uploads:

```
podSecurityContext:
  fsGroup: 1000
```
Author
Contributor

the right location to add it is probably `templates/typesense.yaml` - feel free to contribute an appropriate adjustment, as I am not using Typesense so I do not want to mess with it
First-time contributor

Did you perchance mean for this to be on my other comment? This one is needed for the Vikunja uploads (e.g. backgrounds, attachments, etc) because of the container user/group ID.
Author
Contributor

ah, you commented on the Typesense values, so I thought that was what it was about - if you know where to add it, please comment appropriately and I won't interfere ;) but what is the issue without that setting exactly?
First-time contributor

Ah, yeah there's an open bug with Gitea that makes it impossible to comment on a non-diffed line. As for why this is necessary, none of the attachments/file upload functionality will work without this because the persistent volumes that get created will not be owned by the correct group. It can just go anywhere in this file.
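For anyone hitting this before the fix landed, the workaround described here boils down to a values override roughly like the following (a sketch; the exact key path depends on the bjw-s common-library version the chart bundles):

```
vikunja:
  podSecurityContext:
    # Vikunja runs with group 1000 inside the container, so mounted upload
    # volumes must be group-owned by 1000 to be writable
    fsGroup: 1000
```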
First-time contributor

Just passing by and I'm wondering about values.yaml line 31 in 3eef5c4ff1 (https://kolaente.dev/vikunja/helm-chart/src/commit/3eef5c4ff1b9b1e52ac7911167264d846a44415c/values.yaml#L31):

nginx.ingress.kubernetes.io/proxy-body-size: "0"

If someone doesn't use nginx but something else like traefik, this line is pretty much useless.
xeruf added 1 commit 2025-01-22 11:41:43 +00:00
Update Postgres version
All checks were successful
continuous-integration/drone/pr Build is passing
161a5d485d
Author
Contributor

> Just passing by and I'm wondering about values.yaml line 31 in 3eef5c4ff1 (https://kolaente.dev/vikunja/helm-chart/src/commit/3eef5c4ff1b9b1e52ac7911167264d846a44415c/values.yaml#L31)
>
> If someone doesn't use nginx but something else like traefik, this line is pretty much useless.

feel free to complement it for other tools, but there is no harm in including it, right?
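For context, the annotation in question sits among the default ingress annotations, roughly like this (the key layout is assumed from the bjw-s ingress values; controllers other than ingress-nginx simply ignore annotations in the `nginx.ingress.kubernetes.io` namespace):

```
vikunja:
  ingress:
    main:
      annotations:
        # lifts ingress-nginx's default 1m request-body limit so large
        # attachments can be uploaded; harmless under other controllers
        nginx.ingress.kubernetes.io/proxy-body-size: "0"
```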
xeruf requested review from konrad 2025-01-22 13:13:18 +00:00
Author
Contributor

I consider this ready to merge.
The new comments here could become their own contributions or issues; they are not really relevant to this PR in particular.
First-time contributor

> I consider this ready to merge.
> The new comments here could become their own contributions or issues; they are not really relevant to this PR in particular.

Well, the `fsGroup` issue means that file upload functionality is broken, so I'd recommend fixing that at least.
xeruf added 1 commit 2025-01-29 12:32:20 +00:00
Add podSecurityContext fsgroup
All checks were successful
continuous-integration/drone/pr Build is passing
73a891e264
konrad merged commit 2bec3682f4 into main 2025-01-29 12:35:33 +00:00
konrad deleted branch merged-docker 2025-01-29 12:35:33 +00:00
Author
Contributor

@sailorbob134280 we did not have any issues with uploads on our instance, but I added it anyway
First-time contributor

Is there any chance you were reusing existing PVs (e.g. from a previous deployment)? It's odd to me that it would work, because AFAIK it shouldn't. But since the container runs with group 1000, this shouldn't hurt anything.
Author
Contributor

ah, I use predefined PVs anyway, not the ones from the chart, for backup purposes
Reference: vikunja/helm-chart#29