Arch Forum 2024-02-22
Participants: Backend developers, Magnus, Andy and Victor
Agenda
- .NET 8 and AutoMapper update
- DB stuff
Notes
.NET 8 and AutoMapper update: Victor started by setting expectations. The purpose of this update was to make everyone aware of the .NET 8 and AutoMapper problems and to gather opinions.
Alex then gave an update on the .NET 8 upgrade and the issues with AutoMapper. It turns out that the version of AutoMapper we use does not support .NET 8 very well; the recommendation is instead to upgrade AutoMapper to its latest version. However, this has not been easy: Alex spent a couple of days on it but did not manage to get the new AutoMapper version to work with our mappings. Alex has also looked into an alternative mapping library called Mapster, which is similar to AutoMapper in its API and could be a nice option (a short sketch of what a Mapster mapping looks like is included after these notes).
- We noted that there is no big urgency, but we do need to resolve this by November 2024, when .NET 6 reaches its end of life.
- We also noted that we have discussed mapping previously, when Shakib gave an introduction to Mapperly. At that time we concluded that switching libraries would not be worth the effort on its own. However, given the difficulties upgrading AutoMapper, the same effort could potentially be spent switching library instead.
- As we saw in the previous mapping discussion, performance is most likely not a concern, since our bottlenecks lie elsewhere.
No decisions were made, but this will come back to a future Arch Forum meeting for a decision.
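For reference, below is a minimal sketch of what a Mapster mapping could look like. The Customer/CustomerDto types are hypothetical examples, not our actual models; the point is that the configuration and mapping calls are close to what we already do with AutoMapper profiles.

```csharp
// Minimal Mapster sketch (hypothetical types, not our real models).
using System;
using Mapster;

public class Customer
{
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";
}

public class CustomerDto
{
    public string FullName { get; set; } = "";
}

public static class Program
{
    public static void Main()
    {
        // One-time global configuration, roughly the counterpart of an AutoMapper profile.
        TypeAdapterConfig<Customer, CustomerDto>
            .NewConfig()
            .Map(dest => dest.FullName, src => $"{src.FirstName} {src.LastName}");

        var customer = new Customer { FirstName = "Ada", LastName = "Lovelace" };

        // Adapt<T>() plays the role of IMapper.Map<T>() in AutoMapper.
        CustomerDto dto = customer.Adapt<CustomerDto>();
        Console.WriteLine(dto.FullName); // "Ada Lovelace"
    }
}
```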
DB stuff
Next, Victor gave a presentation on some important considerations around the database. The presentation is available here. The content is also written down on the wiki at https://dev.azure.com/MAJORITY/Documentation/_wiki/wikis/Main/322/Database (the wiki version will be more comprehensive and up to date).
Saman had a couple of comments regarding big tables and tables with big columns:
- It is good to think about partitioning etc. as soon as we see tables growing large, rather than waiting until there is a real problem. Partitioning and other maintenance operations become harder the bigger the table is, so we should be proactive before a table reaches hundreds of GB.
- It can be advantageous to put big tables and/or tables with huge columns (e.g. columns storing full HTTP responses) on separate filegroups; see the sketch below.
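As a rough illustration of the filegroup and partitioning point, here is a hypothetical T-SQL sketch. The database, filegroup, file path, and table names are made up for illustration and do not reflect our actual schema.

```sql
-- Hypothetical sketch: dedicated filegroup plus date-range partitioning for a large log table.

-- 1. Add a dedicated filegroup and data file for the big table.
ALTER DATABASE MajorityDb ADD FILEGROUP ResponseArchive;
ALTER DATABASE MajorityDb ADD FILE
(
    NAME = 'ResponseArchive1',
    FILENAME = 'D:\Data\ResponseArchive1.ndf'
) TO FILEGROUP ResponseArchive;

-- 2. Partition by month so old data can be switched out or archived cheaply.
CREATE PARTITION FUNCTION PF_ByMonth (datetime2)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

CREATE PARTITION SCHEME PS_ByMonth
    AS PARTITION PF_ByMonth ALL TO (ResponseArchive);

-- 3. Create (or rebuild) the table on the partition scheme.
CREATE TABLE dbo.HttpLog
(
    Id bigint IDENTITY NOT NULL,
    CreatedAt datetime2 NOT NULL,
    ResponseBody nvarchar(max) NULL,
    CONSTRAINT PK_HttpLog PRIMARY KEY CLUSTERED (Id, CreatedAt)
) ON PS_ByMonth (CreatedAt);
```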