Arch Forum 2023-08-24
Participants: Andy, JD, Liangxiong, Michel, Thani, Victor, Zak
Agenda
- Braze / marketing followup MIB-8499
- Risk decision log
Notes
Last week we discussed how to get historical FX data into Braze. It was clear that a discussion with Data was required, so this time Michel joined the meeting.
Braze / marketing followup
A quick recap of the issue: historical FX rates need to be available in Braze for marketing purposes. However, this data only exists in the data warehouse (Snowflake). Data is sent to Braze through an AWS Lambda function, managed by the Data team, which listens on the Kinesis event stream. Since it only listens on the event stream, it can only send data from published events, not historical data in Snowflake. As an additional complication, the assumption was that Braze has to call an API to get the data, rather than a Lambda pushing data to Braze.
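For context, the existing flow can be sketched roughly as below: a Lambda handler decodes Kinesis records and maps them to Braze-style user-attribute payloads. The function name, field names, and payload shape are assumptions for illustration, not the actual implementation.

```python
import base64
import json

def kinesis_event_to_braze_attributes(event):
    """Decode Kinesis records and map them to Braze user-attribute payloads.

    Only data present in the published event is available here -- historical
    rows that exist only in Snowflake never reach this code path, which is
    exactly the limitation discussed above.
    """
    attributes = []
    for record in event.get("Records", []):
        # Kinesis delivers the record payload base64-encoded.
        raw = base64.b64decode(record["kinesis"]["data"])
        payload = json.loads(raw)
        attributes.append({
            "external_id": payload["user_id"],   # assumed event field
            "fx_rate": payload.get("fx_rate"),   # assumed event field
        })
    return attributes
```

In the real Lambda these payloads would then be posted to Braze; that HTTP call is omitted here.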
Victor noted that Roman's handover notes mention that the Lambda could be moved to the Braze area instead. However, that would not solve this problem, and an advantage of the current Lambda setup is that no backend help is needed to change what data is sent to Braze.
JD brought up that the Data team will soon (Q4) need to support a similar data flow: another third-party service that also needs data from Snowflake, not just event data. Given this, it seemed very reasonable to handle the FX rates the same way.
Either a way to push the data in via a Braze API would need to be found, or we would need an API that Braze could call. Using S3 for this purpose looked like a simple and efficient option: an Airflow task can easily be scheduled to extract the data and store it in S3. The Data team already uses S3 for many things, and one additional S3 bucket would not add much overhead or complexity.
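The proposed export step could look something like the sketch below: a function, meant to be scheduled from Airflow, that takes rows extracted from Snowflake and writes them to S3 as newline-delimited JSON. The Snowflake query and the S3 upload are stubbed out behind an injected `upload` callable; the column names and object key are assumptions.

```python
import json

def rows_to_ndjson(rows):
    """Serialize (date, currency, rate) rows to an NDJSON body for upload."""
    return "\n".join(
        json.dumps({"date": d, "currency": c, "rate": r})
        for d, c, r in rows
    )

def export_fx_rates(rows, upload):
    """Would run on a schedule as an Airflow task.

    `rows` is the result of a Snowflake query (not shown); `upload` wraps
    the S3 put-object call so this sketch stays self-contained.
    """
    body = rows_to_ndjson(rows)
    upload("fx-rates/latest.json", body)  # hypothetical object key
    return len(rows)
```

With this shape, Braze-side consumers (or a thin API in front of the bucket) only ever read from S3, and the Data team can change the extract query without backend involvement.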
Given this, we decided to use a Braze API if possible; if that turns out not to be possible, the S3 option should be used.
(No changes in Backend)
Risk decision log
Zak brought up the question of how to easily access previous risk decisions for a user. Use cases for this:
- Easily see risk decisions in Hydra
- For regression tests, there's a case where the test needs to validate which rules were hit and what values were used.
- Longer term, a user feed with all kinds of logs could be useful in Hydra, in the app, and in other places.
We concluded that a user feed is out of scope currently. The Hydra use case could be solved with a link to a Look in Looker, as long as the support staff are given the correct access (which is possible). This is actually already done, but the link lives in Intercom instead of Hydra.
What is left is the regression tests. Neither Risk nor the other areas, such as PSP or Remittance, consider it important to verify the specifics of risk rules in the regression tests; there are many more important areas to test. Without this requirement, the regression tests do not need a risk decision log. This was also synced with QA.
All in all, there is currently no need to keep a risk decision log in addition to what is already stored in S3 and Snowflake.