todo BigSpender assumption noted

main
bel 2023-10-15 11:58:18 -06:00
parent bf776cef07
commit 98b1ddcbaf
1 changed file with 40 additions and 0 deletions


@@ -37,6 +37,8 @@ todo:
each category's size. Makes code look weird, though. Hm.
- analyzer.go:Analyzer:Largest should be an anz.transactions.Max() to match Sum
- analyzer.go:Analyzer:Count should be an anz.transactions.Len() to match Sum
- analyzer.go:Analyzer:BigSpenderReport I ASSUME is spend-specific--the previous
implementation of largestTransaction includes refunds, which should be ignored
scheduled: []
done:
- todo: hello world
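A minimal sketch of that spend-only assumption, using a hypothetical Transaction with Vendor/Amount fields where refunds carry negative amounts (the real struct lives in transaction.go and may differ):

```go
package main

import "fmt"

// Hypothetical Transaction shape; the real one lives in transaction.go.
type Transaction struct {
	Vendor string
	Amount float64 // assumption: refunds show up as negative amounts
}

// biggestSpend skips refunds (Amount <= 0), unlike the previous
// largestTransaction, which considered them. The bool reports whether
// any positive spend was found.
func biggestSpend(txs []Transaction) (Transaction, bool) {
	var best Transaction
	found := false
	for _, t := range txs {
		if t.Amount <= 0 {
			continue // ignore refunds and zero-value entries
		}
		if !found || t.Amount > best.Amount {
			best, found = t, true
		}
	}
	return best, found
}

func main() {
	txs := []Transaction{{"cafe", 4.50}, {"store", -12.00}, {"rent", 900}}
	best, _ := biggestSpend(txs)
	fmt.Println(best.Vendor) // rent
}
```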
@@ -713,3 +715,41 @@ done:
expands to way-too-big slices by end time. Could do 2 passes (still O(n)) to pre-compute
each category's size. Makes code look weird, though. Hm.
ts: Sun Oct 15 11:55:59 MDT 2023
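The 2-pass pre-compute idea above, as a sketch against a hypothetical Transaction with a Category field: the first pass only counts, so the second can allocate exactly-sized slices instead of letting append over-grow them.

```go
package main

import "fmt"

// Hypothetical shape; the real Transaction lives in transaction.go.
type Transaction struct {
	Category string
	Amount   float64
}

// byCategory groups transactions in two O(n) passes: pass 1 counts each
// category, pass 2 fills slices pre-allocated to exactly that capacity.
func byCategory(txs []Transaction) map[string][]Transaction {
	counts := make(map[string]int)
	for _, t := range txs {
		counts[t.Category]++
	}
	out := make(map[string][]Transaction, len(counts))
	for cat, n := range counts {
		out[cat] = make([]Transaction, 0, n)
	}
	for _, t := range txs {
		out[t.Category] = append(out[t.Category], t)
	}
	return out
}

func main() {
	m := byCategory([]Transaction{{"food", 4}, {"rent", 900}, {"food", 7}})
	fmt.Println(len(m["food"]), cap(m["food"])) // 2 2
}
```

It does make the code look busier, as the note says, but the capacities never over-shoot.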
- todo: go test
subtasks:
- TestAnalyzer_BigSpendersReport
- TestAnalyzer_TransactionsFromURLs
- TestAnalyzer_TransactionsFromURLsConcurrent
- amount.go:Rounded probably does NOT handle float precision well... it is float64
tho...
- my `go mod tidy` actually cleared `go.mod` file, probably weird localhost backwards
compatible stuff
- transaction.go:Transaction:String not clear if FormatUSD or amount currency should
not be changed, or even what currency Amount is
- transaction.go:Transaction:Sum again doesn't care about Amount currency or vendor/vendee
drift
- analyzer.go:Analyzer:LargestTransaction doesn't specify how to break ties; stable
or latest?
- analyzer.go:Analyzer:Add should dedupe transactions added, but transactions.go:FromFile
will load duplicate transactions from json file so hmmmm
- todo: analyzer.go:Analyzer:Add dedupes each transaction, which is O(n**2)
details: |
* BUT there's no indicator whether order of the array matters, so it's unsafe for me to sort/heapify that stuff
* OR I can store a second copy of all entries in a map, but that risks drift syncing the two
* SO I could create a UniqueTransactions struct {
transactions []Transaction
dedupes map[Transaction]struct{}
}
but that's just doubling RAM usage in a thing that sounds like it could scale infinitely over time
* SO I could do a map[hash(Transaction)][]*Transaction and compare just a subset. Because it's in RAM and computed live, the hash cardinality could be changed on any release
<------------------ if I have time, do this
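The hash-bucket idea above, sketched with FNV over a subset of fields (field names and the key format are assumptions). One deviation from the note: bucket entries are indices into the ordered slice rather than *Transaction, because append can reallocate the backing array and strand pointers taken earlier.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// Hypothetical shape; the real Transaction lives in transaction.go.
type Transaction struct {
	Vendor string
	Amount float64
	Date   string
}

// key hashes a subset of fields. Because the map lives only in RAM and
// is rebuilt live, the hashed fields (the "cardinality") can change on
// any release without a migration.
func key(t Transaction) uint64 {
	h := fnv.New64a()
	fmt.Fprintf(h, "%s|%.2f|%s", t.Vendor, t.Amount, t.Date)
	return h.Sum64()
}

// UniqueTransactions keeps insertion order in transactions; buckets maps
// a hash to the indices of candidate duplicates, so Add compares full
// structs only within one (normally tiny) bucket instead of all n.
type UniqueTransactions struct {
	transactions []Transaction
	buckets      map[uint64][]int
}

func NewUniqueTransactions() *UniqueTransactions {
	return &UniqueTransactions{buckets: map[uint64][]int{}}
}

// Add reports whether t was new. Amortized O(1) versus the O(n) scan
// per insert that made the naive dedupe O(n**2) overall.
func (u *UniqueTransactions) Add(t Transaction) bool {
	k := key(t)
	for _, i := range u.buckets[k] {
		if u.transactions[i] == t {
			return false // duplicate
		}
	}
	u.buckets[k] = append(u.buckets[k], len(u.transactions))
	u.transactions = append(u.transactions, t)
	return true
}

func main() {
	u := NewUniqueTransactions()
	fmt.Println(u.Add(Transaction{"cafe", 4.50, "2023-10-15"})) // true
	fmt.Println(u.Add(Transaction{"cafe", 4.50, "2023-10-15"})) // false
}
```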
- analyzer.go:Analyzer:Add dedupes but what is a duplicate transaction? Transactions
can be pending and later reappear with an updated date, OR be pre-charges on
credit cards that later disappear entirely
- analyzer.go:Analyzer:Add is not concurrency-safe
- analyzer.go:Analyzer:ByCategory probably allocates big slices out the gate and
expands to way-too-big slices by end time. Could do 2 passes (still O(n)) to pre-compute
each category's size. Makes code look weird, though. Hm.
- analyzer.go:Analyzer:Largest should be an anz.transactions.Max() to match Sum
- analyzer.go:Analyzer:Count should be an anz.transactions.Len() to match Sum
ts: Sun Oct 15 11:57:55 MDT 2023