Compare commits

...

128 Commits

Author SHA1 Message Date
Bel LaPointe
31fe916000 DO AS I WILL 2024-04-19 15:15:08 -06:00
Bel LaPointe
d31042e971 prompt up again 2024-04-19 15:07:12 -06:00
Bel LaPointe
2e7d58fd13 prompt up 2024-04-19 14:56:21 -06:00
Bel LaPointe
93672e67a6 gr 2024-04-19 14:33:12 -06:00
Bel LaPointe
1b06f727fd OBEY 2024-04-19 14:25:01 -06:00
Bel LaPointe
3ae62390cf more prompt 2024-04-19 14:18:18 -06:00
Bel LaPointe
5cfb89bc64 up queue timeout to 10min for ai reasons 2024-04-19 14:14:07 -06:00
Bel LaPointe
b554be6282 update prompt 2024-04-19 14:10:48 -06:00
Bel LaPointe
5785ea37ae better recap prompts by doing an intro with the OP 2024-04-19 14:01:24 -06:00
Bel LaPointe
f27d416a5a get recap prompt from $RECAP_PROMPT 2024-04-19 13:32:12 -06:00
Bel LaPointe
81793876f8 readme 2024-04-19 13:24:48 -06:00
Bel LaPointe
6d81164161 impl PersistenceToRecap pipeline where each resolved event gets an ai recap of each of its threads that have messages persisted under the thread as a Recap column 2024-04-19 13:19:14 -06:00
Bel LaPointe
20256bd6b4 try and raise ollama timeout 2024-04-19 13:18:12 -06:00
Bel LaPointe
e5e98e2890 reenable queue new_persistence 2024-04-19 12:44:58 -06:00
Bel LaPointe
4fb26ec775 fix sqlite :memory: dont actually work 2024-04-19 12:42:36 -06:00
Bel LaPointe
782b9ec3cf if no therads or no messages then no ai 2024-04-19 12:39:50 -06:00
Bel LaPointe
9d7f69bd8a default ollama model to llama3 2024-04-19 12:36:25 -06:00
Bel LaPointe
12de99da57 impl GET /api/v1/rpc/aievent?id=123 2024-04-19 12:34:49 -06:00
Bel LaPointe
81fe8070ca impl storage GetEventThreads 2024-04-19 12:25:40 -06:00
Bel LaPointe
79de56e236 impl storage.GetThreadMessages 2024-04-19 12:21:31 -06:00
Bel LaPointe
f485b5ea88 from OLLAMA_U_R_L to OLLAMA_URL 2024-04-19 11:54:09 -06:00
Bel LaPointe
894536d209 oops drop bad log 2024-04-18 15:05:25 -06:00
Bel LaPointe
f8861a73b5 async slack scrape goes up to ?since 2024-04-18 14:56:33 -06:00
Bel LaPointe
14de286415 go test -tags=ai -v -run=AI works with ollama which is cool and fast with llama3 2024-04-18 14:08:28 -06:00
Bel LaPointe
8557ddc522 boo 2024-04-17 17:32:07 -06:00
Bel LaPointe
1e43c2a14e split 2024-04-17 17:28:48 -06:00
Bel LaPointe
04c574ffec f it 2024-04-17 16:58:06 -06:00
Bel LaPointe
b2f64037e2 GET /api/v1/version 2024-04-17 16:23:15 -06:00
Bel LaPointe
fbd151f9ef when initializing slack, stash token in driver 2024-04-17 16:07:43 -06:00
Bel LaPointe
5f21098fdc test and update asset pattern to catch ip addresses 2024-04-17 03:58:43 -06:00
Bel LaPointe
7c2d663401 do not cry to me 2024-04-16 15:08:21 -06:00
Bel LaPointe
95b0394199 slack can parse optional channel wrapper for scrape 2024-04-16 15:03:24 -06:00
Bel LaPointe
d9d91193dd oh yeah one big todo 2024-04-16 11:03:14 -06:00
Bel LaPointe
2033cefc2a todo now that we have https://grafana.render.com/d/a5df6a90-087b-4655-90d0-e51dcb89f568/2024-04-16-spocbotvr?orgId=1&from=now-7d&to=now&var-interval=1h&var-Teams=All&var-Assets=All&var-Name=All&var-Names=All 2024-04-16 11:01:45 -06:00
Bel LaPointe
600a2e0111 drop URL from Message as it is unfilled and superfluous 2024-04-16 09:13:44 -06:00
Bel LaPointe
4f3b8ec866 we are ready to ship 2024-04-16 09:09:44 -06:00
Bel LaPointe
c4a7eaf04a i am capped at 9args per a bug somewhere between pq and postgres but heck if i care 2024-04-16 09:06:32 -06:00
Bel LaPointe
5557c0920a pg passing mvp test with queue 2024-04-16 08:53:09 -06:00
Bel LaPointe
c7f5cdb040 TODO 2024-04-16 08:33:30 -06:00
Bel LaPointe
a8e8fdc451 synchronous fanout from channel to threads for scrape 2024-04-16 08:33:21 -06:00
Bel LaPointe
d88a8bb23a oof string != byte arr got it 2024-04-16 08:08:47 -06:00
Bel LaPointe
39c0056190 found running locally that i dont need rest, pipeline needs a way to drop messages as garbage 2024-04-16 07:56:54 -06:00
Bel LaPointe
d87af2fadc readme todo 2024-04-16 07:40:51 -06:00
Bel LaPointe
9bc47bfde6 added scraping Routing Team as Team 2024-04-16 07:40:43 -06:00
Bel LaPointe
098986eb07 nil ptr in main test on pipeline spinup race 2024-04-16 07:36:12 -06:00
Bel LaPointe
5bc068451f find Author from slack 2024-04-16 07:35:58 -06:00
Bel LaPointe
5fa21d0cd9 model to persist pipeline tests OK 2024-04-16 07:29:42 -06:00
Bel LaPointe
709f2ac254 slack to model pipeline tested and K 2024-04-16 07:19:25 -06:00
Bel LaPointe
ba06796b8c save what i need from old .Message 2024-04-16 07:05:30 -06:00
Bel LaPointe
f38c183fe8 stubbing 2024-04-16 06:53:30 -06:00
Bel LaPointe
8ae8f47753 tests run and fail again 2024-04-16 06:52:07 -06:00
Bel LaPointe
e372be4288 rm temp 2024-04-16 06:46:30 -06:00
Bel LaPointe
acfd95e5af everything just werks for storage 2024-04-16 06:45:40 -06:00
Bel LaPointe
d70a0e313f STORAGE TEST WERKS 2024-04-16 06:43:25 -06:00
Bel LaPointe
0cecd5ea04 add external id to .model 2024-04-16 05:55:35 -06:00
Bel LaPointe
a7d5d021d6 fill model 2024-04-16 05:51:08 -06:00
bel
44db0c6939 todo 2024-04-15 21:25:13 -06:00
bel
dd98aedb5d class diagram pretty 2024-04-15 21:04:54 -06:00
bel
254cb1ec0a yay 2024-04-15 20:43:43 -06:00
Bel LaPointe
38a68de67f todo 2024-04-15 17:10:34 -06:00
Bel LaPointe
3c62411927 wip normalize 2024-04-15 17:09:31 -06:00
Bel LaPointe
c84d80e8d3 test message to persistence 2024-04-15 17:00:15 -06:00
Bel LaPointe
74477fc09c data design time procrastination 2024-04-15 16:38:01 -06:00
Bel LaPointe
1fd4b72b22 todo 2024-04-15 16:36:25 -06:00
Bel LaPointe
d9244e4e1c t.Parallel pls 2024-04-15 16:35:51 -06:00
Bel LaPointe
c9d3b4998b sql :memory: dont work so make a helper NewTestDriver 2024-04-15 16:34:19 -06:00
Bel LaPointe
c5e1556f61 sp 2024-04-15 16:26:24 -06:00
Bel LaPointe
d76f8e2c15 stub second pipeline 2024-04-15 16:26:16 -06:00
Bel LaPointe
ff280997b1 main can run many pipelines 2024-04-15 16:23:41 -06:00
Bel LaPointe
83c0ee3f53 ok report still botched but im werkin on it 2024-04-15 16:13:31 -06:00
Bel LaPointe
9d7a175c62 at least main_test runs 2024-04-15 16:12:41 -06:00
Bel LaPointe
1dcffdd956 ew compile errs 2024-04-15 16:04:12 -06:00
Bel LaPointe
580068d98b revive message and test slack pipeline parses slack into message 2024-04-15 15:57:56 -06:00
Bel LaPointe
eec5c39725 go mod tidy 2024-04-15 15:50:29 -06:00
Bel LaPointe
9848492b1e no test driver non driver things 2024-04-15 15:50:18 -06:00
Bel LaPointe
a674022357 revive ai, config*.go 2024-04-15 15:49:48 -06:00
Bel LaPointe
80df07089f rename ingest to pipeline 2024-04-15 15:22:23 -06:00
Bel LaPointe
d792626c2f test ingest 2024-04-15 15:18:03 -06:00
Bel LaPointe
acac2a60b0 finish ingest loop 2024-04-15 14:19:53 -06:00
Bel LaPointe
eef78d6e39 noop queue and topics embedded 2024-04-15 14:18:39 -06:00
Bel LaPointe
42c5b7d7ad multi topic done 2024-04-15 13:30:35 -06:00
Bel LaPointe
e85a2d25a1 queue from Dequeue to Syn for SynAck 2024-04-15 13:18:14 -06:00
Bel LaPointe
8193bf7377 f sql jeez 2024-04-15 13:08:21 -06:00
Bel LaPointe
2f3739b24f functions are good 2024-04-15 09:14:59 -06:00
Bel LaPointe
d38352f050 ooooo it is pretty 2024-04-15 09:09:09 -06:00
Bel LaPointe
ba833fa315 this is getting k 2024-04-15 08:45:20 -06:00
Bel LaPointe
d7cbcb9926 d3js not for me 2024-04-15 08:23:10 -06:00
Bel LaPointe
961be827d0 log less 2024-04-15 07:35:39 -06:00
Bel LaPointe
6fbafe6700 PUT /api/v1/rpc/scrapeslack 2024-04-15 07:34:30 -06:00
Bel LaPointe
7df7528ccf can parse slack messages from scraping channel history too 2024-04-15 07:34:21 -06:00
Bel LaPointe
a91da082c7 no max width on report.tmpl 2024-04-15 06:52:27 -06:00
Bel LaPointe
af2ad44109 remove debug console.log 2024-04-15 06:51:21 -06:00
Bel LaPointe
cabc5c00b7 dynamic alert dump via filters 2024-04-15 06:50:41 -06:00
Bel LaPointe
84dec31e53 more filter fields 2024-04-15 06:00:28 -06:00
Bel LaPointe
f2a23e5d8a extract named Pattern result if a group is named 2024-04-15 05:55:30 -06:00
Bel LaPointe
a8270b524c oops tests are failing 2024-04-14 20:13:48 -06:00
bel
902ab96b2d wip pattern 2024-04-14 15:40:49 -06:00
bel
60017a8d3a a select 2024-04-14 10:22:43 -06:00
bel
39ed9280e1 Merge branch 'main' of https://gitea.inhome.blapointe.com/render/slack-bot-vr into main 2024-04-14 10:02:18 -06:00
Bel LaPointe
007611fb4f fix ai test and it runs in just 4s on laptop so it is feasible 2024-04-14 09:26:41 -06:00
Bel LaPointe
f8002053f5 delete comment 2024-04-14 09:21:52 -06:00
Bel LaPointe
4e3818046d ew 2024-04-14 09:19:29 -06:00
Bel LaPointe
c89a9a8ada Merge remote-tracking branch 'gitea/main' 2024-04-14 09:19:14 -06:00
bel
ac6bf30042 fun day 2024-04-14 01:16:54 -06:00
bel
4eb2117f21 yagni 2024-04-13 23:51:27 -06:00
bel
6411011e62 drop redundant 2024-04-13 23:50:54 -06:00
bel
444ca5d0ca table complete and linked 2024-04-13 23:49:46 -06:00
bel
bb72ff4bfa OH HEY 2024-04-13 23:35:51 -06:00
bel
e8d52274e7 again verbose 2024-04-13 23:16:20 -06:00
bel
8a67f505ce ok template input a lot more verbose 2024-04-13 22:50:38 -06:00
bel
f4b04e01d3 ok but now server side make template ez 2024-04-13 21:26:17 -06:00
bel
e33d1a6a4b i think it is time to pipe or draw 2024-04-13 19:26:37 -06:00
bel
4ac55e2eea fix naming rows vs columns 2024-04-13 19:18:31 -06:00
bel
b0c9c1cf9e column and row tags ezpz 2024-04-13 19:04:49 -06:00
bel
bf34835305 ew 2024-04-13 14:02:41 -06:00
bel
a5f332b991 ready to dev 2024-04-13 10:46:37 -06:00
bel
258a51af0b i think i need to build gui now 2024-04-13 10:30:28 -06:00
bel
a3630a8fda refactor 2024-04-13 10:28:55 -06:00
bel
6b962ea509 parse datacenter from Tags field 2024-04-13 10:24:02 -06:00
bel
b1d93a7698 add /events and /eventnames 2024-04-13 10:15:29 -06:00
bel
4111ce9153 drop TODO 2024-04-13 10:04:52 -06:00
bel
85d589a570 remove # from event 2024-04-13 10:03:54 -06:00
bel
10630df394 Accept $ASSET_PATTERN 2024-04-13 10:00:21 -06:00
bel
1324376399 also accpt text/tsv 2024-04-13 09:52:53 -06:00
bel
e58fa50656 accept Accept:text/csv 2024-04-13 09:48:28 -06:00
bel
9bfbcf2d70 wip text/csv 2024-04-13 09:29:00 -06:00
bel
847cd83fd5 extract into writeJSON 2024-04-13 09:26:02 -06:00
Bel LaPointe
da0125c663 tried 2024-04-12 18:15:31 -06:00
37 changed files with 2978 additions and 1202 deletions

message.go (new file, 222 lines)

@@ -0,0 +1,222 @@
package main
import (
"encoding/json"
"errors"
"fmt"
"regexp"
"strconv"
"strings"
"time"
)
var (
ErrIrrelevantMessage = errors.New("message isnt relevant to spoc bot vr")
)
type Message struct {
ID string
TS uint64
Source string
Channel string
Thread string
EventName string
Event string
Plaintext string
Asset string
Resolved bool
Datacenter string
}
func (m Message) Empty() bool {
return m == (Message{})
}
func (m Message) Time() time.Time {
return time.Unix(int64(m.TS), 0)
}
func (m Message) Serialize() []byte {
b, err := json.Marshal(m)
if err != nil {
panic(err)
}
return b
}
func MustDeserialize(b []byte) Message {
m, err := Deserialize(b)
if err != nil {
panic(err)
}
return m
}
func Deserialize(b []byte) (Message, error) {
var m Message
err := json.Unmarshal(b, &m)
return m, err
}
type (
slackMessage struct {
slackEvent
Type string
TS uint64 `json:"event_time"`
Event slackEvent
MessageTS string `json:"ts"`
}
slackEvent struct {
ID string `json:"event_ts"`
Channel string
// rewrites
Nested *slackEvent `json:"message"`
PreviousMessage *slackEvent `json:"previous_message"`
// human
ParentID string `json:"thread_ts"`
Text string
Blocks []slackBlock
// bot
Bot slackBot `json:"bot_profile"`
Attachments []slackAttachment
}
slackBlock struct {
Elements []slackElement
}
slackElement struct {
Elements []slackElement
RichText string `json:"text"`
}
slackBot struct {
Name string
}
slackAttachment struct {
Color string
Title string
Text string
Fields []slackField
Actions []slackAction
}
slackField struct {
Value string
Title string
}
slackAction struct{}
)
func ParseSlack(b []byte, assetPattern, datacenterPattern, eventNamePattern string) (Message, error) {
return ParseSlackFromChannel(b, assetPattern, datacenterPattern, eventNamePattern, "")
}
func ParseSlackFromChannel(b []byte, assetPattern, datacenterPattern, eventNamePattern string, ch string) (Message, error) {
m, err := parseSlackJSON(b, ch)
if err != nil {
return Message{}, err
}
for pattern, ptr := range map[string]*string{
assetPattern: &m.Asset,
datacenterPattern: &m.Datacenter,
eventNamePattern: &m.EventName,
} {
r := regexp.MustCompile(pattern)
parsed := r.FindString(*ptr)
for i, name := range r.SubexpNames() {
if i > 0 && name != "" {
parsed = r.FindStringSubmatch(*ptr)[i]
}
}
*ptr = parsed
}
return m, nil
}
func parseSlackJSON(b []byte, ch string) (Message, error) {
s, err := _parseSlackJSON(b)
if err != nil {
return Message{}, err
}
if ch != "" {
s.Event.Channel = ch
}
if s.Event.Bot.Name != "" {
if len(s.Event.Attachments) == 0 {
return Message{}, ErrIrrelevantMessage
} else if !strings.Contains(s.Event.Attachments[0].Title, ": Firing: ") {
return Message{}, ErrIrrelevantMessage
}
var tagsField string
for _, field := range s.Event.Attachments[0].Fields {
if field.Title == "Tags" {
tagsField = field.Value
}
}
return Message{
ID: fmt.Sprintf("%s/%v", s.Event.ID, s.TS),
TS: s.TS,
Source: fmt.Sprintf(`https://renderinc.slack.com/archives/%s/p%s`, s.Event.Channel, strings.ReplaceAll(s.Event.ID, ".", "")),
Channel: s.Event.Channel,
Thread: s.Event.ID,
EventName: strings.Split(s.Event.Attachments[0].Title, ": Firing: ")[1],
Event: strings.TrimPrefix(strings.Split(s.Event.Attachments[0].Title, ":")[0], "#"),
Plaintext: s.Event.Attachments[0].Text,
Asset: s.Event.Attachments[0].Text,
Resolved: !strings.HasPrefix(s.Event.Attachments[0].Color, "F"),
Datacenter: tagsField,
}, nil
}
if s.Event.ParentID == "" {
return Message{}, ErrIrrelevantMessage
}
return Message{
ID: fmt.Sprintf("%s/%v", s.Event.ParentID, s.TS),
TS: s.TS,
Source: fmt.Sprintf(`https://renderinc.slack.com/archives/%s/p%s`, s.Event.Channel, strings.ReplaceAll(s.Event.ParentID, ".", "")),
Channel: s.Event.Channel,
Thread: s.Event.ParentID,
EventName: "",
Event: "",
Plaintext: s.Event.Text,
Asset: "",
Datacenter: "",
}, nil
}
func _parseSlackJSON(b []byte) (slackMessage, error) {
var result slackMessage
err := json.Unmarshal(b, &result)
switch result.Type {
case "message":
result.Event = result.slackEvent
result.TS, _ = strconv.ParseUint(strings.Split(result.MessageTS, ".")[0], 10, 64)
result.Event.ID = result.MessageTS
}
if result.Event.Nested != nil && !result.Event.Nested.Empty() {
result.Event.Blocks = result.Event.Nested.Blocks
result.Event.Bot = result.Event.Nested.Bot
result.Event.Attachments = result.Event.Nested.Attachments
result.Event.Nested = nil
}
if result.Event.PreviousMessage != nil {
if result.Event.PreviousMessage.ID != "" {
result.Event.ID = result.Event.PreviousMessage.ID
}
result.Event.PreviousMessage = nil
}
return result, err
}
func (this slackEvent) Empty() bool {
return fmt.Sprintf("%+v", this) == fmt.Sprintf("%+v", slackEvent{})
}
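The `Serialize`/`Deserialize` pair above is a plain JSON round-trip, with the `Must*` variant panicking on failure. A minimal standalone sketch of the same pattern (only a subset of the fields above, for brevity):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Message mirrors a subset of the struct in message.go.
type Message struct {
	ID        string
	TS        uint64
	Plaintext string
}

// Serialize panics on failure, matching the MustDeserialize convention above.
func (m Message) Serialize() []byte {
	b, err := json.Marshal(m)
	if err != nil {
		panic(err)
	}
	return b
}

// Deserialize is the inverse: bytes back into a Message.
func Deserialize(b []byte) (Message, error) {
	var m Message
	err := json.Unmarshal(b, &m)
	return m, err
}

func main() {
	in := Message{ID: "1713570000.000100/1713570000", TS: 1713570000, Plaintext: "hi"}
	out, err := Deserialize(in.Serialize())
	if err != nil || out != in {
		panic("round trip failed")
	}
	fmt.Println("round trip OK")
}
```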

report.go (new file, 148 lines)

@@ -0,0 +1,148 @@
package main
import (
"context"
_ "embed"
"encoding/json"
"errors"
"io"
"slices"
"sort"
"text/template"
"time"
)
//go:embed report.tmpl
var reportTMPL string
func ReportSince(ctx context.Context, w io.Writer, s Storage, t time.Time) error {
tmpl := template.New("report").Funcs(map[string]any{
"time": func(foo string, args ...any) (any, error) {
switch foo {
case "Unix":
seconds, _ := args[0].(uint64)
return time.Unix(int64(seconds), 0), nil
case "Time.Format":
t, _ := args[1].(time.Time)
return t.Format(args[0].(string)), nil
}
return nil, errors.New("not impl")
},
"json": func(foo string, args ...any) (any, error) {
switch foo {
case "Marshal":
b, err := json.Marshal(args[0])
return string(b), err
}
return nil, errors.New("not impl")
},
})
tmpl, err := tmpl.Parse(reportTMPL)
if err != nil {
return err
}
messages, err := s.MessagesSince(ctx, t)
if err != nil {
return err
}
eventNames, err := s.EventNamesSince(ctx, t)
if err != nil {
return err
}
eventIDs, err := s.EventsSince(ctx, t)
if err != nil {
return err
}
type aThread struct {
Thread string
Messages []Message
First Message
Last Message
}
type anEvent struct {
Event string
Threads []aThread
First Message
Last Message
}
type someEvents struct {
Events []anEvent
}
return tmpl.Execute(w, map[string]any{
"since": t.Format("2006-01-02"),
"events": func() someEvents {
events := make([]anEvent, len(eventIDs))
for i, event := range eventIDs {
events[i] = func() anEvent {
threadNames := []string{}
for _, m := range messages {
if m.Event == event {
threadNames = append(threadNames, m.Thread)
}
}
slices.Sort(threadNames)
slices.Compact(threadNames)
threads := make([]aThread, len(threadNames))
for i, thread := range threadNames {
threads[i] = func() aThread {
someMessages := []Message{}
for _, m := range messages {
if m.Thread == thread {
someMessages = append(someMessages, m)
}
}
sort.Slice(someMessages, func(i, j int) bool {
return someMessages[i].TS < someMessages[j].TS
})
return aThread{
Thread: thread,
Messages: someMessages,
First: func() Message {
if len(someMessages) == 0 {
return Message{}
}
return someMessages[0]
}(),
Last: func() Message {
if len(someMessages) == 0 {
return Message{}
}
return someMessages[len(someMessages)-1]
}(),
}
}()
}
sort.Slice(threads, func(i, j int) bool {
return threads[i].First.TS < threads[j].First.TS
})
return anEvent{
Event: event,
Threads: threads,
First: func() Message {
if len(threads) == 0 {
return Message{}
}
return threads[0].First
}(),
Last: func() Message {
if len(threads) == 0 {
return Message{}
}
return threads[len(threads)-1].Last
}(),
}
}()
}
return someEvents{
Events: events,
}
}(),
"messages": messages,
"eventNames": eventNames,
})
}
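`ReportSince` routes all template helpers through one function per package (`time`, `json`), with the first argument naming the operation. A self-contained sketch of that dispatch style with `text/template` (the template string here is illustrative, not from `report.tmpl`):

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"text/template"
	"time"
)

// render mirrors the FuncMap dispatch used in ReportSince: the helper's
// first argument selects the operation, remaining args are its inputs.
func render(ts uint64) string {
	funcs := template.FuncMap{
		"time": func(op string, args ...any) (any, error) {
			switch op {
			case "Unix":
				seconds, _ := args[0].(uint64)
				return time.Unix(int64(seconds), 0).UTC(), nil
			}
			return nil, errors.New("not impl")
		},
	}
	tmpl := template.Must(template.New("demo").Funcs(funcs).Parse(
		`first seen: {{ time "Unix" .TS }}`))
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, map[string]any{"TS": ts}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Println(render(0)) // first seen: 1970-01-01 00:00:00 +0000 UTC
}
```

The tradeoff of this style is that unknown operations fail at execute time ("not impl") rather than at parse time, which is why `ReportSince` returns the error from `Execute`.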

report.tmpl (new file, 227 lines)

@@ -0,0 +1,227 @@
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/water.css@2/out/water.css">
<script src="https://code.highcharts.com/10/highcharts.js"></script>
<script type="module">
const allMessages = {{ json "Marshal" .messages }};
function fillForm() {
const filterableFields = [
"Asset",
"Channel",
"Event",
"EventName",
"Resolved",
"Thread",
];
const fieldsToOptions = {};
filterableFields.map((field) => {fieldsToOptions[field] = {}});
allMessages.map((message) => {
Object.keys(fieldsToOptions).map((field) => {fieldsToOptions[field][message[field]] = true});
});
Object.keys(fieldsToOptions).map((field) => {fieldsToOptions[field] = Object.keys(fieldsToOptions[field]); fieldsToOptions[field].sort();});
document.getElementById("form").innerHTML = Object.keys(fieldsToOptions).map((field) => {
return `
<label for="${field}">${field}</label>
<select name="${field}" multiple ${fieldsToOptions[field].length > 10 ? "size=10" : `size=${fieldsToOptions[field].length}`}>
${fieldsToOptions[field].map((option) => `
<option selected>${option}</option>
`)}
</select>
`
}).join("\n");
}
window.fillForm = fillForm;
function drawAll() {
const messages = filterMessages(allMessages)
dumpEvents(messages);
drawEventVolume(messages)
drawEventVolumeByHour(messages)
drawEventVolumeByAsset(messages)
}
window.drawAll = drawAll;
function dumpEvents(messages) {
const eventToThreads = {};
for(var m of messages) {
if (!eventToThreads[m.Event])
eventToThreads[m.Event] = [];
eventToThreads[m.Event].push(m.Thread);
}
const threadToMessages = {};
for(var m of messages) {
if (!threadToMessages[m.Thread])
threadToMessages[m.Thread] = [];
threadToMessages[m.Thread].push(m);
}
const eventToMessages = {};
for(var e in eventToThreads) {
if (!eventToMessages[e])
eventToMessages[e] = [];
for (var thread of eventToThreads[e])
eventToMessages[e] = eventToMessages[e].concat(threadToMessages[thread]);
}
for(var e in eventToMessages)
eventToMessages[e].sort((a, b) => a.TS - b.TS);
var events = Object.keys(eventToMessages);
events.sort();
events.reverse();
var keys = ["TS", "Event", "EventName", "Latest"];
document.getElementById("events").innerHTML = `
<tr>
<th>TS</th>
<th>Event</th>
<th>EventName</th>
<th>Latest</th>
</tr>
${events.map((e) => `
<tr>
<td><a href="${eventToMessages[e][0].Source}">${new Date(eventToMessages[e][0].TS * 1000).toDateString()}</a></td>
<td><a href="${eventToMessages[e][0].Source}">${eventToMessages[e][0].Event}</a></td>
<td>${eventToMessages[e][0].EventName}</td>
<td><a href="${eventToMessages[e].at(-1).Source}">${eventToMessages[e].at(-1).Plaintext}</a></td>
</tr>
`).join("")}
`;
}
function filterMessages(messages) {
const selects = document.getElementById("form").getElementsByTagName("select");
const fieldsToOptions = {};
for(var select of selects) {
fieldsToOptions[select.name] = [];
for(var option of select.getElementsByTagName("option"))
if (option.selected)
fieldsToOptions[select.name].push(option.innerHTML);
}
return messages.map((m) => {
for(var k in fieldsToOptions) {
if (fieldsToOptions[k].filter((v) => `${v}` == `${m[k]}`).length == 0) {
return null;
}
}
return m;
}).filter((m) => { return m != null });
}
function drawEventVolume(messages) {
drawEventVolumeWith(
messages,
"eventVolume",
(ts) => new Date(1000 * ts).
toLocaleDateString('en-US', {month: 'numeric', day: 'numeric', weekday: 'short'}),
(m) => m.EventName,
);
}
function drawEventVolumeWith(messages, documentId, kify, nameify) {
const points = [];
messages.forEach((m) => {
points.push({x: m.TS, name: nameify(m)});
});
var xs = points.map((point) => point.x);
if (xs && !isNaN(parseFloat(kify(xs[0])))) {
xs = xs.map(kify);
xs.sort((a, b) => parseFloat(a) - parseFloat(b));
} else {
xs.sort();
xs = xs.map(kify);
}
xs = [...new Set(xs)];
const names = [...new Set(points.map((p) => p.name))];
const nameAndData = names.map((name) => {
return {
name: name,
data: xs.map((x) => points.filter((p) => { return p.name == name && kify(p.x) == x }).length),
}
});
draw(documentId, xs, nameAndData);
}
function drawEventVolumeByHour(messages) {
drawEventVolumeWith(
messages,
"eventVolumeByHour",
(ts) => new Date(1000 * ts).getHours(),
(m) => m.EventName,
);
}
function drawEventVolumeByAsset(messages) {}
function draw(documentId, xs, nameAndData) {
document.getElementById(documentId).innerHTML = "";
Highcharts.chart(documentId, {
chart: { type: 'column' },
title: { text: '' },
xAxis: { categories: xs },
yAxis: { allowDecimals: false, title: { text: '' } },
//legend: { enabled: false },
series: nameAndData,
plotOptions: { column: { stacking: 'normal' } },
});
}
</script>
<style>
rows {
display: flex;
flex-direction: column;
flex-grow: 1;
}
columns {
display: flex;
flex-direction: row;
flex-grow: 1;
}
rows, columns { border: 1px solid red; }
</style>
</head>
<body onload="fillForm(); drawAll();" style="max-width: inherit;">
<h1>Report</h1>
<columns>
<form style="width: 16em; flex-shrink: 0;" onsubmit="drawAll(); return false;">
<columns>
<button type="submit">Apply</button>
</columns>
<rows id="form"></rows>
</form>
<rows>
<rows>
<rows>
<h2>Event Volume</h2>
<div id="eventVolume"></div>
</rows>
<columns>
<rows>
<h3>by Hour</h3>
<div id="eventVolumeByHour"></div>
</rows>
</columns>
<rows>
<h3>by Asset</h3>
<div>DRAW ME</div>
</rows>
</rows>
<rows>
<div>
<h2>Events</h2>
<table id="events">
</table>
</div>
</rows>
</rows>
</columns>
</body>
<footer>
</footer>
</html>
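The `filterMessages` function in the template above keeps a message only if, for every filterable field, one of the currently selected options equals the message's value. The same intersection logic expressed in Go (the field set here is illustrative, not the full list from the template):

```go
package main

import "fmt"

type Message struct {
	Channel   string
	EventName string
}

// filter keeps messages whose value for every constrained field is in the
// selected set, mirroring the JS filterMessages in report.tmpl.
func filter(msgs []Message, selected map[string]map[string]bool) []Message {
	value := func(m Message, field string) string {
		switch field {
		case "Channel":
			return m.Channel
		case "EventName":
			return m.EventName
		}
		return ""
	}
	var out []Message
	for _, m := range msgs {
		ok := true
		for field, opts := range selected {
			if !opts[value(m, field)] {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	msgs := []Message{
		{Channel: "alerts", EventName: "HighCPU"},
		{Channel: "alerts", EventName: "DiskFull"},
	}
	kept := filter(msgs, map[string]map[string]bool{"EventName": {"HighCPU": true}})
	fmt.Println(len(kept)) // 1
}
```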

report_test.go (new file, 32 lines)

@@ -0,0 +1,32 @@
package main
import (
"bytes"
"context"
"os"
"path"
"testing"
"time"
)
func TestReport(t *testing.T) {
ctx, can := context.WithTimeout(context.Background(), time.Minute)
defer can()
w := bytes.NewBuffer(nil)
db := NewRAM()
FillWithTestdata(ctx, db, renderAssetPattern, renderDatacenterPattern, renderEventNamePattern)
s := NewStorage(db)
if err := ReportSince(ctx, w, s, time.Now().Add(-1*time.Hour*24*365*20)); err != nil {
t.Fatal(err)
}
p := path.Join(os.TempDir(), "test_report.html")
if env := os.Getenv("TEST_REPORT_PATH"); env != "" {
p = env
}
os.WriteFile(p, w.Bytes(), os.ModePerm)
t.Log(p)
}


@@ -1,3 +1,36 @@
# Spoc Bot v. Render
Thank you, [Sean](https://www.linkedin.com/in/sean-moore-1755a619/)
## TODO
- what SLO/SLI can I help benoit with
- scott; like to keep state in incident.io and zendesk
- @spoc -ignore, @spoc -s summary
- limit queue retries
```
erDiagram
%% thread event eventName
EVENT ||--|{ THREAD: "spawns"
THREAD ||--|{ MESSAGE: "populated by"
MESSAGE {
ID str
URL str
TS number
Plaintext str
}
THREAD {
ID str
URL str
Channel str
}
EVENT {
ID str
Name str
Asset str
Resolved bool
Datacenter str
}
```

ai.go (139 lines changed)

@@ -1,13 +1,10 @@
package main
import (
"bytes"
"context"
"os"
"strings"
"net/http"
"time"
nn "github.com/nikolaydubina/llama2.go/exp/nnfast"
"github.com/nikolaydubina/llama2.go/llama2"
"github.com/tmc/langchaingo/llms"
"github.com/tmc/langchaingo/llms/ollama"
)
@@ -37,133 +34,23 @@ func NewAIOllama(url, model string) AIOllama {
}
func (ai AIOllama) Do(ctx context.Context, prompt string) (string, error) {
c := &http.Client{
Timeout: time.Hour,
Transport: &http.Transport{
//DisableKeepAlives: true,
IdleConnTimeout: time.Hour,
ResponseHeaderTimeout: time.Hour,
ExpectContinueTimeout: time.Hour,
},
}
defer c.CloseIdleConnections()
llm, err := ollama.New(
ollama.WithModel(ai.model),
ollama.WithServerURL(ai.url),
ollama.WithHTTPClient(c),
)
if err != nil {
return "", err
}
return llms.GenerateFromSinglePrompt(ctx, llm, prompt)
}
type AILocal struct {
checkpointPath string
tokenizerPath string
temperature float64
steps int
topp float64
}
func NewAILocal(
checkpointPath string,
tokenizerPath string,
temperature float64,
steps int,
topp float64,
) AILocal {
return AILocal{
checkpointPath: checkpointPath,
tokenizerPath: tokenizerPath,
temperature: temperature,
steps: steps,
topp: topp,
}
}
// https://github.com/nikolaydubina/llama2.go/blob/master/main.go
func (ai AILocal) Do(ctx context.Context, prompt string) (string, error) {
checkpointFile, err := os.OpenFile(ai.checkpointPath, os.O_RDONLY, 0)
if err != nil {
return "", err
}
defer checkpointFile.Close()
config, err := llama2.NewConfigFromCheckpoint(checkpointFile)
if err != nil {
return "", err
}
isSharedWeights := config.VocabSize > 0
if config.VocabSize < 0 {
config.VocabSize = -config.VocabSize
}
tokenizerFile, err := os.OpenFile(ai.tokenizerPath, os.O_RDONLY, 0)
if err != nil {
return "", err
}
defer tokenizerFile.Close()
vocab := llama2.NewVocabFromFile(config.VocabSize, tokenizerFile)
w := llama2.NewTransformerWeightsFromCheckpoint(config, checkpointFile, isSharedWeights)
// right now we cannot run for more than config.SeqLen steps
steps := ai.steps
if steps <= 0 || steps > config.SeqLen {
steps = config.SeqLen
}
runState := llama2.NewRunState(config)
promptTokens := vocab.Encode(strings.ReplaceAll(prompt, "\n", "<0x0A>"))
out := bytes.NewBuffer(nil)
// the current position we are in
var token int = 1 // 1 = BOS token in llama-2 sentencepiece
var pos = 0
for pos < steps {
// forward the transformer to get logits for the next token
llama2.Transformer(token, pos, config, runState, w)
var next int
if pos < len(promptTokens) {
next = promptTokens[pos]
} else {
// sample the next token
if ai.temperature == 0 {
// greedy argmax sampling
next = nn.ArgMax(runState.Logits)
} else {
// apply the temperature to the logits
for q := 0; q < config.VocabSize; q++ {
runState.Logits[q] /= float32(ai.temperature)
}
// apply softmax to the logits to the probabilities for next token
nn.SoftMax(runState.Logits)
// we now want to sample from this distribution to get the next token
if ai.topp <= 0 || ai.topp >= 1 {
// simply sample from the predicted probability distribution
next = nn.Sample(runState.Logits)
} else {
// top-p (nucleus) sampling, clamping the least likely tokens to zero
next = nn.SampleTopP(runState.Logits, float32(ai.topp))
}
}
}
pos++
// data-dependent terminating condition: the BOS (1) token delimits sequences
if next == 1 {
break
}
// following BOS (1) token, sentencepiece decoder strips any leading whitespace
var tokenStr string
if token == 1 && vocab.Words[next][0] == ' ' {
tokenStr = vocab.Words[next][1:]
} else {
tokenStr = vocab.Words[next]
}
out.Write([]byte(tokenStr))
// advance forward
token = next
}
out.Write([]byte("\n"))
return strings.ReplaceAll(string(out.Bytes()), "<0x0A>", "\n"), nil
}


@@ -4,68 +4,20 @@ package main
import (
"context"
"fmt"
"io"
"net/http"
"os"
"path"
"testing"
"time"
)
func TestAINoop(t *testing.T) {
t.Parallel()
ai := NewAINoop()
testAI(t, ai)
}
func TestAIOllama(t *testing.T) {
ai := NewAIOllama("http://localhost:11434", "gemma:2b")
testAI(t, ai)
}
func TestAILocal(t *testing.T) {
d := os.TempDir()
checkpoints := "checkpoints"
tokenizer := "tokenizer"
for u, p := range map[string]*string{
"https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.bin": &checkpoints,
"https://github.com/karpathy/llama2.c/raw/master/tokenizer.bin": &tokenizer,
} {
func() {
*p = path.Base(u)
if _, err := os.Stat(path.Join(d, *p)); os.IsNotExist(err) {
t.Logf("downloading %s from %s", u, *p)
resp, err := http.Get(u)
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
f, err := os.Create(path.Join(d, *p))
if err != nil {
t.Fatal(err)
}
defer f.Close()
if _, err := io.Copy(f, resp.Body); err != nil {
f.Close()
os.Remove(path.Join(d, *p))
t.Fatal(err)
}
}
}()
}
ai := NewAILocal(
path.Join(d, checkpoints),
path.Join(d, tokenizer),
0.9,
256,
0.9,
)
t.Parallel()
ai := NewAIOllama("http://localhost:11434", "llama3")
testAI(t, ai)
}
@@ -75,7 +27,7 @@ func testAI(t *testing.T, ai AI) {
defer can()
t.Run("mvp", func(t *testing.T) {
if result, err := ai.Do(ctx, "hello world"); err != nil {
if result, err := ai.Do(ctx, "Tell me a fun fact."); err != nil {
t.Fatal(err)
} else if len(result) < 3 {
t.Error(result)
@@ -84,9 +36,10 @@ func testAI(t *testing.T, ai AI) {
}
})
/*
t.Run("simulation", func(t *testing.T) {
d := NewRAM()
FillWithTestdata(ctx, d)
FillWithTestdata(ctx, d, renderAssetPattern, renderDatacenterPattern, renderEventNamePattern)
s := NewStorage(d)
threads, err := s.Threads(ctx)
@@ -111,5 +64,5 @@ func testAI(t *testing.T, ai AI) {
}
t.Logf("\n\t%s\n->\n\t%s", input, result)
})
*/
}


@@ -3,7 +3,9 @@ package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"os"
"regexp"
"strconv"
@@ -17,20 +19,32 @@ type Config struct {
InitializeSlack bool
SlackToken string
SlackChannels []string
PostgresConn string
DriverConn string
BasicAuthUser string
BasicAuthPassword string
FillWithTestdata bool
OllamaURL string
OllamaUrl string
OllamaModel string
LocalCheckpoint string
LocalTokenizer string
storage Storage
queue Queue
RecapPromptIntro string
RecapPrompt string
AssetPattern string
DatacenterPattern string
EventNamePattern string
driver Driver
storage Storage
ai AI
slackToModelPipeline Pipeline
slackScrapePipeline Pipeline
modelToPersistencePipeline Pipeline
persistenceToRecapPipeline Pipeline
}
var (
renderAssetPattern = `(dpg|svc|red)-[a-z0-9-]*[a-z0-9]|ip-[0-9]+-[0-9]+-[0-9]+-[0-9]+\.[a-z]+-[a-z]+-[0-9]+\.compute\.internal`
renderDatacenterPattern = `[a-z]{4}[a-z]*-[0-9]`
renderEventNamePattern = `(\[[^\]]*\] *)?(?P<result>.*)`
)
func newConfig(ctx context.Context) (Config, error) {
return newConfigFromEnv(ctx, os.Getenv)
}
@@ -38,7 +52,12 @@ func newConfig(ctx context.Context) (Config, error) {
func newConfigFromEnv(ctx context.Context, getEnv func(string) string) (Config, error) {
def := Config{
Port: 38080,
OllamaModel: "gemma:2b",
OllamaModel: "llama3",
AssetPattern: renderAssetPattern,
DatacenterPattern: renderDatacenterPattern,
EventNamePattern: renderEventNamePattern,
RecapPromptIntro: "A Slack thread began with the following original post.",
RecapPrompt: "What is the summary of the responses to the Slack thread consisting of the following messages? Limit the summary to one sentence. Do not include any leading text. Be as brief as possible. No context is needed.",
}
var m map[string]any
@@ -92,31 +111,58 @@ func newConfigFromEnv(ctx context.Context, getEnv func(string) string) (Config,
return Config{}, err
}
result.driver = NewRAM()
if result.PostgresConn != "" {
ctx, can := context.WithTimeout(ctx, time.Second*10)
ctx, can := context.WithTimeout(ctx, time.Minute)
defer can()
pg, err := NewPostgres(ctx, result.PostgresConn)
driver, err := NewDriver(ctx, result.DriverConn)
if err != nil {
return Config{}, err
}
result.driver = pg
result.driver = driver
if !result.FillWithTestdata {
//} else if err := result.driver.FillWithTestdata(ctx, result.AssetPattern, result.DatacenterPattern, result.EventNamePattern); err != nil {
} else {
return Config{}, errors.New("not impl")
}
if result.FillWithTestdata {
if err := FillWithTestdata(ctx, result.driver); err != nil {
if result.Debug {
log.Printf("connected to driver at %s (%s @%s)", result.DriverConn, result.driver.engine, result.driver.conn)
}
storage, err := NewStorage(ctx, result.driver)
if err != nil {
return Config{}, err
}
}
result.storage = NewStorage(result.driver)
result.queue = NewQueue(result.driver)
result.storage = storage
if result.OllamaURL != "" {
result.ai = NewAIOllama(result.OllamaURL, result.OllamaModel)
} else if result.LocalCheckpoint != "" && result.LocalTokenizer != "" {
result.ai = NewAILocal(result.LocalCheckpoint, result.LocalTokenizer, 0.9, 128, 0.9)
if result.OllamaUrl != "" {
result.ai = NewAIOllama(result.OllamaUrl, result.OllamaModel)
} else {
result.ai = NewAINoop()
}
slackToModelPipeline, err := NewSlackToModelPipeline(ctx, result)
if err != nil {
return Config{}, err
}
result.slackToModelPipeline = slackToModelPipeline
modelToPersistencePipeline, err := NewModelToPersistencePipeline(ctx, result)
if err != nil {
return Config{}, err
}
result.modelToPersistencePipeline = modelToPersistencePipeline
slackScrapePipeline, err := NewSlackScrapePipeline(ctx, result)
if err != nil {
return Config{}, err
}
result.slackScrapePipeline = slackScrapePipeline
persistenceToRecapPipeline, err := NewPersistenceToRecapPipeline(ctx, result)
if err != nil {
return Config{}, err
}
result.persistenceToRecapPipeline = persistenceToRecapPipeline
return result, nil
}



@@ -6,6 +6,7 @@ import (
)
func TestNewConfig(t *testing.T) {
t.Parallel()
if got, err := newConfigFromEnv(context.Background(), func(k string) string {
t.Logf("getenv(%s)", k)
switch k {

driver.go (322 changed lines)

@@ -5,25 +5,67 @@ import (
"database/sql"
"errors"
"fmt"
"io/ioutil"
"net/url"
"os"
"path"
"sync"
"time"
"go.etcd.io/bbolt"
"testing"
_ "github.com/glebarez/go-sqlite"
_ "github.com/lib/pq"
)
type Driver interface {
Close() error
ForEach(context.Context, string, func(string, []byte) error) error
Get(context.Context, string, string) ([]byte, error)
Set(context.Context, string, string, []byte) error
type Driver struct {
engine string
conn string
*sql.DB
}
func FillWithTestdata(ctx context.Context, driver Driver) error {
func NewTestDriver(t *testing.T, optionalP ...string) Driver {
p := path.Join(t.TempDir(), "db")
if len(optionalP) > 0 {
p = optionalP[0]
}
driver, err := NewDriver(context.Background(), p)
if err != nil {
t.Fatal(err)
}
t.Cleanup(func() { driver.Close() })
return driver
}
func NewDriver(ctx context.Context, conn string) (Driver, error) {
engine := "sqlite"
if conn == "" {
f, err := os.CreateTemp(os.TempDir(), "spoc-bot-vr-undef-*.db")
if err != nil {
return Driver{}, err
}
f.Close()
conn = f.Name()
} else {
if u, err := url.Parse(conn); err != nil {
return Driver{}, err
} else if u.Scheme != "" {
engine = u.Scheme
}
}
db, err := sql.Open(engine, conn)
if err != nil {
return Driver{}, err
}
driver := Driver{DB: db, conn: conn, engine: engine}
if err := driver.setup(ctx); err != nil {
driver.Close()
return Driver{}, fmt.Errorf("failed setup: %w", err)
}
return driver, nil
}
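`NewDriver` picks its engine by sniffing the connection string: a URL scheme (e.g. `postgres://...`) names the engine, and a bare file path has no scheme and falls back to sqlite. A minimal sketch of just that detection step (the helper name `engineFor` is hypothetical):

```go
package main

import (
	"fmt"
	"net/url"
)

// engineFor mirrors the scheme sniffing in NewDriver: default to sqlite,
// but let a URL scheme in the connection string override the engine.
func engineFor(conn string) string {
	engine := "sqlite"
	if u, err := url.Parse(conn); err == nil && u.Scheme != "" {
		engine = u.Scheme
	}
	return engine
}

func main() {
	fmt.Println(engineFor("postgres://user:pass@localhost:5432/spoc")) // postgres
	fmt.Println(engineFor("/tmp/spoc.db"))                             // sqlite
}
```

Note that `NewDriver` (unlike this sketch) returns the parse error instead of swallowing it, and also creates a temp file when the connection string is empty.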
/*
func (driver Driver) FillWithTestdata(ctx context.Context, assetPattern, datacenterPattern, eventNamePattern string) error {
d := "./testdata/slack_events"
entries, err := os.ReadDir(d)
if err != nil {
@@ -37,7 +79,7 @@ func FillWithTestdata(ctx context.Context, driver Driver) error {
if err != nil {
return err
}
m, err := ParseSlack(b)
m, err := ParseSlack(b, assetPattern, datacenterPattern, eventNamePattern)
if errors.Is(err, ErrIrrelevantMessage) {
continue
} else if err != nil {
@@ -49,61 +91,17 @@ func FillWithTestdata(ctx context.Context, driver Driver) error {
}
return nil
}
*/
type Postgres struct {
db *sql.DB
}
func NewPostgres(ctx context.Context, conn string) (Postgres, error) {
db, err := sql.Open("postgres", conn)
if err != nil {
return Postgres{}, err
}
pg := Postgres{db: db}
if err := pg.setup(ctx); err != nil {
pg.Close()
return Postgres{}, fmt.Errorf("failed setup: %w", err)
}
return pg, nil
}
func (pg Postgres) setup(ctx context.Context) error {
tableQ, err := pg.table("q")
if err != nil {
func (driver Driver) setup(ctx context.Context) error {
_, err := driver.ExecContext(ctx, `
DROP TABLE IF EXISTS spoc_bot_vr_q;
DROP TABLE IF EXISTS spoc_bot_vr_messages;
`)
return err
}
tableM, err := pg.table("m")
if err != nil {
return err
}
if _, err := pg.db.ExecContext(ctx, fmt.Sprintf(`
CREATE TABLE IF NOT EXISTS %s (
id TEXT NOT NULL,
v JSONB NOT NULL
);
CREATE TABLE IF NOT EXISTS %s (
id TEXT NOT NULL,
v JSONB NOT NULL
);
ALTER TABLE %s DROP CONSTRAINT IF EXISTS %s_id_unique;
ALTER TABLE %s ADD CONSTRAINT %s_id_unique UNIQUE (id);
ALTER TABLE %s DROP CONSTRAINT IF EXISTS %s_id_unique;
ALTER TABLE %s ADD CONSTRAINT %s_id_unique UNIQUE (id);
`, tableQ,
tableM,
tableQ, tableQ,
tableQ, tableQ,
tableM, tableM,
tableM, tableM,
)); err != nil {
return err
}
return nil
}
func (pg Postgres) table(s string) (string, error) {
func (d Driver) table(s string) (string, error) {
switch s {
case "q":
return "spoc_bot_vr_q", nil
@@ -112,201 +110,3 @@ func (pg Postgres) table(s string) (string, error) {
}
return "", errors.New("invalid table " + s)
}
func (pg Postgres) Close() error {
return pg.db.Close()
}
func (pg Postgres) ForEach(ctx context.Context, ns string, cb func(string, []byte) error) error {
table, err := pg.table(ns)
if err != nil {
return err
}
rows, err := pg.db.QueryContext(ctx, fmt.Sprintf(`SELECT id, v FROM %s;`, table))
if err != nil {
return err
}
defer rows.Close()
for rows.Next() {
var id string
var v []byte
if err := rows.Scan(&id, &v); err != nil {
return err
} else if err := cb(id, v); err != nil {
return err
}
}
return ctx.Err()
}
func (pg Postgres) Get(ctx context.Context, ns, id string) ([]byte, error) {
table, err := pg.table(ns)
if err != nil {
return nil, err
}
row := pg.db.QueryRowContext(ctx, fmt.Sprintf(`SELECT v FROM %s WHERE id='%s';`, table, id))
if err := row.Err(); err != nil {
return nil, err
}
var v []byte
if err := row.Scan(&v); err != nil && !errors.Is(err, sql.ErrNoRows) {
return nil, err
}
return v, nil
}
func (pg Postgres) Set(ctx context.Context, ns, id string, v []byte) error {
table, err := pg.table(ns)
if err != nil {
return err
}
if v == nil {
_, err = pg.db.ExecContext(ctx, fmt.Sprintf(`DELETE FROM %s WHERE id='%s';`, table, id))
return err
}
_, err = pg.db.ExecContext(ctx, fmt.Sprintf(`INSERT INTO %s (id, v) VALUES ('%s', '%s') ON CONFLICT (id) DO UPDATE SET v = '%s'`, table, id, v, v))
return err
}
type RAM struct {
m map[string]map[string][]byte
lock *sync.RWMutex
}
func NewRAM() RAM {
return RAM{
m: make(map[string]map[string][]byte),
lock: &sync.RWMutex{},
}
}
func (ram RAM) Close() error {
return nil
}
func (ram RAM) ForEach(ctx context.Context, ns string, cb func(string, []byte) error) error {
ram.lock.RLock()
defer ram.lock.RUnlock()
for k, v := range ram.m[ns] {
if ctx.Err() != nil {
break
}
if err := cb(k, v); err != nil {
return err
}
}
return ctx.Err()
}
func (ram RAM) Get(_ context.Context, ns, id string) ([]byte, error) {
ram.lock.RLock()
defer ram.lock.RUnlock()
if _, ok := ram.m[ns]; !ok {
return nil, nil
}
return ram.m[ns][id], nil
}
func (ram RAM) Set(_ context.Context, ns, id string, v []byte) error {
ram.lock.Lock()
defer ram.lock.Unlock()
if _, ok := ram.m[ns]; !ok {
ram.m[ns] = map[string][]byte{}
}
ram.m[ns][id] = v
if v == nil {
delete(ram.m[ns], id)
}
return nil
}
type BBolt struct {
db *bbolt.DB
}
func NewTestDBIn(d string) BBolt {
d, err := ioutil.TempDir(d, "test-db-*")
if err != nil {
panic(err)
}
db, err := NewDB(path.Join(d, "bb"))
if err != nil {
panic(err)
}
return db
}
func NewDB(p string) (BBolt, error) {
db, err := bbolt.Open(p, 0600, &bbolt.Options{
Timeout: time.Second,
})
return BBolt{db: db}, err
}
func (bb BBolt) Close() error {
return bb.db.Close()
}
func (bb BBolt) ForEach(ctx context.Context, db string, cb func(string, []byte) error) error {
return bb.db.View(func(tx *bbolt.Tx) error {
bkt := tx.Bucket([]byte(db))
if bkt == nil {
return nil
}
c := bkt.Cursor()
for k, v := c.First(); k != nil && ctx.Err() == nil; k, v = c.Next() {
if err := cb(string(k), v); err != nil {
return err
}
}
return ctx.Err()
})
}
func (bb BBolt) Get(_ context.Context, db, id string) ([]byte, error) {
var b []byte
err := bb.db.View(func(tx *bbolt.Tx) error {
bkt := tx.Bucket([]byte(db))
if bkt == nil {
return nil
}
b = bkt.Get([]byte(id))
return nil
})
return b, err
}
func (bb BBolt) Set(_ context.Context, db, id string, value []byte) error {
return bb.db.Update(func(tx *bbolt.Tx) error {
bkt := tx.Bucket([]byte(db))
if bkt == nil {
var err error
bkt, err = tx.CreateBucket([]byte(db))
if err != nil {
return err
}
}
if value == nil {
return bkt.Delete([]byte(id))
}
return bkt.Put([]byte(id), value)
})
}


@@ -1,4 +1,4 @@
//go:build postgres
//go:build integration
package main
@@ -7,16 +7,47 @@ import (
"os"
"testing"
"time"
"github.com/breel-render/spoc-bot-vr/model"
)
func TestPostgres(t *testing.T) {
ctx, can := context.WithTimeout(context.Background(), time.Second*15)
func TestDriverIntegration(t *testing.T) {
ctx, can := context.WithTimeout(context.Background(), time.Second*30)
defer can()
conn := os.Getenv("INTEGRATION_POSTGRES_CONN")
pg, err := NewPostgres(ctx, conn)
driver, err := NewDriver(ctx, os.Getenv("DRIVER_CONN"))
if err != nil {
t.Fatal(err)
}
testDriver(t, pg)
defer driver.Close()
q, err := NewQueue(ctx, t.Name(), driver)
if err != nil {
t.Fatal(err)
}
qV := []byte("hello")
if err := q.Enqueue(ctx, qV); err != nil {
t.Error("q cannot enqueue:", err)
} else if reservation, v, err := q.Syn(ctx); err != nil {
t.Error("q cannot syn:", err)
} else if string(v) != string(qV) {
t.Error("q enqueued wrong:", string(v))
} else if len(reservation) == 0 {
t.Error("q didnt have reservation")
} else if err := q.Ack(ctx, reservation); err != nil {
t.Error("q cannot ack:", err)
}
s, err := NewStorage(ctx, driver)
if err != nil {
t.Fatal(err)
}
evt := model.Event{ID: "x", Name: "y"}
if err := s.UpsertEvent(ctx, evt); err != nil {
t.Error("s cannot upsert:", err)
} else if e, err := s.GetEvent(ctx, evt.ID); err != nil {
t.Error("s cannot get:", err)
} else if e != evt {
t.Error("s upserted wrong:", e)
}
}
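The integration test above exercises an Enqueue/Syn/Ack contract: `Syn` hands back a payload plus a reservation token, and the item is only considered done once that reservation is `Ack`ed. A toy in-memory sketch of those semantics (the real `Queue` is driver-backed; everything here, including `memQueue`, is hypothetical illustration):

```go
package main

import (
	"errors"
	"fmt"
)

// memQueue is a toy reservation queue: Syn moves an item from the pending
// list into a reserved map keyed by a token, and Ack discards it.
type memQueue struct {
	items    [][]byte
	reserved map[string][]byte
	nextID   int
}

func newMemQueue() *memQueue { return &memQueue{reserved: map[string][]byte{}} }

func (q *memQueue) Enqueue(v []byte) { q.items = append(q.items, v) }

func (q *memQueue) Syn() (string, []byte, error) {
	if len(q.items) == 0 {
		return "", nil, errors.New("empty")
	}
	v := q.items[0]
	q.items = q.items[1:]
	q.nextID++
	res := fmt.Sprintf("res-%d", q.nextID)
	q.reserved[res] = v
	return res, v, nil
}

func (q *memQueue) Ack(reservation string) error {
	if _, ok := q.reserved[reservation]; !ok {
		return errors.New("unknown reservation")
	}
	delete(q.reserved, reservation)
	return nil
}

func main() {
	q := newMemQueue()
	q.Enqueue([]byte("hello"))
	res, v, _ := q.Syn()
	fmt.Println(string(v), q.Ack(res)) // hello <nil>
}
```

A real implementation would also re-deliver reserved items whose reservation times out, which is presumably why the commit log mentions raising the queue timeout to 10 minutes for slow AI recaps.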


@@ -2,91 +2,23 @@ package main
import (
"context"
"errors"
"io"
"testing"
"time"
)
func TestDriverRAM(t *testing.T) {
testDriver(t, NewRAM())
func TestNewTestDriver(t *testing.T) {
t.Parallel()
NewTestDriver(t)
}
func TestFillTestdata(t *testing.T) {
func TestDriver(t *testing.T) {
t.Parallel()
ctx, can := context.WithTimeout(context.Background(), time.Second*15)
defer can()
ram := NewRAM()
if err := FillWithTestdata(ctx, ram); err != nil {
d, err := NewDriver(ctx, "")
if err != nil {
t.Fatal(err)
}
n := 0
if err := ram.ForEach(context.Background(), "m", func(_ string, _ []byte) error {
n += 1
return nil
}); err != nil {
t.Fatal(err)
}
t.Log(n)
}
func TestDriverBBolt(t *testing.T) {
testDriver(t, NewTestDBIn(t.TempDir()))
}
func testDriver(t *testing.T, d Driver) {
ctx, can := context.WithTimeout(context.Background(), time.Second*15)
defer can()
defer d.Close()
if b, err := d.Get(ctx, "m", "id"); err != nil {
t.Error("cannot get from empty:", err)
} else if b != nil {
t.Error("got fake from empty")
}
if err := d.ForEach(ctx, "m", func(string, []byte) error {
return errors.New("should have no hits")
}); err != nil {
t.Error("failed to forEach empty:", err)
}
if err := d.Set(ctx, "m", "id", []byte(`"hello world"`)); err != nil {
t.Error("cannot set from empty:", err)
}
if b, err := d.Get(ctx, "m", "id"); err != nil {
t.Error("cannot get from full:", err)
} else if string(b) != `"hello world"` {
t.Error("got fake from full")
}
if err := d.ForEach(ctx, "m", func(id string, v []byte) error {
if id != "id" {
t.Error("for each id weird:", id)
}
if string(v) != `"hello world"` {
t.Error("for each value weird:", string(v))
}
return io.EOF
}); err != io.EOF {
t.Error("failed to forEach full:", err)
}
if err := d.Set(ctx, "m", "id", nil); err != nil {
t.Error("cannot set from full:", err)
}
if err := d.ForEach(ctx, "m", func(string, []byte) error {
return errors.New("should have no hits")
}); err != nil {
t.Error("failed to forEach empty:", err)
}
if b, err := d.Get(ctx, "m", "id"); err != nil {
t.Error("cannot get from deleted:", err)
} else if b != nil {
t.Error("got fake from deleted")
}
}

go.mod (18 changed lines)

@@ -3,17 +3,25 @@ module github.com/breel-render/spoc-bot-vr
go 1.22.1
require (
github.com/go-errors/errors v1.5.1
github.com/glebarez/go-sqlite v1.21.2
github.com/google/uuid v1.6.0
github.com/lib/pq v1.10.9
github.com/nikolaydubina/llama2.go v0.7.1
github.com/tmc/langchaingo v0.1.8
go.etcd.io/bbolt v1.3.9
golang.org/x/time v0.5.0
gotest.tools v2.2.0+incompatible
)
require (
github.com/dlclark/regexp2 v1.10.0 // indirect
github.com/gage-technologies/mistral-go v1.0.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/google/go-cmp v0.6.0 // indirect
github.com/mattn/go-isatty v0.0.19 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pkoukk/tiktoken-go v0.1.6 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
golang.org/x/sys v0.16.0 // indirect
modernc.org/libc v1.22.5 // indirect
modernc.org/mathutil v1.5.0 // indirect
modernc.org/memory v1.5.0 // indirect
modernc.org/sqlite v1.23.1 // indirect
)

go.sum (38 changed lines)

@@ -2,29 +2,47 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dlclark/regexp2 v1.10.0 h1:+/GIL799phkJqYW+3YbOd8LCcbHzT0Pbo8zl70MHsq0=
github.com/dlclark/regexp2 v1.10.0/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
github.com/gage-technologies/mistral-go v1.0.0 h1:Hwk0uJO+Iq4kMX/EwbfGRUq9zkO36w7HZ/g53N4N73A=
github.com/gage-technologies/mistral-go v1.0.0/go.mod h1:tF++Xt7U975GcLlzhrjSQb8l/x+PrriO9QEdsgm9l28=
github.com/go-errors/errors v1.5.1 h1:ZwEMSLRCapFLflTpT7NKaAc7ukJ8ZPEjzlxt8rPN8bk=
github.com/go-errors/errors v1.5.1/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/glebarez/go-sqlite v1.21.2 h1:3a6LFC4sKahUunAmynQKLZceZCOzUthkRkEAl9gAXWo=
github.com/glebarez/go-sqlite v1.21.2/go.mod h1:sfxdZyhQjTM2Wry3gVYWaW072Ri1WMdWJi0k6+3382k=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/pprof v0.0.0-20221118152302-e6195bd50e26 h1:Xim43kblpZXfIBQsbuBVKCudVG457BR2GZFIz3uw3hQ=
github.com/google/pprof v0.0.0-20221118152302-e6195bd50e26/go.mod h1:dDKJzRmX4S37WGHujM7tX//fmj1uioxKzKxz3lo4HJo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/nikolaydubina/llama2.go v0.7.1 h1:ORmH1XbwFYGIOPHprkjtUPOEovlVXhnmnMjbMckaSyE=
github.com/nikolaydubina/llama2.go v0.7.1/go.mod h1:ggXhXOaDnEAgSSkcYsomqx/RLjInxe5ZAbcJ+/Y2mTM=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkoukk/tiktoken-go v0.1.6 h1:JF0TlJzhTbrI30wCvFuiw6FzP2+/bR+FIxUdgEAcUsw=
github.com/pkoukk/tiktoken-go v0.1.6/go.mod h1:9NiV+i9mJKGj1rYOT+njbv+ZwA/zJxYdewGl6qVatpg=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tmc/langchaingo v0.1.8 h1:nrImgh0aWdu3stJTHz80N60WGwPWY8HXCK10gQny7bA=
github.com/tmc/langchaingo v0.1.8/go.mod h1:iNBfS9e6jxBKsJSPWnlqNhoVWgdA3D1g5cdFJjbIZNQ=
go.etcd.io/bbolt v1.3.9 h1:8x7aARPEXiXbHmtUwAIv7eV2fQFHrLLavdiJ3uzJXoI=
go.etcd.io/bbolt v1.3.9/go.mod h1:zaO32+Ti0PK1ivdPtgMESzuzL2VPoIG1PCQNvOdo/dE=
golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.16.0 h1:xWw16ngr6ZMtmxDyKyIgsE93KNKz5HKmMa3b8ALHidU=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/time v0.5.0 h1:o7cqy6amK/52YcAKIPlM3a+Fpj35zvRj2TP+e1xFSfk=
golang.org/x/time v0.5.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
modernc.org/libc v1.22.5 h1:91BNch/e5B0uPbJFgqbxXuOnxBQjlS//icfQEGmvyjE=
modernc.org/libc v1.22.5/go.mod h1:jj+Z7dTNX8fBScMVNRAYZ/jF91K8fdT2hYMThc3YjBY=
modernc.org/mathutil v1.5.0 h1:rV0Ko/6SfM+8G+yKiyI830l3Wuz1zRutdslNoQ0kfiQ=
modernc.org/mathutil v1.5.0/go.mod h1:mZW8CKdRPY1v87qxC/wUdX5O1qDzXMP5TH3wjfpga6E=
modernc.org/memory v1.5.0 h1:N+/8c5rE6EqugZwHii4IFsaJ7MUhoWX07J5tC/iI5Ds=
modernc.org/memory v1.5.0/go.mod h1:PkUhL0Mugw21sHPeskwZW4D6VscE/GQJOnIpCnW6pSU=
modernc.org/sqlite v1.23.1 h1:nrSBg4aRQQwq59JpvGEQ15tNxoO5pX/kUjcRNwSAGQM=
modernc.org/sqlite v1.23.1/go.mod h1:OrDj17Mggn6MhE+iPbBNf7RGKODDE9NFT0f3EwDzJqk=

main.go (220 changed lines)

@@ -4,13 +4,13 @@ import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"net"
"net/http"
"os/signal"
"sort"
"strconv"
"strings"
"syscall"
@@ -36,11 +36,35 @@ func run(ctx context.Context, cfg Config) error {
select {
case <-ctx.Done():
return ctx.Err()
case err := <-processPipelines(ctx,
cfg.slackToModelPipeline,
cfg.modelToPersistencePipeline,
cfg.slackScrapePipeline,
cfg.persistenceToRecapPipeline,
):
return err
case err := <-listenAndServe(ctx, cfg):
return err
}
}
func processPipelines(ctx context.Context, first Pipeline, pipelines ...Pipeline) chan error {
ctx, can := context.WithCancel(ctx)
pipelines = append(pipelines, first)
errs := make(chan error)
for i := range pipelines {
go func(i int) {
defer can()
select {
case errs <- pipelines[i].Process(ctx):
case <-ctx.Done():
}
}(i)
}
return errs
}
func listenAndServe(ctx context.Context, cfg Config) chan error {
s := http.Server{
Addr: fmt.Sprintf(":%d", cfg.Port),
@@ -63,10 +87,10 @@ func listenAndServe(ctx context.Context, cfg Config) chan error {
func newHandler(cfg Config) http.HandlerFunc {
mux := http.NewServeMux()
mux.Handle("GET /api/v1/version", http.HandlerFunc(newHandlerGetAPIV1Version))
mux.Handle("POST /api/v1/events/slack", http.HandlerFunc(newHandlerPostAPIV1EventsSlack(cfg)))
mux.Handle("GET /api/v1/messages", http.HandlerFunc(newHandlerGetAPIV1Messages(cfg)))
mux.Handle("GET /api/v1/threads", http.HandlerFunc(newHandlerGetAPIV1Threads(cfg)))
mux.Handle("GET /api/v1/threads/{thread}", http.HandlerFunc(newHandlerGetAPIV1ThreadsThread(cfg)))
mux.Handle("PUT /api/v1/rpc/scrapeslack", http.HandlerFunc(newHandlerPutAPIV1RPCScrapeSlack(cfg)))
mux.Handle("GET /api/v1/rpc/recapevent", http.HandlerFunc(newHandlerGetAPIV1RPCRecapEvent(cfg)))
return func(w http.ResponseWriter, r *http.Request) {
if cfg.Debug {
@@ -79,7 +103,29 @@ func newHandler(cfg Config) http.HandlerFunc {
}
}
func newHandlerGetAPIV1Messages(cfg Config) http.HandlerFunc {
var Version = "undef"
func newHandlerGetAPIV1Version(w http.ResponseWriter, _ *http.Request) {
json.NewEncoder(w).Encode(map[string]any{"version": Version})
}
func newHandlerGetAPIV1RPCRecapEvent(cfg Config) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if !basicAuth(cfg, w, r) {
return
}
event := r.URL.Query().Get("id")
b, _ := json.Marshal(ModelIDs{Event: event})
if err := cfg.persistenceToRecapPipeline.reader.Enqueue(r.Context(), b); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
json.NewEncoder(w).Encode(map[string]any{"event": event})
}
}
func newHandlerPutAPIV1RPCScrapeSlack(cfg Config) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if !basicAuth(cfg, w, r) {
return
@@ -91,53 +137,17 @@ func newHandlerGetAPIV1Messages(cfg Config) http.HandlerFunc {
return
}
messages, err := cfg.storage.MessagesSince(r.Context(), since)
if err != nil {
job, _ := json.Marshal(SlackScrape{
Latest: time.Now().Unix(),
Oldest: since.Unix(),
ThreadTS: "",
Channel: r.Header.Get("slack-channel"),
Token: r.Header.Get("slack-oauth-token"),
})
if err := cfg.slackScrapePipeline.reader.Enqueue(r.Context(), job); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
json.NewEncoder(w).Encode(map[string]any{"messages": messages})
}
}
func newHandlerGetAPIV1Threads(cfg Config) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if !basicAuth(cfg, w, r) {
return
}
since, err := parseSince(r.URL.Query().Get("since"))
if err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
threads, err := cfg.storage.ThreadsSince(r.Context(), since)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
json.NewEncoder(w).Encode(map[string]any{"threads": threads})
}
}
func newHandlerGetAPIV1ThreadsThread(cfg Config) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if !basicAuth(cfg, w, r) {
return
}
thread := strings.Split(strings.Split(r.URL.Path, "/threads/")[1], "/")[0]
messages, err := cfg.storage.Thread(r.Context(), thread)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
json.NewEncoder(w).Encode(map[string]any{"thread": map[string]any{"messages": messages}})
}
}
@@ -151,12 +161,13 @@ func basicAuth(cfg Config, w http.ResponseWriter, r *http.Request) bool {
func newHandlerPostAPIV1EventsSlack(cfg Config) http.HandlerFunc {
if cfg.InitializeSlack {
return handlerPostAPIV1EventsSlackInitialize
return handlerPostAPIV1EventsSlackInitialize(cfg)
}
return _newHandlerPostAPIV1EventsSlack(cfg)
}
func handlerPostAPIV1EventsSlackInitialize(w http.ResponseWriter, r *http.Request) {
func handlerPostAPIV1EventsSlackInitialize(cfg Config) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
b, _ := io.ReadAll(r.Body)
var challenge struct {
Token string
@@ -167,14 +178,32 @@ func handlerPostAPIV1EventsSlackInitialize(w http.ResponseWriter, r *http.Reques
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
json.NewEncoder(w).Encode(map[string]any{"challenge": challenge.Challenge})
cfg.driver.ExecContext(r.Context(), `
CREATE TABLE
IF NOT EXISTS
initialization (
label TEXT,
token TEXT,
updated TIMESTAMP
)
`)
if _, err := cfg.driver.ExecContext(r.Context(), `
INSERT
INTO initialization (label, token, updated)
VALUES ('slack_events_webhook_token', $1, $2)
`, challenge.Token, time.Now().UTC()); err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
log.Println("stashed new slack initialization token", challenge.Token)
encodeResponse(w, r, map[string]any{"challenge": challenge.Challenge})
}
}
func _newHandlerPostAPIV1EventsSlack(cfg Config) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
b, _ := io.ReadAll(r.Body)
r.Body = io.NopCloser(bytes.NewReader(b))
body, _ := io.ReadAll(r.Body)
r.Body = io.NopCloser(bytes.NewReader(body))
var allowList struct {
Token string
@@ -182,7 +211,7 @@ func _newHandlerPostAPIV1EventsSlack(cfg Config) http.HandlerFunc {
Channel string
}
}
if err := json.Unmarshal(b, &allowList); err != nil {
if err := json.Unmarshal(body, &allowList); err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
} else if allowList.Token != cfg.SlackToken {
@@ -199,20 +228,12 @@ func _newHandlerPostAPIV1EventsSlack(cfg Config) http.HandlerFunc {
return
}
m, err := ParseSlack(b)
if errors.Is(err, ErrIrrelevantMessage) {
return
} else if err != nil {
http.Error(w, err.Error(), http.StatusBadRequest)
return
}
if err := cfg.storage.Upsert(r.Context(), m); err != nil {
log.Printf("failed to ingest %+v: %v", m, err)
if err := cfg.slackToModelPipeline.reader.Enqueue(r.Context(), body); err != nil {
log.Printf("failed to ingest: %v", err)
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
log.Printf("ingested %v", m.ID)
log.Printf("ingested")
}
}
@@ -243,3 +264,68 @@ func parseSince(s string) (time.Time, error) {
return time.Time{}, fmt.Errorf("failed to parse since=%q", s)
}
func encodeResponse(w http.ResponseWriter, r *http.Request, v interface{}) error {
if strings.Contains(r.Header.Get("Accept"), "text/csv") {
return encodeCSVResponse(w, v)
}
if strings.Contains(r.Header.Get("Accept"), "text/tsv") {
return encodeTSVResponse(w, v)
}
return encodeJSONResponse(w, v)
}
func encodeJSONResponse(w http.ResponseWriter, v interface{}) error {
return json.NewEncoder(w).Encode(v)
}
func encodeTSVResponse(w http.ResponseWriter, v interface{}) error {
return encodeSVResponse(w, v, "\t")
}
func encodeCSVResponse(w http.ResponseWriter, v interface{}) error {
return encodeSVResponse(w, v, ",")
}
func encodeSVResponse(w http.ResponseWriter, v interface{}, delim string) error {
b, err := json.Marshal(v)
if err != nil {
return err
}
var data map[string][]map[string]json.RawMessage
if err := json.Unmarshal(b, &data); err != nil {
return err
}
var objects []map[string]json.RawMessage
for k := range data {
objects = data[k]
}
fields := []string{}
for i := range objects {
for k := range objects[i] {
b, _ := json.Marshal(k)
fields = append(fields, string(b))
}
break
}
sort.Strings(fields)
w.Write([]byte(strings.Join(fields, delim)))
w.Write([]byte("\n"))
for _, object := range objects {
for j, field := range fields {
json.Unmarshal([]byte(field), &field)
if j > 0 {
w.Write([]byte(delim))
}
w.Write(object[field])
}
w.Write([]byte("\n"))
}
return nil
}
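`encodeSVResponse` round-trips the response through JSON to flatten a `{"key": [objects...]}` payload into one delimited header row plus one row per object, with field order taken from a sort of the first object's keys. A simplified runnable sketch of that flattening (the name `toSV` is hypothetical, and unlike the original it does not JSON-encode the header names, so headers come out unquoted):

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
	"strings"
)

// toSV flattens a {"things": [{...}, ...]} value into delimiter-separated
// rows: sorted field names as the header, then raw JSON values per object.
func toSV(v any, delim string) (string, error) {
	b, err := json.Marshal(v)
	if err != nil {
		return "", err
	}
	var data map[string][]map[string]json.RawMessage
	if err := json.Unmarshal(b, &data); err != nil {
		return "", err
	}
	var objects []map[string]json.RawMessage
	for k := range data {
		objects = data[k] // single top-level key expected
	}
	var fields []string
	for i := range objects {
		for k := range objects[i] {
			fields = append(fields, k) // field set from the first object only
		}
		break
	}
	sort.Strings(fields)
	var sb strings.Builder
	sb.WriteString(strings.Join(fields, delim) + "\n")
	for _, o := range objects {
		for j, f := range fields {
			if j > 0 {
				sb.WriteString(delim)
			}
			sb.Write(o[f]) // raw JSON, so strings keep their quotes
		}
		sb.WriteString("\n")
	}
	return sb.String(), nil
}

func main() {
	out, _ := toSV(map[string]any{"threads": []map[string]any{
		{"id": "1712911957.023359", "recap": "resolved"},
	}}, ",")
	fmt.Print(out)
}
```

Like the original, this emits raw JSON values (quoted strings) and trusts the first object to define the column set, so heterogeneous objects would drop fields silently.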


@@ -3,7 +3,6 @@ package main
import (
"bytes"
"context"
"encoding/json"
"fmt"
"io"
"net/http"
@@ -18,6 +17,7 @@ import (
)
func TestRun(t *testing.T) {
t.Parallel()
ctx, can := context.WithTimeout(context.Background(), time.Second*10)
defer can()
@@ -36,14 +36,25 @@ func TestRun(t *testing.T) {
return int(port)
}()
u := fmt.Sprintf("http://localhost:%d", port)
var err error
cfg := Config{}
cfg.DatacenterPattern = renderDatacenterPattern
cfg.AssetPattern = renderAssetPattern
cfg.EventNamePattern = renderEventNamePattern
cfg.Port = port
cfg.driver = NewRAM()
cfg.storage = NewStorage(cfg.driver)
cfg.queue = NewQueue(cfg.driver)
cfg.driver = NewTestDriver(t)
cfg.storage, _ = NewStorage(ctx, cfg.driver)
cfg.ai = NewAINoop()
cfg.SlackToken = "redacted"
cfg.SlackChannels = []string{"C06U1DDBBU4"}
cfg.slackToModelPipeline, _ = NewSlackToModelPipeline(ctx, cfg)
cfg.slackScrapePipeline, _ = NewSlackScrapePipeline(ctx, cfg)
cfg.modelToPersistencePipeline, _ = NewModelToPersistencePipeline(ctx, cfg)
cfg.persistenceToRecapPipeline, err = NewPersistenceToRecapPipeline(ctx, cfg)
if err != nil {
t.Fatal(err)
}
go func() {
if err := run(ctx, cfg); err != nil && ctx.Err() == nil {
@@ -80,75 +91,40 @@ func TestRun(t *testing.T) {
}
})
t.Run("GET /api/v1/messages", func(t *testing.T) {
resp, err := http.Get(fmt.Sprintf("%s/api/v1/messages", u))
t.Run("GET /api/v1/rpc/recapevent", func(t *testing.T) {
b, err := os.ReadFile(path.Join("testdata", "slack_events", "human_thread_message_from_opsgenie_alert.json"))
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
b, _ := io.ReadAll(resp.Body)
t.Fatalf("(%d) %s", resp.StatusCode, b)
}
var result struct {
Messages []Message
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
if err := cfg.slackToModelPipeline.reader.Enqueue(ctx, b); err != nil {
t.Fatal(err)
} else if len(result.Messages) != 1 {
t.Fatal(result.Messages)
} else {
t.Logf("%+v", result)
}
})
t.Run("GET /api/v1/threads", func(t *testing.T) {
resp, err := http.Get(fmt.Sprintf("%s/api/v1/threads", u))
b, err = os.ReadFile(path.Join("testdata", "slack_events", "opsgenie_alert.json"))
if err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
b, _ := io.ReadAll(resp.Body)
t.Fatalf("(%d) %s", resp.StatusCode, b)
}
var result struct {
Threads []string
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
t.Fatal(err)
} else if result.Threads[0] != "1712911957.023359" {
t.Fatal(result.Threads)
} else {
t.Logf("%+v", result)
}
})
t.Run("GET /api/v1/threads/1712911957.023359", func(t *testing.T) {
resp, err := http.Get(fmt.Sprintf("%s/api/v1/threads/1712911957.023359", u))
if err != nil {
if err := cfg.slackToModelPipeline.reader.Enqueue(ctx, b); err != nil {
t.Fatal(err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
b, _ := io.ReadAll(resp.Body)
t.Fatalf("(%d) %s", resp.StatusCode, b)
for ctx.Err() == nil {
if thread, _ := cfg.storage.GetThread(ctx, "1712927439.728409"); thread.Recap != "" {
break
}
select {
case <-ctx.Done():
case <-time.After(time.Millisecond * 100):
}
}
if err := ctx.Err(); err != nil {
t.Fatal("timed out waiting for recap")
}
var result struct {
Thread struct {
Messages []Message
}
}
if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
t.Fatal(err)
} else if len(result.Thread.Messages) != 1 {
t.Fatal(result.Thread)
} else {
t.Logf("%+v", result)
thread, _ := cfg.storage.GetThread(ctx, "1712927439.728409")
if thread.Recap == "" {
t.Error(thread.Recap)
}
t.Log(thread.Recap)
})
}


@@ -1,170 +0,0 @@
package main
import (
"encoding/json"
"errors"
"fmt"
"strings"
"time"
)
var (
ErrIrrelevantMessage = errors.New("message isnt relevant to spoc bot vr")
)
type Message struct {
ID string
TS uint64
Source string
Channel string
Thread string
EventName string
Event string
Plaintext string
Asset string
Resolved bool
}
func (m Message) Empty() bool {
return m == (Message{})
}
func (m Message) Time() time.Time {
return time.Unix(int64(m.TS), 0)
}
func (m Message) Serialize() []byte {
b, err := json.Marshal(m)
if err != nil {
panic(err)
}
return b
}
func MustDeserialize(b []byte) Message {
m, err := Deserialize(b)
if err != nil {
panic(err)
}
return m
}
func Deserialize(b []byte) (Message, error) {
var m Message
err := json.Unmarshal(b, &m)
return m, err
}
type (
slackMessage struct {
TS uint64 `json:"event_time"`
Event slackEvent
}
slackEvent struct {
ID string `json:"event_ts"`
Channel string
// rewrites
Nested *slackEvent `json:"message"`
PreviousMessage *slackEvent `json:"previous_message"`
// human
ParentID string `json:"thread_ts"`
Text string
Blocks []slackBlock
// bot
Bot slackBot `json:"bot_profile"`
Attachments []slackAttachment
}
slackBlock struct {
Elements []slackElement
}
slackElement struct {
Elements []slackElement
RichText string `json:"text"`
}
slackBot struct {
Name string
}
slackAttachment struct {
Color string
Title string
Text string
Fields []slackField
Actions []slackAction
}
slackField struct {
Value string
Title string
}
slackAction struct{}
)
func ParseSlack(b []byte) (Message, error) {
s, err := parseSlack(b)
if err != nil {
return Message{}, err
}
if s.Event.Bot.Name != "" {
if len(s.Event.Attachments) == 0 {
return Message{}, ErrIrrelevantMessage
} else if !strings.Contains(s.Event.Attachments[0].Title, ": Firing: ") {
return Message{}, ErrIrrelevantMessage
}
return Message{
ID: fmt.Sprintf("%s/%v", s.Event.ID, s.TS),
TS: s.TS,
Source: fmt.Sprintf(`https://renderinc.slack.com/archives/%s/p%s`, s.Event.Channel, strings.ReplaceAll(s.Event.ID, ".", "")),
Channel: s.Event.Channel,
Thread: s.Event.ID,
EventName: strings.Split(s.Event.Attachments[0].Title, ": Firing: ")[1],
Event: strings.Split(s.Event.Attachments[0].Title, ":")[0],
Plaintext: s.Event.Attachments[0].Text,
Asset: "TODO",
Resolved: !strings.HasPrefix(s.Event.Attachments[0].Color, "F"),
}, nil
}
if s.Event.ParentID == "" {
return Message{}, ErrIrrelevantMessage
}
return Message{
ID: fmt.Sprintf("%s/%v", s.Event.ParentID, s.TS),
TS: s.TS,
Source: fmt.Sprintf(`https://renderinc.slack.com/archives/%s/p%s`, s.Event.Channel, strings.ReplaceAll(s.Event.ParentID, ".", "")),
Channel: s.Event.Channel,
Thread: s.Event.ParentID,
EventName: "TODO",
Event: "TODO",
Plaintext: s.Event.Text,
Asset: "TODO",
}, nil
}
func parseSlack(b []byte) (slackMessage, error) {
var result slackMessage
err := json.Unmarshal(b, &result)
if result.Event.Nested != nil && !result.Event.Nested.Empty() {
result.Event.Blocks = result.Event.Nested.Blocks
result.Event.Bot = result.Event.Nested.Bot
result.Event.Attachments = result.Event.Nested.Attachments
result.Event.Nested = nil
}
if result.Event.PreviousMessage != nil {
if result.Event.PreviousMessage.ID != "" {
result.Event.ID = result.Event.PreviousMessage.ID
}
result.Event.PreviousMessage = nil
}
return result, err
}
func (this slackEvent) Empty() bool {
return fmt.Sprintf("%+v", this) == fmt.Sprintf("%+v", slackEvent{})
}


@@ -1,151 +0,0 @@
package main
import (
"fmt"
"os"
"path"
"testing"
)
func TestParseSlackTestdata(t *testing.T) {
cases := map[string]struct {
slackMessage slackMessage
message Message
}{
"human_thread_message_from_opsgenie_alert.json": {
slackMessage: slackMessage{
TS: 1712930706,
Event: slackEvent{
ID: "1712930706.598629",
Channel: "C06U1DDBBU4",
ParentID: "1712927439.728409",
Text: "I gotta do this",
Blocks: []slackBlock{{
Elements: []slackElement{{
Elements: []slackElement{{
RichText: "I gotta do this",
}},
}},
}},
Bot: slackBot{
Name: "",
},
Attachments: []slackAttachment{},
},
},
message: Message{
ID: "1712927439.728409/1712930706",
TS: 1712930706,
Source: "https://renderinc.slack.com/archives/C06U1DDBBU4/p1712927439728409",
Channel: "C06U1DDBBU4",
Thread: "1712927439.728409",
EventName: "TODO",
Event: "TODO",
Plaintext: "I gotta do this",
Asset: "TODO",
},
},
"opsgenie_alert.json": {
slackMessage: slackMessage{
TS: 1712927439,
Event: slackEvent{
ID: "1712927439.728409",
Channel: "C06U1DDBBU4",
Bot: slackBot{
Name: "Opsgenie for Alert Management",
},
Attachments: []slackAttachment{{
Color: "F4511E",
Title: "#11071: [Grafana]: Firing: Alertconfig Workflow Failed",
Text: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Fields: []slackField{
{Value: "P3", Title: "Priority"},
{Value: "alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb", Title: "Tags"},
{Value: "Datastores Non-Critical", Title: "Routed Teams"},
},
Actions: []slackAction{{}, {}, {}},
}},
},
},
message: Message{
ID: "1712927439.728409/1712927439",
TS: 1712927439,
Source: "https://renderinc.slack.com/archives/C06U1DDBBU4/p1712927439728409",
Channel: "C06U1DDBBU4",
Thread: "1712927439.728409",
EventName: "Alertconfig Workflow Failed",
Event: "#11071",
Plaintext: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Asset: "TODO",
},
},
"opsgenie_alert_resolved.json": {
slackMessage: slackMessage{
TS: 1712916339,
Event: slackEvent{
ID: "1712916339.000300",
Channel: "C06U1DDBBU4",
Bot: slackBot{
Name: "Opsgenie for Alert Management",
},
Attachments: []slackAttachment{{
Color: "2ecc71",
Title: "#11069: [Grafana]: Firing: Alertconfig Workflow Failed",
Text: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Fields: []slackField{
{Value: "P3", Title: "Priority"},
{Value: "alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb", Title: "Tags"},
{Value: "Datastores Non-Critical", Title: "Routed Teams"},
},
Actions: []slackAction{},
}},
},
},
message: Message{
ID: "1712916339.000300/1712916339",
TS: 1712916339,
Source: "https://renderinc.slack.com/archives/C06U1DDBBU4/p1712916339000300",
Channel: "C06U1DDBBU4",
Thread: "1712916339.000300",
EventName: "Alertconfig Workflow Failed",
Event: "#11069",
Plaintext: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Asset: "TODO",
Resolved: true,
},
},
}
for name, d := range cases {
want := d
t.Run(name, func(t *testing.T) {
b, err := os.ReadFile(path.Join("testdata", "slack_events", name))
if err != nil {
t.Fatal(err)
}
t.Run("parseSlack", func(t *testing.T) {
got, err := parseSlack(b)
if err != nil {
t.Fatal(err)
}
if fmt.Sprintf("%+v", got) != fmt.Sprintf("%+v", want.slackMessage) {
t.Errorf("wanted \n\t%+v, got\n\t%+v", want.slackMessage, got)
}
})
t.Run("ParseSlack", func(t *testing.T) {
got, err := ParseSlack(b)
if err != nil {
t.Fatal(err)
}
if got != want.message {
t.Errorf("wanted \n\t%+v, got\n\t%+v", want.message, got)
}
if time := got.Time(); time.Unix() != int64(got.TS) {
t.Error("not unix time", got.TS, time)
}
})
})
}
}

model/event.go Normal file

@@ -0,0 +1,37 @@
package model
import "time"
type Event struct {
Updated uint64
ID string
URL string
TS uint64
Name string
Asset string
Datacenter string
Team string
Resolved bool
}
func NewEvent(ID, URL string, TS uint64, Name, Asset, Datacenter, Team string, Resolved bool) Event {
return Event{
Updated: updated(),
ID: ID,
URL: URL,
TS: TS,
Name: Name,
Asset: Asset,
Datacenter: Datacenter,
Team: Team,
Resolved: Resolved,
}
}
func (e Event) Empty() bool {
return e == (Event{})
}
func updated() uint64 {
return uint64(time.Now().UnixNano() / int64(time.Millisecond))
}

model/message.go Normal file

@@ -0,0 +1,26 @@
package model
// THREAD ||--|{ MESSAGE: "populated by"
type Message struct {
Updated uint64
ID string
TS uint64
Author string
Plaintext string
ThreadID string
}
func NewMessage(ID string, TS uint64, Author, Plaintext string, ThreadID string) Message {
return Message{
Updated: updated(),
ID: ID,
TS: TS,
Author: Author,
Plaintext: Plaintext,
ThreadID: ThreadID,
}
}
func (m Message) Empty() bool {
return m == (Message{})
}

model/model.go Normal file

@@ -0,0 +1,30 @@
package model
var _ = `
erDiagram
%% thread event eventName
EVENT ||--|{ THREAD: "spawns"
THREAD ||--|{ MESSAGE: "populated by"
MESSAGE {
ID str
URL str
TS number
Plaintext str
Author str
}
THREAD {
ID str
URL str
Channel str
}
EVENT {
ID str
Name str
Asset str
Resolved bool
Datacenter str
Team str
}
`

model/thread.go Normal file

@@ -0,0 +1,27 @@
package model
// EVENT ||--|{ THREAD: "spawns"
type Thread struct {
Updated uint64
ID string
URL string
TS uint64
Channel string
EventID string
Recap string
}
func NewThread(ID, URL string, TS uint64, Channel string, EventID string) Thread {
return Thread{
Updated: updated(),
ID: ID,
URL: URL,
TS: TS,
Channel: Channel,
EventID: EventID,
}
}
func (t Thread) Empty() bool {
return t == (Thread{})
}

persistence.go Normal file

@@ -0,0 +1,67 @@
package main
import (
"context"
"encoding/json"
"fmt"
"log"
)
type ModelToPersistence struct {
pipeline Pipeline
}
type ModelIDs struct {
Event string
Message string
Thread string
}
func NewModelToPersistencePipeline(ctx context.Context, cfg Config) (Pipeline, error) {
reader, err := NewQueue(ctx, "new_models", cfg.driver)
if err != nil {
return Pipeline{}, err
}
writer, err := NewQueue(ctx, "new_persistence", cfg.driver)
if err != nil {
return Pipeline{}, err
}
return Pipeline{
writer: writer,
reader: reader,
process: newModelToPersistenceProcess(cfg, cfg.storage),
}, nil
}
func newModelToPersistenceProcess(cfg Config, storage Storage) processFunc {
return func(ctx context.Context, models []byte) ([]byte, error) {
var m Models
if err := json.Unmarshal(models, &m); err != nil {
return nil, fmt.Errorf("received non models payload: %w", err)
}
if m.Event.Empty() {
} else if err := storage.UpsertEvent(ctx, m.Event); err != nil {
return nil, fmt.Errorf("failed to persist event: %w", err)
}
if m.Thread.Empty() {
} else if err := storage.UpsertThread(ctx, m.Thread); err != nil {
return nil, fmt.Errorf("failed to persist thread: %w", err)
}
if m.Message.Empty() {
} else if err := storage.UpsertMessage(ctx, m.Message); err != nil {
return nil, fmt.Errorf("failed to persist message: %w", err)
}
if cfg.Debug {
log.Printf("persisted models")
}
return json.Marshal(ModelIDs{
Event: m.Event.ID,
Thread: m.Thread.ID,
Message: m.Message.ID,
})
}
}

persistence_test.go Normal file

@@ -0,0 +1,63 @@
package main
import (
"context"
"encoding/json"
"testing"
"time"
"github.com/breel-render/spoc-bot-vr/model"
)
func TestModelToPersistenceProcessor(t *testing.T) {
t.Parallel()
ctx, can := context.WithTimeout(context.Background(), time.Second*10)
defer can()
d := NewTestDriver(t)
s, _ := NewStorage(ctx, d)
process := newModelToPersistenceProcess(Config{}, s)
inputModels := Models{
Event: model.Event{ID: "event", Asset: "event-asset"},
//Thread: {ID: "thread", Channel: "thread-channel"},
Message: model.Message{ID: "message", Plaintext: "message-plaintext"},
}
input, _ := json.Marshal(inputModels)
var outputModelIDs ModelIDs
var n int
if output, err := process(ctx, input); err != nil {
t.Fatal(err)
} else if err := json.Unmarshal(output, &outputModelIDs); err != nil {
t.Fatal(err)
} else if outputModelIDs != (ModelIDs{Event: "event", Message: "message"}) {
t.Error(outputModelIDs)
}
if row := d.QueryRowContext(ctx, `SELECT COUNT(*) FROM events`); row.Err() != nil {
t.Error("cant count events:", row.Err())
} else if err := row.Scan(&n); err != nil {
t.Error("cant count events:", err)
} else if n != 1 {
t.Error("bad event count:", n)
}
if row := d.QueryRowContext(ctx, `SELECT COUNT(*) FROM threads`); row.Err() != nil {
t.Error("cant count threads:", row.Err())
} else if err := row.Scan(&n); err != nil {
t.Error("cant count threads:", err)
} else if n != 0 {
t.Error("bad thread count:", n)
}
if row := d.QueryRowContext(ctx, `SELECT COUNT(*) FROM messages`); row.Err() != nil {
t.Error("cant count messages:", row.Err())
} else if err := row.Scan(&n); err != nil {
t.Error("cant count messages:", err)
} else if n != 1 {
t.Error("bad message count:", n)
}
}

pipeline.go Normal file

@@ -0,0 +1,55 @@
package main
import (
"context"
"log"
)
type (
Pipeline struct {
writer Queue
reader Queue
process processFunc
}
processFunc func(context.Context, []byte) ([]byte, error)
)
func NewPipeline(writer, reader Queue, process processFunc) Pipeline {
return Pipeline{
writer: writer,
reader: reader,
process: process,
}
}
func (p Pipeline) Process(ctx context.Context) error {
ctx, can := context.WithCancel(ctx)
defer can()
err := p.processUntilErr(ctx)
if err != nil {
log.Printf("pipeline failed to process: %v", err)
}
return err
}
func (p Pipeline) processUntilErr(ctx context.Context) error {
for ctx.Err() == nil {
reservation, read, err := p.reader.Syn(ctx)
if err != nil {
return err
}
processed, err := p.process(ctx, read)
if err != nil {
return err
}
if processed == nil {
} else if err := p.writer.Enqueue(ctx, processed); err != nil {
return err
}
if err := p.reader.Ack(ctx, reservation); err != nil {
return err
}
}
return ctx.Err()
}
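The `processUntilErr` loop above reads, processes, conditionally writes, then acks. A minimal channel-backed sketch of that control flow, assuming toy `chanQueue`/`runOnce` stand-ins rather than the SQL-backed `Queue`:

```go
package main

import "fmt"

// chanQueue is a toy stand-in for the SQL-backed Queue: Syn hands out
// the next payload and Ack is a no-op, so this only illustrates the
// control flow of Pipeline.processUntilErr.
type chanQueue struct{ ch chan []byte }

func (q chanQueue) Enqueue(b []byte) { q.ch <- b }
func (q chanQueue) Syn() []byte      { return <-q.ch }
func (q chanQueue) Ack()             {}

func runOnce(reader, writer chanQueue, process func([]byte) []byte) {
	read := reader.Syn() // reserve the next payload
	if out := process(read); out != nil {
		writer.Enqueue(out) // only forward non-empty results
	}
	reader.Ack() // drop the reservation
}

func main() {
	in := chanQueue{ch: make(chan []byte, 1)}
	out := chanQueue{ch: make(chan []byte, 1)}
	in.Enqueue([]byte("hello"))
	runOnce(in, out, func(b []byte) []byte { return append(b, " world"...) })
	fmt.Println(string(out.Syn())) // hello world
}
```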

pipeline_test.go Normal file

@@ -0,0 +1,92 @@
package main
import (
"context"
"sync/atomic"
"testing"
"time"
)
func TestPipelineDoesntPushEmptyMessage(t *testing.T) {
t.Parallel()
ctx, can := context.WithTimeout(context.Background(), time.Second*10)
defer can()
output, _ := NewQueue(ctx, "output", NewTestDriver(t))
input, _ := NewQueue(ctx, "input", NewTestDriver(t))
var calls atomic.Int64 // written by the pipeline goroutine, read by the poll loop below
process := func(_ context.Context, v []byte) ([]byte, error) {
calls.Add(1)
return nil, nil
}
if err := input.Enqueue(ctx, []byte("hello")); err != nil {
t.Error(err)
}
ing := NewPipeline(output, input, process)
go func() {
defer can()
if err := ing.Process(ctx); err != nil && ctx.Err() == nil {
t.Error(err) // t.Fatal must not be called from a non-test goroutine
}
}()
for ctx.Err() == nil {
if calls.Load() != 0 {
break
}
select {
case <-ctx.Done():
case <-time.After(time.Millisecond * 100):
}
}
if r, _, _ := output.syn(ctx); len(r) != 0 {
t.Error("something was pushed to out queue even though processor didnt emit content")
}
}
func TestPipeline(t *testing.T) {
t.Parallel()
ctx, can := context.WithTimeout(context.Background(), time.Second*10)
defer can()
output, err := NewQueue(ctx, "output", NewTestDriver(t))
if err != nil {
t.Fatal(err)
}
input, err := NewQueue(ctx, "input", NewTestDriver(t))
if err != nil {
t.Fatal(err)
}
found := map[string]struct{}{}
process := func(_ context.Context, v []byte) ([]byte, error) {
found[string(v)] = struct{}{}
return []byte("world"), nil
}
if err := input.Enqueue(ctx, []byte("hello")); err != nil {
t.Error(err)
}
ing := NewPipeline(output, input, process)
go func() {
defer can()
if err := ing.Process(ctx); err != nil && ctx.Err() == nil {
t.Error(err) // t.Fatal must not be called from a non-test goroutine
}
}()
if r, p, err := output.Syn(ctx); err != nil {
t.Error(err)
} else if string(p) != "world" {
t.Errorf("Syn() = (%q, %q, %v)", r, p, err)
} else if err := output.Ack(ctx, r); err != nil {
t.Error(err)
}
if len(found) != 1 {
t.Error(found)
}
}

queue.go

@@ -2,57 +2,138 @@ package main
import (
"context"
"fmt"
"strings"
"time"
"github.com/go-errors/errors"
"github.com/google/uuid"
)
type Queue struct {
driver Driver
topic string
}
func NewQueue(driver Driver) Queue {
return Queue{driver: driver}
func NewNoopQueue() Queue {
return Queue{}
}
func (q Queue) Push(ctx context.Context, m Message) error {
return q.driver.Set(ctx, "q", m.ID, m.Serialize())
func NewQueue(ctx context.Context, topic string, driver Driver) (Queue, error) {
if _, err := driver.ExecContext(ctx, `
CREATE TABLE IF NOT EXISTS queue (
id TEXT PRIMARY KEY,
topic TEXT NOT NULL,
updated INTEGER NOT NULL,
reservation TEXT,
payload TEXT
);
`); err != nil {
return Queue{}, fmt.Errorf("failed to create table: %w", err)
}
return Queue{topic: topic, driver: driver}, nil
}
func (q Queue) PeekFirst(ctx context.Context) (Message, error) {
for {
m, err := q.peekFirst(ctx)
func (q Queue) Enqueue(ctx context.Context, b []byte) error {
if q.driver.DB == nil {
return nil
}
result, err := q.driver.ExecContext(ctx, `
INSERT INTO queue (id, topic, updated, payload) VALUES ($1, $2, $3, $4)
`,
uuid.New().String(),
q.topic,
time.Now().Unix(),
b,
)
if err != nil {
return m, err
return err
}
if n, err := result.RowsAffected(); err != nil {
return err
} else if n != 1 {
return fmt.Errorf("insert into queue %s affected %v rows", b, n)
}
return nil
}
if !m.Empty() {
return m, nil
func (q Queue) Syn(ctx context.Context) (string, []byte, error) {
if q.driver.DB == nil {
return "", nil, nil
}
for {
reservation, m, err := q.syn(ctx)
if reservation != nil || err != nil {
return string(reservation), m, err
}
select {
case <-ctx.Done():
return Message{}, ctx.Err()
case <-time.After(time.Second):
return "", nil, ctx.Err()
case <-time.After(time.Millisecond * 500):
}
}
}
func (q Queue) Ack(ctx context.Context, id string) error {
return q.driver.Set(ctx, "q", id, nil)
func (q Queue) syn(ctx context.Context) ([]byte, []byte, error) {
now := time.Now().Unix()
reservation := []byte(uuid.New().String())
var payload []byte
if result, err := q.driver.ExecContext(ctx, `
UPDATE queue
SET
updated = $1, reservation = $2
WHERE
id IN (
SELECT id
FROM queue
WHERE
topic = $3
AND (
reservation IS NULL
OR $4 - updated > 600
)
LIMIT 1
)
`, now, reservation, q.topic, now); err != nil {
return nil, nil, fmt.Errorf("failed to assign reservation: %w", err)
} else if n, err := result.RowsAffected(); err != nil {
return nil, nil, fmt.Errorf("failed to assign reservation: no count: %w", err)
} else if n == 0 {
return nil, nil, nil
}
func (q Queue) peekFirst(ctx context.Context) (Message, error) {
var m Message
subctx, subcan := context.WithCancel(ctx)
defer subcan()
err := q.driver.ForEach(subctx, "q", func(_ string, value []byte) error {
m = MustDeserialize(value)
subcan()
row := q.driver.QueryRowContext(ctx, `
SELECT payload
FROM queue
WHERE reservation=$1
LIMIT 1
`, reservation)
if err := row.Err(); err != nil {
return nil, nil, fmt.Errorf("failed to query reservation: %w", err)
} else if err := row.Scan(&payload); err != nil && !strings.Contains(err.Error(), "no rows in result") {
return nil, nil, fmt.Errorf("failed to parse reservation: %w", err)
}
return reservation, payload, nil
}
func (q Queue) Ack(ctx context.Context, reservation string) error {
return q.ack(ctx, []byte(reservation))
}
func (q Queue) ack(ctx context.Context, reservation []byte) error {
if q.driver.DB == nil {
return nil
})
if errors.Is(err, subctx.Err()) {
err = nil
}
return m, err
result, err := q.driver.ExecContext(ctx, `
DELETE FROM queue
WHERE reservation=$1
`, reservation)
if err != nil {
return err
}
if n, _ := result.RowsAffected(); n != 1 {
return fmt.Errorf("failed to ack %s: %v rows affected", reservation, n)
}
return err
}
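The SQL above implements a reservation-style queue: `syn` atomically claims one unreserved row (or one whose reservation is older than 600 seconds) and `Ack` deletes the claimed row. A minimal in-memory sketch of the same claim-then-delete protocol, with invented `memQueue`/`item` types and no reservation expiry:

```go
package main

import "fmt"

// item is one queued payload plus its reservation token ("" = unclaimed).
type item struct {
	reservation string
	payload     string
}

// memQueue sketches the reservation protocol of the SQL queue: Syn
// claims one unreserved item and returns a token, Ack deletes the
// claimed item. The real queue also expires reservations after 600s.
type memQueue struct {
	items map[string]*item // keyed by id
	next  int
}

func (q *memQueue) Enqueue(payload string) {
	q.next++
	q.items[fmt.Sprint(q.next)] = &item{payload: payload}
}

func (q *memQueue) Syn() (reservation, payload string, ok bool) {
	for id, it := range q.items {
		if it.reservation == "" {
			it.reservation = "r-" + id
			return it.reservation, it.payload, true
		}
	}
	return "", "", false
}

func (q *memQueue) Ack(reservation string) {
	for id, it := range q.items {
		if it.reservation == reservation {
			delete(q.items, id)
			return
		}
	}
}

func main() {
	q := &memQueue{items: map[string]*item{}}
	q.Enqueue("hello")
	r, p, _ := q.Syn()
	fmt.Println(p) // hello
	q.Ack(r)
	_, _, ok := q.Syn() // queue is empty again
	fmt.Println(ok)     // false
}
```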


@@ -8,27 +8,67 @@ import (
)
func TestQueue(t *testing.T) {
t.Parallel()
ctx, can := context.WithTimeout(context.Background(), time.Second*10)
defer can()
q := NewQueue(NewRAM())
for i := 0; i < 39; i++ {
if err := q.Push(ctx, Message{ID: strconv.Itoa(i), TS: uint64(i)}); err != nil {
t.Fatal(i, err)
}
q, err := NewQueue(ctx, "", NewTestDriver(t))
if err != nil {
t.Fatal(err)
}
qOther, _ := NewQueue(ctx, "other", q.driver)
found := map[uint64]struct{}{}
for i := 0; i < 39; i++ {
if m, err := q.PeekFirst(ctx); err != nil {
t.Fatal(i, err)
} else if _, ok := found[m.TS]; ok {
t.Error(i, m.TS)
} else if err := q.Ack(ctx, m.ID); err != nil {
t.Fatal(i, err)
if reservation, _, err := q.syn(ctx); reservation != nil {
t.Errorf("able to syn before any enqueues created: %v", err)
} else {
found[m.TS] = struct{}{}
t.Logf("sync before enqueues: %v", err)
}
t.Run("enqueue", func(t *testing.T) {
for i := 0; i < 39; i++ {
if err := q.Enqueue(ctx, []byte(strconv.Itoa(i))); err != nil {
t.Fatal(i, err)
}
}
})
if err := qOther.Enqueue(ctx, []byte(strconv.Itoa(100))); err != nil {
t.Fatal(err)
}
t.Run("syn ack", func(t *testing.T) {
found := map[string]struct{}{}
for i := 0; i < 39; i++ {
if reservation, b, err := q.Syn(ctx); err != nil {
t.Fatal(i, "syn err", err)
} else if _, ok := found[string(b)]; ok {
t.Errorf("syn'd %q twice (%+v)", b, found)
} else if err := q.Ack(ctx, reservation); err != nil {
t.Fatal(i, "failed to ack", err)
} else {
found[string(b)] = struct{}{}
}
}
})
if reservation, _, err := q.syn(ctx); reservation != nil {
t.Errorf("able to syn 1 more message than created: %v", err)
} else if reservation, _, err := qOther.syn(ctx); reservation == nil {
t.Errorf("unable to syn from other topic: %v", err)
} else {
t.Logf("empty q.syn = %v", err)
}
t.Run("noop", func(t *testing.T) {
q := NewNoopQueue()
if err := q.Enqueue(nil, nil); err != nil {
t.Error(err)
}
if _, _, err := q.Syn(nil); err != nil {
t.Error(err)
}
if err := q.Ack(nil, ""); err != nil {
t.Error(err)
}
})
}

recap.go Normal file

@@ -0,0 +1,87 @@
package main
import (
"context"
"encoding/json"
"fmt"
"log"
"strings"
)
type PersistenceToRecap struct {
pipeline Pipeline
}
func NewPersistenceToRecapPipeline(ctx context.Context, cfg Config) (Pipeline, error) {
reader, err := NewQueue(ctx, "new_persistence", cfg.driver)
if err != nil {
return Pipeline{}, err
}
writer := NewNoopQueue()
return Pipeline{
writer: writer,
reader: reader,
process: newPersistenceToRecapProcess(cfg),
}, nil
}
func newPersistenceToRecapProcess(cfg Config) processFunc {
return func(ctx context.Context, modelIDs []byte) ([]byte, error) {
var m ModelIDs
if err := json.Unmarshal(modelIDs, &m); err != nil {
return nil, fmt.Errorf("received non model ids payload: %w", err)
}
if m.Event == "" {
} else if event, err := cfg.storage.GetEvent(ctx, m.Event); err != nil {
return nil, err
} else if !event.Resolved {
} else if err := func() error {
threads, err := cfg.storage.GetEventThreads(ctx, event.ID)
if err != nil {
return err
}
for _, thread := range threads {
messages, err := cfg.storage.GetThreadMessages(ctx, thread.ID)
if err != nil {
return err
} else if len(messages) < 2 {
continue
}
prompt := []string{
cfg.RecapPromptIntro,
"---",
messages[0].Plaintext,
"---",
cfg.RecapPrompt,
"---",
}
for _, message := range messages[1:] {
prompt = append(prompt, fmt.Sprintf("%s\n%s", message.Author, message.Plaintext))
}
recap, err := cfg.ai.Do(ctx, strings.Join(prompt, "\n\n"))
if err != nil {
return err
}
thread.Recap = recap
if err := cfg.storage.UpsertThread(ctx, thread); err != nil {
return err
}
log.Println("recapped", thread.ID)
if cfg.Debug {
log.Printf("Recapped %q as %q from %q/%q and %+v", thread.ID, thread.Recap, cfg.RecapPromptIntro, cfg.RecapPrompt, messages)
}
}
return nil
}(); err != nil {
return nil, err
}
if cfg.Debug {
log.Printf("persisted recap")
}
return nil, nil
}
}
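The recap prompt above is assembled as intro / `---` / opening message / `---` / instruction / `---`, followed by each reply as author, newline, text, all joined with blank lines. A standalone sketch (the `buildPrompt` helper and the example prompt strings are invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// buildPrompt mirrors how newPersistenceToRecapProcess assembles its
// LLM prompt: intro, the thread's opening message, the recap
// instruction (separated by "---"), then each reply as
// "author\ntext", all joined by blank lines.
func buildPrompt(intro, instruction string, plaintexts, authors []string) string {
	prompt := []string{intro, "---", plaintexts[0], "---", instruction, "---"}
	for i, text := range plaintexts[1:] {
		prompt = append(prompt, fmt.Sprintf("%s\n%s", authors[i+1], text))
	}
	return strings.Join(prompt, "\n\n")
}

func main() {
	got := buildPrompt(
		"You summarize incident threads.", // assumed stand-in for $RECAP_PROMPT_INTRO
		"Recap the discussion below.",     // assumed stand-in for $RECAP_PROMPT
		[]string{"an alert has fired", "hello world"},
		[]string{"bot", "me"},
	)
	fmt.Println(got)
}
```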

recap_test.go Normal file

@@ -0,0 +1,50 @@
package main
import (
"context"
"encoding/json"
"testing"
"time"
"github.com/breel-render/spoc-bot-vr/model"
)
func TestNewPersistenceToRecapProcess(t *testing.T) {
ctx, can := context.WithTimeout(context.Background(), time.Second*10)
defer can()
d := NewTestDriver(t)
s, _ := NewStorage(ctx, d)
cfg := Config{
driver: d,
storage: s,
ai: NewAINoop(),
Debug: true,
}
proc := newPersistenceToRecapProcess(cfg)
if err := s.UpsertEvent(ctx, model.NewEvent("Event", "", 0, "", "", "", "", true)); err != nil {
t.Fatal(err)
} else if err := s.UpsertThread(ctx, model.NewThread("Thread", "", 0, "", "Event")); err != nil {
t.Fatal(err)
} else if err := s.UpsertMessage(ctx, model.NewMessage("Root", 0, "bot", "an alert has fired", "Thread")); err != nil {
t.Fatal(err)
} else if err := s.UpsertMessage(ctx, model.NewMessage("Message", 0, "me", "hello world", "Thread")); err != nil {
t.Fatal(err)
}
b, _ := json.Marshal(ModelIDs{Event: "Event"})
if _, err := proc(ctx, b); err != nil {
t.Error(err)
}
if thread, err := s.GetThread(ctx, "Thread"); err != nil {
t.Error(err)
} else if thread.Recap == "" {
t.Error("no recap:", thread.Recap)
} else {
t.Logf("%+v", thread)
}
}

slack.go Normal file

@@ -0,0 +1,278 @@
package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"regexp"
"strconv"
"strings"
"time"
"github.com/breel-render/spoc-bot-vr/model"
)
var (
ErrIrrelevantMessage = errors.New("message isnt relevant to spoc bot vr")
)
type SlackToModel struct {
pipeline Pipeline
}
type Models struct {
Event model.Event
Message model.Message
Thread model.Thread
}
func NewSlackToModelPipeline(ctx context.Context, cfg Config) (Pipeline, error) {
reader, err := NewQueue(ctx, "slack_event", cfg.driver)
if err != nil {
return Pipeline{}, err
}
writer, err := NewQueue(ctx, "new_models", cfg.driver)
if err != nil {
return Pipeline{}, err
}
return Pipeline{
writer: writer,
reader: reader,
process: newSlackToModelProcess(cfg),
}, nil
}
func newSlackToModelProcess(cfg Config) processFunc {
return func(ctx context.Context, slack []byte) ([]byte, error) {
s, err := parseSlack(slack)
if cfg.Debug {
log.Printf("%v: %s => %+v", err, slack, s)
}
if errors.Is(err, ErrIrrelevantMessage) {
return nil, nil
} else if err != nil {
return nil, fmt.Errorf("failed to deserialize slack %v", err)
}
for pattern, ptr := range map[string]*string{
cfg.AssetPattern: &s.Asset,
cfg.DatacenterPattern: &s.Datacenter,
cfg.EventNamePattern: &s.EventName,
} {
*ptr = withPattern(pattern, *ptr)
}
event := model.Event{}
if s.Event != "" && s.Source != "" && s.TS > 0 && s.EventName != "" {
event = model.NewEvent(s.Event, s.Source, s.TS, s.EventName, s.Asset, s.Datacenter, s.Team, s.Resolved)
}
message := model.Message{}
if s.ID != "" && s.Source != "" && s.TS > 0 && s.Thread != "" {
message = model.NewMessage(s.ID, s.TS, s.Author, s.Plaintext, s.Thread)
}
thread := model.Thread{}
if s.Thread != "" && s.Source != "" && s.TS > 0 && s.Event != "" {
thread = model.NewThread(s.Thread, s.Source, s.TS, s.Channel, s.Event)
}
if cfg.Debug {
log.Printf("parsed slack message into models")
}
return json.Marshal(Models{
Event: event,
Message: message,
Thread: thread,
})
}
}
func withPattern(pattern string, given string) string {
r := regexp.MustCompile(pattern)
match := r.FindStringSubmatch(given)
if match == nil {
// no match: avoid indexing a nil submatch slice below
return ""
}
parsed := match[0]
for i, name := range r.SubexpNames() {
if i > 0 && name != "" {
// prefer the last named capture group over the whole match
parsed = match[i]
}
}
return parsed
}
type (
parsedSlackMessage struct {
ID string
TS uint64
Source string
Channel string
Thread string
EventName string
Event string
Plaintext string
Asset string
Resolved bool
Datacenter string
Author string
Team string
}
slackMessage struct {
slackEvent
Type string
TS uint64 `json:"event_time"`
Event slackEvent
MessageTS string `json:"ts"`
}
slackEvent struct {
ID string `json:"event_ts"`
Channel string
// rewrites
Nested *slackEvent `json:"message"`
PreviousMessage *slackEvent `json:"previous_message"`
// human
ParentID string `json:"thread_ts"`
Text string
Blocks []slackBlock
User string
// bot
Bot slackBot `json:"bot_profile"`
Attachments []slackAttachment
}
slackBlock struct {
Elements []slackElement
}
slackElement struct {
Elements []slackElement
RichText string `json:"text"`
}
slackBot struct {
Name string
}
slackAttachment struct {
Color string
Title string
Text string
Fields []slackField
Actions []slackAction
}
slackField struct {
Value string
Title string
}
slackAction struct{}
)
func parseSlack(b []byte) (parsedSlackMessage, error) {
s, err := _parseSlack(b)
if err != nil {
return parsedSlackMessage{}, err
}
/*
if ch != "" {
s.Event.Channel = ch
}
*/
if s.Event.Bot.Name != "" {
if len(s.Event.Attachments) == 0 {
return parsedSlackMessage{}, ErrIrrelevantMessage
} else if !strings.Contains(s.Event.Attachments[0].Title, ": Firing: ") {
return parsedSlackMessage{}, ErrIrrelevantMessage
}
var tagsField string
var teamField string
for _, field := range s.Event.Attachments[0].Fields {
switch field.Title {
case "Tags":
tagsField = field.Value
case "Routed Teams":
teamField = field.Value
}
}
return parsedSlackMessage{
ID: fmt.Sprintf("%s/%v", s.Event.ID, s.TS),
TS: s.TS,
Source: fmt.Sprintf(`https://renderinc.slack.com/archives/%s/p%s`, s.Event.Channel, strings.ReplaceAll(s.Event.ID, ".", "")),
Channel: s.Event.Channel,
Thread: s.Event.ID,
EventName: strings.Split(s.Event.Attachments[0].Title, ": Firing: ")[1],
Event: strings.TrimPrefix(strings.Split(s.Event.Attachments[0].Title, ":")[0], "#"),
Plaintext: s.Event.Attachments[0].Text,
Asset: s.Event.Attachments[0].Text,
Resolved: !strings.HasPrefix(s.Event.Attachments[0].Color, "F"),
Datacenter: tagsField,
Author: s.Event.Bot.Name,
Team: teamField,
}, nil
}
if s.Event.ParentID == "" {
return parsedSlackMessage{}, ErrIrrelevantMessage
}
return parsedSlackMessage{
ID: fmt.Sprintf("%s/%v", s.Event.ParentID, s.TS),
TS: s.TS,
Source: fmt.Sprintf(`https://renderinc.slack.com/archives/%s/p%s`, s.Event.Channel, strings.ReplaceAll(s.Event.ParentID, ".", "")),
Channel: s.Event.Channel,
Thread: s.Event.ParentID,
EventName: "",
Event: "",
Plaintext: s.Event.Text,
Asset: "",
Datacenter: "",
Author: s.Event.User,
}, nil
}
func _parseSlack(b []byte) (slackMessage, error) {
var wrapper ChannelWrapper
if err := json.Unmarshal(b, &wrapper); err == nil && len(wrapper.V) > 0 {
b = wrapper.V
}
var result slackMessage
err := json.Unmarshal(b, &result)
switch result.Type {
case "message":
result.Event = result.slackEvent
result.TS, _ = strconv.ParseUint(strings.Split(result.MessageTS, ".")[0], 10, 64)
result.Event.ID = result.MessageTS
}
if result.Event.Nested != nil && !result.Event.Nested.Empty() {
result.Event.Blocks = result.Event.Nested.Blocks
result.Event.Bot = result.Event.Nested.Bot
result.Event.Attachments = result.Event.Nested.Attachments
result.Event.Nested = nil
}
if result.Event.PreviousMessage != nil {
if result.Event.PreviousMessage.ID != "" {
result.Event.ID = result.Event.PreviousMessage.ID
}
result.Event.PreviousMessage = nil
}
if wrapper.Channel != "" {
result.Event.Channel = wrapper.Channel
}
return result, err
}
func (this slackEvent) Empty() bool {
return fmt.Sprintf("%+v", this) == fmt.Sprintf("%+v", slackEvent{})
}
func (this parsedSlackMessage) Time() time.Time {
return time.Unix(int64(this.TS), 0)
}
type ChannelWrapper struct {
Channel string
V json.RawMessage
}
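`withPattern` returns the value of the last named capture group when the pattern defines one, otherwise the whole match. A standalone sketch of that selection rule with a nil-match guard; the example patterns are invented, not the repo's `render*Pattern` constants:

```go
package main

import (
	"fmt"
	"regexp"
)

// lastNamedGroup mirrors withPattern: prefer the last named capture
// group if the pattern defines one, else return the whole match;
// return "" when nothing matches.
func lastNamedGroup(pattern, given string) string {
	r := regexp.MustCompile(pattern)
	match := r.FindStringSubmatch(given)
	if match == nil {
		return ""
	}
	parsed := match[0]
	for i, name := range r.SubexpNames() {
		if i > 0 && name != "" {
			parsed = match[i]
		}
	}
	return parsed
}

func main() {
	// hypothetical datacenter pattern, not the repo's renderDatacenterPattern
	fmt.Println(lastNamedGroup(`datacenter:(?P<dc>[a-z]+)`, "tags datacenter:fra more")) // fra
	fmt.Println(lastNamedGroup(`fra|ord`, "region ord"))                                 // ord
}
```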

slack_test.go Normal file

@@ -0,0 +1,303 @@
package main
import (
"context"
"encoding/json"
"os"
"path"
"testing"
"time"
"github.com/breel-render/spoc-bot-vr/model"
"gotest.tools/assert"
)
func TestSlackToModelPipeline(t *testing.T) {
t.Parallel()
ctx, can := context.WithTimeout(context.Background(), time.Second*5)
defer can()
pipeline, err := NewSlackToModelPipeline(ctx, Config{
driver: NewTestDriver(t),
AssetPattern: renderAssetPattern,
DatacenterPattern: renderDatacenterPattern,
EventNamePattern: renderEventNamePattern,
})
if err != nil {
t.Fatal(err)
}
go func() {
if err := pipeline.Process(ctx); err != nil && ctx.Err() == nil {
t.Error(err) // t.Fatal must not be called from a non-test goroutine
}
}()
want := Models{
Event: model.NewEvent(
"11071",
"https://renderinc.slack.com/archives/C06U1DDBBU4/p1712927439728409",
1712927439,
"Alertconfig Workflow Failed",
"",
"",
"Datastores Non-Critical",
true,
),
Message: model.NewMessage(
"1712927439.728409/1712927439",
1712927439,
"Opsgenie for Alert Management",
"At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
"1712927439.728409",
),
Thread: model.NewThread(
"1712927439.728409",
"https://renderinc.slack.com/archives/C06U1DDBBU4/p1712927439728409",
1712927439,
"C06U1DDBBU4",
"11071",
),
/*
ID: "1712927439.728409/1712927439",
TS: 1712927439,
Source: "https://renderinc.slack.com/archives/C06U1DDBBU4/p1712927439728409",
Channel: "C06U1DDBBU4",
Thread: "1712927439.728409",
EventName: "",
Event: "11071",
Plaintext: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Asset: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Datacenter: "alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb",
*/
}
b, _ := os.ReadFile("testdata/slack_events/opsgenie_alert.json")
if err := pipeline.reader.Enqueue(ctx, b); err != nil {
t.Fatal("failed to enqueue", err)
}
var got Models
if _, b2, err := pipeline.writer.Syn(ctx); err != nil {
t.Fatal("failed to syn", err)
} else if err := json.Unmarshal(b2, &got); err != nil {
t.Fatal("failed to parse outqueue:", err)
} else {
want.Event.Updated = 0
want.Message.Updated = 0
want.Thread.Updated = 0
got.Event.Updated = 0
got.Message.Updated = 0
got.Thread.Updated = 0
assert.DeepEqual(t, want, got)
}
}
func TestParseSlackTestdata(t *testing.T) {
t.Parallel()
cases := map[string]struct {
slackMessage slackMessage
message parsedSlackMessage
}{
"human_thread_message_from_opsgenie_alert.json": {
slackMessage: slackMessage{
TS: 1712930706,
Event: slackEvent{
ID: "1712930706.598629",
Channel: "C06U1DDBBU4",
ParentID: "1712927439.728409",
Text: "I gotta do this",
Blocks: []slackBlock{{
Elements: []slackElement{{
Elements: []slackElement{{
RichText: "I gotta do this",
}},
}},
}},
Bot: slackBot{
Name: "",
},
Attachments: []slackAttachment{},
},
},
message: parsedSlackMessage{
ID: "1712927439.728409/1712930706",
TS: 1712930706,
Source: "https://renderinc.slack.com/archives/C06U1DDBBU4/p1712927439728409",
Channel: "C06U1DDBBU4",
Thread: "1712927439.728409",
EventName: "",
Event: "",
Plaintext: "I gotta do this",
Asset: "",
Author: "U06868T6ADV",
},
},
"opsgenie_alert.json": {
slackMessage: slackMessage{
TS: 1712927439,
Event: slackEvent{
ID: "1712927439.728409",
Channel: "C06U1DDBBU4",
Bot: slackBot{
Name: "Opsgenie for Alert Management",
},
Attachments: []slackAttachment{{
Color: "2ecc71",
Title: "#11071: [Grafana]: Firing: Alertconfig Workflow Failed",
Text: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Fields: []slackField{
{Value: "P3", Title: "Priority"},
{Value: "alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb", Title: "Tags"},
{Value: "Datastores Non-Critical", Title: "Routed Teams"},
},
Actions: []slackAction{{}, {}, {}},
}},
},
},
message: parsedSlackMessage{
ID: "1712927439.728409/1712927439",
TS: 1712927439,
Source: "https://renderinc.slack.com/archives/C06U1DDBBU4/p1712927439728409",
Channel: "C06U1DDBBU4",
Thread: "1712927439.728409",
EventName: "Alertconfig Workflow Failed",
Event: "11071",
Plaintext: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Asset: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Datacenter: "alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb",
Author: "Opsgenie for Alert Management",
Team: "Datastores Non-Critical",
Resolved: true,
},
},
"opsgenie_alert_resolved.json": {
slackMessage: slackMessage{
TS: 1712916339,
Event: slackEvent{
ID: "1712916339.000300",
Channel: "C06U1DDBBU4",
Bot: slackBot{
Name: "Opsgenie for Alert Management",
},
Attachments: []slackAttachment{{
Color: "2ecc71",
Title: "#11069: [Grafana]: Firing: Alertconfig Workflow Failed",
Text: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Fields: []slackField{
{Value: "P3", Title: "Priority"},
{Value: "alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb", Title: "Tags"},
{Value: "Datastores Non-Critical", Title: "Routed Teams"},
},
Actions: []slackAction{},
}},
},
},
message: parsedSlackMessage{
ID: "1712916339.000300/1712916339",
TS: 1712916339,
Source: "https://renderinc.slack.com/archives/C06U1DDBBU4/p1712916339000300",
Channel: "C06U1DDBBU4",
Thread: "1712916339.000300",
EventName: "Alertconfig Workflow Failed",
Event: "11069",
Plaintext: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Asset: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Resolved: true,
Datacenter: "alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb",
Author: "Opsgenie for Alert Management",
Team: "Datastores Non-Critical",
},
},
"reingested_alert.json": {
message: parsedSlackMessage{
ID: "1712892637.037639/1712892637",
TS: 1712892637,
Source: "https://renderinc.slack.com/archives//p1712892637037639",
//Channel: "C06U1DDBBU4",
Thread: "1712892637.037639",
EventName: "Alertconfig Workflow Failed",
Event: "11061",
Plaintext: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Asset: "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
Resolved: true,
Datacenter: "alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb",
Author: "Opsgenie for Alert Management",
Team: "Datastores Non-Critical",
},
},
}
for name, d := range cases {
want := d
t.Run(name, func(t *testing.T) {
b, err := os.ReadFile(path.Join("testdata", "slack_events", name))
if err != nil {
t.Fatal(err)
}
t.Run("parseSlack", func(t *testing.T) {
got, err := parseSlack(b)
if err != nil {
t.Fatal(err)
}
if got != want.message {
assert.DeepEqual(t, want.message, got)
t.Errorf("wanted \n\t%+v, got\n\t%+v", want.message, got)
}
if time := got.Time(); time.Unix() != int64(got.TS) {
t.Error("not unix time", got.TS, time)
}
})
})
}
}
func TestWrappedSlack(t *testing.T) {
b, _ := os.ReadFile("testdata/slack_events/human_thread_message_from_opsgenie_alert.json")
b2, _ := json.Marshal(ChannelWrapper{Channel: "X", V: json.RawMessage(b)})
if got, err := _parseSlack(b); err != nil {
t.Fatal(err)
} else if got2, err := _parseSlack(b2); err != nil {
t.Fatal(err)
} else if got2.Event.Channel != "X" {
t.Error(got2.Event.Channel)
} else if got2.Event.ParentID == "" {
t.Error(got2.Event)
} else if got.Event.ParentID != got2.Event.ParentID {
t.Error(got, got2)
}
}
func TestWithPattern(t *testing.T) {
cases := map[string]struct {
given string
pattern string
want string
}{
"pods unavailable on node": {
given: `pods are unavailable on node ip-12-345-67-890.xx-yyyyy-1.compute.internal.`,
pattern: renderAssetPattern,
want: `ip-12-345-67-890.xx-yyyyy-1.compute.internal`,
},
"redis err": {
given: `Redis instance red-abc123 is emitting Some error repeatedly`,
pattern: renderAssetPattern,
want: `red-abc123`,
},
"pg err": {
given: `db dpg-xyz123 is in a pinch`,
pattern: renderAssetPattern,
want: `dpg-xyz123`,
},
}
for name, d := range cases {
c := d
t.Run(name, func(t *testing.T) {
got := withPattern(c.pattern, c.given)
if got != c.want {
t.Errorf("withPattern(%q, %q) expected %q but got %q", c.pattern, c.given, c.want, got)
}
})
}
}
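The cases above only pin down withPattern's observable behavior; the real renderAssetPattern is defined elsewhere in the repo. A hypothetical pattern that satisfies all three cases (the regexp itself is an assumption for illustration, not the repo's):

```go
package main

import (
	"fmt"
	"regexp"
)

// assetPattern is a hypothetical stand-in for renderAssetPattern: it pulls
// Render service IDs (red-*, dpg-*) and EC2 node hostnames out of alert text.
var assetPattern = regexp.MustCompile(
	`(?:red|dpg)-[a-z0-9]+|ip(?:-\d+){4}\.[a-z]{2}-[a-z]+-\d\.compute\.internal`)

func main() {
	// FindString returns the leftmost match, or "" when nothing matches,
	// which is the contract the table-driven test exercises.
	fmt.Println(assetPattern.FindString(
		"Redis instance red-abc123 is emitting Some error repeatedly")) // red-abc123
	fmt.Println(assetPattern.FindString(
		"pods are unavailable on node ip-12-345-67-890.xx-yyyyy-1.compute.internal.")) // ip-12-345-67-890.xx-yyyyy-1.compute.internal
}
```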

slackscrape.go Normal file

@@ -0,0 +1,149 @@
+package main
+
+import (
+	"context"
+	"encoding/json"
+	"fmt"
+	"io"
+	"log"
+	"net/http"
+	"net/url"
+	"strconv"
+	"time"
+
+	"golang.org/x/time/rate"
+)
+
+type SlackScrape struct {
+	Latest   int64
+	Oldest   int64
+	ThreadTS string
+	Channel  string
+	Token    string
+}
+
+func NewSlackScrapePipeline(ctx context.Context, cfg Config) (Pipeline, error) {
+	writer, err := NewQueue(ctx, "new_persistence", cfg.driver)
+	if err != nil {
+		return Pipeline{}, err
+	}
+	cfg.slackScrapePipeline.reader, err = NewQueue(ctx, "slack_channels_to_scrape", cfg.driver)
+	if err != nil {
+		return Pipeline{}, err
+	}
+	return Pipeline{
+		writer:  writer,
+		reader:  cfg.slackScrapePipeline.reader,
+		process: newSlackScrapeProcess(cfg),
+	}, nil
+}
+
+func newSlackScrapeProcess(cfg Config) processFunc {
+	limiter := rate.NewLimiter(0.5, 1)
+	return func(ctx context.Context, jobb []byte) ([]byte, error) {
+		if err := limiter.Wait(ctx); err != nil {
+			return nil, err
+		}
+		var job SlackScrape
+		if err := json.Unmarshal(jobb, &job); err != nil {
+			return nil, fmt.Errorf("received non SlackScrape payload: %w", err)
+		}
+		u := url.URL{
+			Scheme: "https",
+			Host:   "slack.com",
+			Path:   "/api/conversations.history",
+		}
+		q := url.Values{}
+		q.Set("channel", job.Channel)
+		q.Set("latest", strconv.FormatInt(job.Latest, 10))
+		q.Set("limit", "999")
+		q.Set("inclusive", "true")
+		if job.ThreadTS != "" {
+			u.Path = "/api/conversations.replies"
+			q.Set("ts", job.ThreadTS)
+		}
+		if job.Oldest != 0 {
+			q.Set("oldest", strconv.FormatInt(job.Oldest, 10))
+		}
+		u.RawQuery = q.Encode()
+		url := u.String()
+		req, err := http.NewRequest(http.MethodGet, url, nil)
+		if err != nil {
+			return nil, err
+		}
+		req.Header.Set("Authorization", "Bearer "+job.Token)
+		req = req.WithContext(ctx)
+		httpc := http.Client{Timeout: time.Second}
+		resp, err := httpc.Do(req)
+		if err != nil {
+			return nil, err
+		}
+		defer resp.Body.Close()
+		defer io.Copy(io.Discard, resp.Body)
+		if resp.StatusCode != http.StatusOK {
+			b, _ := io.ReadAll(resp.Body)
+			return nil, fmt.Errorf("(%d) %s", resp.StatusCode, b)
+		}
+		body, err := io.ReadAll(resp.Body)
+		if err != nil {
+			return nil, err
+		}
+		var page struct {
+			Messages []json.RawMessage
+		}
+		if err := json.Unmarshal(body, &page); err != nil {
+			return nil, err
+		}
+		newLatest := float64(job.Latest)
+		for _, messageJSON := range page.Messages {
+			if cfg.Debug {
+				log.Printf("slackScrapePipeline %s => %s", url, messageJSON)
+			}
+			b, _ := json.Marshal(ChannelWrapper{Channel: job.Channel, V: messageJSON})
+			if err := cfg.slackToModelPipeline.reader.Enqueue(ctx, b); err != nil {
+				return nil, err
+			}
+			var peekTS struct {
+				TS float64 `json:"ts,string"`
+			}
+			if err := json.Unmarshal(messageJSON, &peekTS); err == nil && peekTS.TS > 0 && peekTS.TS < newLatest {
+				newLatest = peekTS.TS
+			}
+			if job.ThreadTS == "" {
+				var peek struct {
+					ThreadTS string `json:"thread_ts"`
+				}
+				json.Unmarshal(messageJSON, &peek)
+				if peek.ThreadTS != "" {
+					clone := job
+					clone.ThreadTS = peek.ThreadTS
+					clone.Oldest = 0
+					b, _ := json.Marshal(clone)
+					if err := cfg.slackScrapePipeline.reader.Enqueue(ctx, b); err != nil {
+						return nil, err
+					}
+					log.Printf("fanout thread scrape for %s/%s", job.Channel, peek.ThreadTS)
+				}
+			}
+		}
+		if len(page.Messages) == 999 {
+			clone := job
+			clone.Latest = int64(newLatest)
+			b, _ := json.Marshal(clone)
+			if err := cfg.slackScrapePipeline.reader.Enqueue(ctx, b); err != nil {
+				return nil, err
+			}
+			log.Printf("fanout page scrape for %s up to %v", job.Channel, clone.Latest)
+		}
+		log.Printf("scraped %v from %s", len(page.Messages), url)
+		return nil, nil
+	}
+}
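The request construction inside newSlackScrapeProcess can be isolated into a small helper for illustration (`buildSlackURL` and the re-declared `SlackScrape` exist only in this sketch; the real code builds the URL inline): conversations.history for channel scrapes, switching to conversations.replies once a ThreadTS is set by the thread fanout.

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// SlackScrape mirrors the job struct from slackscrape.go, re-declared here
// so the sketch is self-contained.
type SlackScrape struct {
	Latest   int64
	Oldest   int64
	ThreadTS string
	Channel  string
	Token    string
}

// buildSlackURL reproduces the request-URL logic of newSlackScrapeProcess:
// history by default, replies when ThreadTS is set, oldest only when nonzero.
func buildSlackURL(job SlackScrape) string {
	u := url.URL{Scheme: "https", Host: "slack.com", Path: "/api/conversations.history"}
	q := url.Values{}
	q.Set("channel", job.Channel)
	q.Set("latest", strconv.FormatInt(job.Latest, 10))
	q.Set("limit", "999")
	q.Set("inclusive", "true")
	if job.ThreadTS != "" {
		u.Path = "/api/conversations.replies"
		q.Set("ts", job.ThreadTS)
	}
	if job.Oldest != 0 {
		q.Set("oldest", strconv.FormatInt(job.Oldest, 10))
	}
	u.RawQuery = q.Encode() // url.Values.Encode sorts keys alphabetically
	return u.String()
}

func main() {
	fmt.Println(buildSlackURL(SlackScrape{Channel: "C06U1DDBBU4", Latest: 1712927439}))
	fmt.Println(buildSlackURL(SlackScrape{Channel: "C06U1DDBBU4", ThreadTS: "1712927439.728409"}))
}
```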


@@ -2,83 +2,256 @@ package main
 import (
 	"context"
-	"errors"
-	"sort"
-	"time"
-)
+	"encoding/json"
+	"fmt"
+	"strings"
-var (
-	ErrNotFound = errors.New("not found")
+	"github.com/breel-render/spoc-bot-vr/model"
 )
 type Storage struct {
 	driver Driver
 }
-func NewStorage(driver Driver) Storage {
-	return Storage{driver: driver}
+func NewStorage(ctx context.Context, driver Driver) (Storage, error) {
+	if _, err := driver.ExecContext(ctx, `
+		CREATE TABLE IF NOT EXISTS events (ID TEXT UNIQUE);
+		CREATE TABLE IF NOT EXISTS messages (ID TEXT UNIQUE);
+		CREATE TABLE IF NOT EXISTS threads (ID TEXT UNIQUE);
+	`); err != nil {
+		return Storage{}, err
 	}
-func (s Storage) MessagesSince(ctx context.Context, t time.Time) ([]Message, error) {
-	return s.messagesWhere(ctx, func(m Message) bool {
-		return !t.After(m.Time())
-	})
+	for table, v := range map[string]any{
+		"events":   model.Event{},
+		"messages": model.Message{},
+		"threads":  model.Thread{},
+	} {
+		b, _ := json.Marshal(v)
+		var m map[string]struct{}
+		json.Unmarshal(b, &m)
+		for k := range m {
+			if k == `ID` {
+				continue
+			}
+			driver.ExecContext(ctx, fmt.Sprintf(`ALTER TABLE %s ADD COLUMN %s TEXT`, table, k))
+		}
 	}
-func (s Storage) Threads(ctx context.Context) ([]string, error) {
-	return s.ThreadsSince(ctx, time.Unix(0, 0))
+	return Storage{driver: driver}, nil
 }
-func (s Storage) ThreadsSince(ctx context.Context, t time.Time) ([]string, error) {
-	messages, err := s.MessagesSince(ctx, t)
+func (s Storage) GetEvent(ctx context.Context, ID string) (model.Event, error) {
+	v := model.Event{}
+	err := s.selectOne(ctx, "events", &v, "ID = $1", ID)
+	return v, err
+}
+func (s Storage) UpsertEvent(ctx context.Context, event model.Event) error {
+	return s.upsert(ctx, "events", event)
+}
+func (s Storage) GetMessage(ctx context.Context, ID string) (model.Message, error) {
+	v := model.Message{}
+	err := s.selectOne(ctx, "messages", &v, "ID = $1", ID)
+	return v, err
+}
+func (s Storage) UpsertMessage(ctx context.Context, message model.Message) error {
+	return s.upsert(ctx, "messages", message)
+}
+func (s Storage) GetThread(ctx context.Context, ID string) (model.Thread, error) {
+	v := model.Thread{}
+	err := s.selectOne(ctx, "threads", &v, "ID = $1", ID)
+	return v, err
+}
+func (s Storage) GetEventThreads(ctx context.Context, ID string) ([]model.Thread, error) {
+	return s.selectThreadsWhere(ctx, "EventID = $1", ID)
+}
+func (s Storage) GetThreadMessages(ctx context.Context, ID string) ([]model.Message, error) {
+	return s.selectMessagesWhere(ctx, "ThreadID = $1", ID)
+}
+func (s Storage) UpsertThread(ctx context.Context, thread model.Thread) error {
+	return s.upsert(ctx, "threads", thread)
+}
+func (s Storage) selectThreadsWhere(ctx context.Context, clause string, args ...any) ([]model.Thread, error) {
+	keys, _, _, _, err := keysArgsKeyargsValues(model.Thread{})
 	if err != nil {
 		return nil, err
 	}
-	threads := map[string]struct{}{}
-	for _, m := range messages {
-		threads[m.Thread] = struct{}{}
-	}
-	result := make([]string, 0, len(threads))
-	for k := range threads {
-		result = append(result, k)
-	}
-	sort.Strings(result)
-	return result, nil
+	args2 := make([]any, len(args))
+	for i := range args {
+		args2[i], _ = json.Marshal(args[i])
+	}
+	scanTargets := make([]any, len(keys))
-func (s Storage) Thread(ctx context.Context, thread string) ([]Message, error) {
-	return s.messagesWhere(ctx, func(m Message) bool {
-		return m.Thread == thread
-	})
-}
-func (s Storage) messagesWhere(ctx context.Context, where func(Message) bool) ([]Message, error) {
-	result := make([]Message, 0)
-	err := s.driver.ForEach(ctx, "m", func(_ string, v []byte) error {
-		m := MustDeserialize(v)
-		if !where(m) {
-			return nil
-		}
-		result = append(result, m)
-		return nil
-	})
-	sort.Slice(result, func(i, j int) bool {
-		return result[i].TS < result[j].TS
-	})
-	return result, err
-}
-func (s Storage) Upsert(ctx context.Context, m Message) error {
-	return s.driver.Set(ctx, "m", m.ID, m.Serialize())
-}
-func (s Storage) Get(ctx context.Context, id string) (Message, error) {
-	b, err := s.driver.Get(ctx, "m", id)
+	q := fmt.Sprintf(`
+		SELECT %s FROM threads WHERE %s
+		ORDER BY TS ASC
+	`, strings.Join(keys, ", "), clause)
+	rows, err := s.driver.QueryContext(ctx, q, args2...)
 	if err != nil {
-		return Message{}, err
+		return nil, err
 	}
-	if b == nil {
-		return Message{}, ErrNotFound
+	defer rows.Close()
+	var result []model.Thread
+	for rows.Next() {
+		for i := range scanTargets {
+			scanTargets[i] = &[]byte{}
 		}
-	return MustDeserialize(b), nil
+		if err := rows.Scan(scanTargets...); err != nil {
+			return nil, err
 		}
+		m := map[string]json.RawMessage{}
+		for i, k := range keys {
+			m[k] = *scanTargets[i].(*[]byte)
+		}
+		b, _ := json.Marshal(m)
+		var one model.Thread
+		if err := json.Unmarshal(b, &one); err != nil {
+			return nil, err
+		}
+		result = append(result, one)
+	}
+	return result, rows.Err()
+}
+func (s Storage) selectMessagesWhere(ctx context.Context, clause string, args ...any) ([]model.Message, error) {
+	keys, _, _, _, err := keysArgsKeyargsValues(model.Message{})
+	if err != nil {
+		return nil, err
+	}
+	args2 := make([]any, len(args))
+	for i := range args {
+		args2[i], _ = json.Marshal(args[i])
+	}
+	scanTargets := make([]any, len(keys))
+	q := fmt.Sprintf(`
+		SELECT %s FROM messages WHERE %s
+		ORDER BY TS ASC
+	`, strings.Join(keys, ", "), clause)
+	rows, err := s.driver.QueryContext(ctx, q, args2...)
+	if err != nil {
+		return nil, err
+	}
+	defer rows.Close()
+	var result []model.Message
+	for rows.Next() {
+		for i := range scanTargets {
+			scanTargets[i] = &[]byte{}
+		}
+		if err := rows.Scan(scanTargets...); err != nil {
+			return nil, err
+		}
+		m := map[string]json.RawMessage{}
+		for i, k := range keys {
+			m[k] = *scanTargets[i].(*[]byte)
+		}
+		b, _ := json.Marshal(m)
+		var one model.Message
+		if err := json.Unmarshal(b, &one); err != nil {
+			return nil, err
+		}
+		result = append(result, one)
+	}
+	return result, rows.Err()
+}
+func (s Storage) selectOne(ctx context.Context, table string, v any, clause string, args ...any) error {
+	if questions := strings.Count(clause, "$"); questions != len(args) {
+		return fmt.Errorf("expected %v args for clause but found %v", questions, len(args))
+	}
+	keys, _, _, _, err := keysArgsKeyargsValues(v)
+	if err != nil {
+		return err
+	}
+	for i := range args {
+		args[i], _ = json.Marshal(args[i])
+	}
+	q := fmt.Sprintf(`
+		SELECT %s FROM %s WHERE %s
+	`, strings.Join(keys, ", "), table, clause)
+	row := s.driver.QueryRowContext(ctx, q, args...)
+	if err := row.Err(); err != nil {
+		return err
+	}
+	scanTargets := make([]any, len(keys))
+	for i := range scanTargets {
+		scanTargets[i] = &[]byte{}
+	}
+	if err := row.Scan(scanTargets...); err != nil {
+		return err
+	}
+	m := map[string]json.RawMessage{}
+	for i, k := range keys {
+		m[k] = *scanTargets[i].(*[]byte)
+	}
+	b, _ := json.Marshal(m)
+	return json.Unmarshal(b, v)
+}
+func (s Storage) upsert(ctx context.Context, table string, v any) error {
+	keys, args, keyArgs, values, err := keysArgsKeyargsValues(v)
+	if err != nil || len(keys) == 0 {
+		return err
+	}
+	q := fmt.Sprintf(`
+		INSERT INTO %s (%s) VALUES (%s)
+		ON CONFLICT (ID) DO UPDATE SET %s
+	`, table, strings.Join(keys, ", "), strings.Join(args, ", "), strings.Join(keyArgs, ", "))
+	if result, err := s.driver.ExecContext(ctx, q, values...); err != nil {
+		return err
+	} else if n, err := result.RowsAffected(); err != nil {
+		return err
+	} else if n != 1 {
+		return fmt.Errorf("UpsertMessage affected %v rows", n)
+	}
+	return nil
+}
+func keysArgsKeyargsValues(v any) ([]string, []string, []string, []any, error) {
+	b, _ := json.Marshal(v)
+	var m map[string]json.RawMessage
+	err := json.Unmarshal(b, &m)
+	keys := []string{}
+	for k := range m {
+		keys = append(keys, k)
+	}
+	args := make([]string, len(keys))
+	for i := range args {
+		args[i] = fmt.Sprintf("$%d", i+1)
+	}
+	keyArgs := make([]string, len(keys))
+	for i := range keyArgs {
+		keyArgs[i] = fmt.Sprintf("%s=$%d", keys[i], i+1)
+	}
+	values := make([]any, len(keys))
+	for i := range values {
+		values[i] = []byte(m[keys[i]])
+	}
+	return keys, args, keyArgs, values, err
+}
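keysArgsKeyargsValues derives the column list from a value's JSON encoding, so the statement upsert builds follows mechanically. A sketch of the generated SQL shape (keys are sorted here for a stable demo; the real helper iterates a Go map, whose order is unspecified, which is harmless because the same keys slice drives both the column list and the scan targets; the `thread` struct's field set is an assumption for the demo, not model.Thread's):

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
	"strings"
)

// thread is a stand-in for model.Thread; its field set is assumed for the demo.
type thread struct {
	ID      string
	EventID string
	TS      int64
}

// upsertSQL sketches the statement Storage.upsert builds from any
// JSON-marshalable value: one $n placeholder per column, with an
// ON CONFLICT clause keyed on the ID column.
func upsertSQL(table string, v any) string {
	b, _ := json.Marshal(v)
	var m map[string]json.RawMessage
	json.Unmarshal(b, &m)
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic for the demo only
	args := make([]string, len(keys))
	keyArgs := make([]string, len(keys))
	for i, k := range keys {
		args[i] = fmt.Sprintf("$%d", i+1)
		keyArgs[i] = fmt.Sprintf("%s=$%d", k, i+1)
	}
	return fmt.Sprintf("INSERT INTO %s (%s) VALUES (%s) ON CONFLICT (ID) DO UPDATE SET %s",
		table, strings.Join(keys, ", "), strings.Join(args, ", "), strings.Join(keyArgs, ", "))
}

func main() {
	fmt.Println(upsertSQL("threads", thread{}))
	// INSERT INTO threads (EventID, ID, TS) VALUES ($1, $2, $3) ON CONFLICT (ID) DO UPDATE SET EventID=$1, ID=$2, TS=$3
}
```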


@@ -2,68 +2,149 @@ package main
 import (
 	"context"
+	"fmt"
+	"math/rand"
 	"testing"
 	"time"
+	"github.com/breel-render/spoc-bot-vr/model"
 )
+//func newStorageFromTestdata(t *testing.T) {
 func TestStorage(t *testing.T) {
-	ctx, can := context.WithTimeout(context.Background(), time.Second)
+	ctx, can := context.WithTimeout(context.Background(), time.Minute)
 	defer can()
-	t.Run("Threads", func(t *testing.T) {
-		s := NewStorage(NewRAM())
-		mX1 := Message{ID: "1", Thread: "X", TS: 1}
-		mX2 := Message{ID: "2", Thread: "X", TS: 2}
-		mY1 := Message{ID: "1", Thread: "Y", TS: 3}
-		for _, m := range []Message{mX1, mX2, mY1} {
-			if err := s.Upsert(ctx, m); err != nil {
+	s, err := NewStorage(ctx, NewTestDriver(t))
+	if err != nil {
 		t.Fatal(err)
 	}
+	t.Run("upsert get event", func(t *testing.T) {
+		m := model.NewEvent(
+			"ID",
+			"URL",
+			1,
+			"Name",
+			"Asset",
+			"Datacenter",
+			"Team",
+			true,
+		)
+		if err := s.UpsertEvent(ctx, m); err != nil {
+			t.Fatal("unexpected error on insert:", err)
+		} else if err := s.UpsertEvent(ctx, m); err != nil {
+			t.Fatal("unexpected error on noop update:", err)
 		}
-		if threads, err := s.Threads(ctx); err != nil {
-			t.Error(err)
-		} else if len(threads) != 2 {
-			t.Error(threads)
-		} else if threads[0] != "X" {
-			t.Error(threads, "X")
-		} else if threads[1] != "Y" {
-			t.Error(threads, "Y")
-		}
-		if threads, err := s.ThreadsSince(ctx, time.Unix(3, 0)); err != nil {
-			t.Error(err)
-		} else if len(threads) != 1 {
-			t.Error(threads)
-		} else if threads[0] != "Y" {
-			t.Error(threads[0])
+		if got, err := s.GetEvent(ctx, m.ID); err != nil {
+			t.Fatal("unexpected error on get:", err)
+		} else if got != m {
+			t.Fatal("unexpected result from get:", got)
 		}
 	})
-	t.Run("Get Upsert", func(t *testing.T) {
-		s := NewStorage(NewRAM())
+	t.Run("upsert get thread", func(t *testing.T) {
+		m := model.NewThread(
+			"ID",
+			"URL",
+			1,
+			"Channel",
+			"EventID",
+		)
-		if _, err := s.Get(ctx, "id"); err != ErrNotFound {
-			t.Error("failed to get 404", err)
+		if err := s.UpsertThread(ctx, m); err != nil {
+			t.Fatal("unexpected error on insert:", err)
+		} else if err := s.UpsertThread(ctx, m); err != nil {
+			t.Fatal("unexpected error on noop update:", err)
 		}
-		m := Message{
-			ID: "id",
-			TS: 1,
+		if got, err := s.GetThread(ctx, m.ID); err != nil {
+			t.Fatal("unexpected error on get:", err)
+		} else if got != m {
+			t.Fatal("unexpected result from get:", got)
 		}
 	})
+	t.Run("upsert get message", func(t *testing.T) {
+		m := model.NewMessage(
+			"ID",
+			1,
+			"Author",
+			"Plaintext",
+			"ThreadID",
+		)
+		if err := s.UpsertMessage(ctx, m); err != nil {
+			t.Fatal("unexpected error on insert:", err)
+		} else if err := s.UpsertMessage(ctx, m); err != nil {
+			t.Fatal("unexpected error on noop update:", err)
 		}
-		if err := s.Upsert(ctx, m); err != nil {
-			t.Error("failed to upsert", err)
+		if got, err := s.GetMessage(ctx, m.ID); err != nil {
+			t.Fatal("unexpected error on get:", err)
+		} else if got != m {
+			t.Fatal("unexpected result from get:", got)
 		}
 	})
+	t.Run("get thread messages", func(t *testing.T) {
+		thread := fmt.Sprintf("thread-%d", rand.Int())
+		m := model.NewMessage(
+			"ID",
+			1,
+			"Author",
+			"Plaintext",
+			thread,
+		)
+		if err := s.UpsertMessage(ctx, m); err != nil {
+			t.Fatal("unexpected error on insert:", err)
+		} else if m2, err := s.GetMessage(ctx, m.ID); err != nil {
+			t.Fatal("unexpected error on upsert-get:", err)
+		} else if m2 != m {
+			t.Errorf("expected %+v but got %+v", m, m2)
 		}
-		if m2, err := s.Get(ctx, "id"); err != nil {
-			t.Error("failed to get", err)
-		} else if m != m2 {
-			t.Error(m2)
+		msgs, err := s.GetThreadMessages(ctx, thread)
+		if err != nil {
+			t.Fatal(err)
+		} else if len(msgs) != 1 {
+			t.Fatal(msgs)
+		} else if msgs[0].ThreadID != m.ThreadID {
+			t.Fatal(msgs[0].ThreadID)
+		} else if msgs[0] != m {
+			t.Fatalf("wanted msgs like %+v but got %+v", m, msgs[0])
 		}
 	})
+	t.Run("get event threads", func(t *testing.T) {
+		event := fmt.Sprintf("event-%d", rand.Int())
+		m := model.NewThread(
+			"ID",
+			"URL",
+			1,
+			"Channel",
+			event,
+		)
+		if err := s.UpsertThread(ctx, m); err != nil {
+			t.Fatal("unexpected error on insert:", err)
+		} else if m2, err := s.GetThread(ctx, m.ID); err != nil {
+			t.Fatal("unexpected error on upsert-get:", err)
+		} else if m2 != m {
+			t.Errorf("expected %+v but got %+v", m, m2)
+		}
+		msgs, err := s.GetEventThreads(ctx, event)
+		if err != nil {
+			t.Fatal(err)
+		} else if len(msgs) != 1 {
+			t.Fatal(msgs)
+		} else if msgs[0].EventID != m.EventID {
+			t.Fatal(msgs[0].EventID)
+		} else if msgs[0] != m {
+			t.Fatalf("wanted msgs like %+v but got %+v", m, msgs[0])
+		}
+	})
 }


@@ -29,7 +29,8 @@
"attachments": [
{
"id": 1,
"color": "F4511E",
"realcolor": "F4511E",
"color": "2ecc71",
"fallback": "New alert: \"[Grafana]: Firing: Alertconfig Workflow Failed\" <https://opsg.in/a/i/render/38152bc5-bc5d-411d-9feb-d285af5b6481-1712927439305|11071>\nTags: alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb",
"text": "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
"title": "#11071: [Grafana]: Firing: Alertconfig Workflow Failed",


@@ -0,0 +1,57 @@
+{
+  "user": "U03RUK7FBUY",
+  "type": "message",
+  "ts": "1712892637.037639",
+  "edited": {
+    "user": "B03RHGBPH2M",
+    "ts": "1712896236.000000"
+  },
+  "bot_id": "B03RHGBPH2M",
+  "app_id": "A286WATV2",
+  "text": "",
+  "team": "T9RQLQ0KV",
+  "bot_profile": {
+    "id": "B03RHGBPH2M",
+    "app_id": "A286WATV2",
+    "name": "Opsgenie for Alert Management",
+    "icons": {
+      "image_36": "https://avatars.slack-edge.com/2019-05-30/652285939191_7831939cc30ef7159561_36.png",
+      "image_48": "https://avatars.slack-edge.com/2019-05-30/652285939191_7831939cc30ef7159561_48.png",
+      "image_72": "https://avatars.slack-edge.com/2019-05-30/652285939191_7831939cc30ef7159561_72.png"
+    },
+    "deleted": false,
+    "updated": 1658887059,
+    "team_id": "T9RQLQ0KV"
+  },
+  "attachments": [
+    {
+      "id": 1,
+      "color": "2ecc71",
+      "fallback": "\"[Grafana]: Firing: Alertconfig Workflow Failed\" <https://opsg.in/a/i/render/bdbbe5a6-738b-4643-9267-39d8dfcb2ead-1712892636514|11061>\nTags: alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb",
+      "text": "At least one alertconfig run has failed unexpectedly.\nDashboard: <https://grafana.render.com/d/VLZU83YVk?orgId=1>\nPanel: <https://grafana.render.com/d/VLZU83YVk?orgId=1&amp;viewPanel=17>\nSource: <https://grafana.render.com/alerting/grafana/fa7b06b8-b4d8-4979-bce7-5e1c432edd81/view?orgId=1>",
+      "title": "#11061: [Grafana]: Firing: Alertconfig Workflow Failed",
+      "title_link": "https://opsg.in/a/i/render/bdbbe5a6-738b-4643-9267-39d8dfcb2ead-1712892636514",
+      "callback_id": "bbd4a269-08a9-470e-ba79-ce238ac03dc7_05fa2e9b-bec4-4a7e-842d-36043d267a13_11061",
+      "fields": [
+        {
+          "value": "P3",
+          "title": "Priority",
+          "short": true
+        },
+        {
+          "value": "alertname:Alertconfig Workflow Failed, grafana_folder:Datastores, rule_uid:a7639f7e-6950-41be-850a-b22119f74cbb",
+          "title": "Tags",
+          "short": true
+        },
+        {
+          "value": "Datastores Non-Critical",
+          "title": "Routed Teams",
+          "short": true
+        }
+      ],
+      "mrkdwn_in": [
+        "text"
+      ]
+    }
+  ]
+}