diff --git a/vendor/github.com/PuerkitoBio/goquery/LICENSE b/vendor/github.com/PuerkitoBio/goquery/LICENSE
new file mode 100644
index 0000000..f743d37
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/LICENSE
@@ -0,0 +1,12 @@
+Copyright (c) 2012-2016, Martin Angers & Contributors
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+
+* Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/PuerkitoBio/goquery/README.md b/vendor/github.com/PuerkitoBio/goquery/README.md
new file mode 100644
index 0000000..41f6512
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/README.md
@@ -0,0 +1,178 @@
+# goquery - a little like that j-thing, only in Go
+[build status](http://travis-ci.org/PuerkitoBio/goquery) [GoDoc](http://godoc.org/github.com/PuerkitoBio/goquery) [Sourcegraph](https://sourcegraph.com/github.com/PuerkitoBio/goquery?badge)
+
+goquery brings a syntax and a set of features similar to [jQuery][] to the [Go language][go]. It is based on Go's [net/html package][html] and the CSS Selector library [cascadia][]. Since the net/html parser returns nodes, and not a full-featured DOM tree, jQuery's stateful manipulation functions (like height(), css(), detach()) have been left off.
+
+Also, because the net/html parser requires UTF-8 encoding, so does goquery: it is the caller's responsibility to ensure that the source document provides UTF-8 encoded HTML. See the [wiki][] for various options to do this.
+
+Syntax-wise, it is as close as possible to jQuery, with the same function names when possible, and that warm and fuzzy chainable interface. jQuery being the ultra-popular library that it is, I felt that it was better for a similar HTML-manipulating library to follow its API than to start anew (in the same spirit as Go's `fmt` package), even though some of its methods are less than intuitive (looking at you, [index()][index]...).
+
+## Table of Contents
+
+* [Installation](#installation)
+* [Changelog](#changelog)
+* [API](#api)
+* [Examples](#examples)
+* [Related Projects](#related-projects)
+* [Support](#support)
+* [License](#license)
+
+## Installation
+
+Please note that because of the net/html dependency, goquery requires Go1.1+.
+
+ $ go get github.com/PuerkitoBio/goquery
+
+(optional) To run unit tests:
+
+ $ cd $GOPATH/src/github.com/PuerkitoBio/goquery
+ $ go test
+
+(optional) To run benchmarks (warning: it runs for a few minutes):
+
+ $ cd $GOPATH/src/github.com/PuerkitoBio/goquery
+ $ go test -bench=".*"
+
+## Changelog
+
+**Note that goquery's API is now stable, and will not break.**
+
+* **2018-06-07 (v1.4.1)** : Add `NewDocumentFromReader` examples.
+* **2018-03-24 (v1.4.0)** : Deprecate `NewDocument(url)` and `NewDocumentFromResponse(response)`.
+* **2018-01-28 (v1.3.0)** : Add `ToEnd` constant to `Slice` until the end of the selection (thanks to @davidjwilkins for raising the issue).
+* **2018-01-11 (v1.2.0)** : Add `AddBack*` and deprecate `AndSelf` (thanks to @davidjwilkins).
+* **2017-02-12 (v1.1.0)** : Add `SetHtml` and `SetText` (thanks to @glebtv).
+* **2016-12-29 (v1.0.2)** : Optimize allocations for `Selection.Text` (thanks to @radovskyb).
+* **2016-08-28 (v1.0.1)** : Optimize performance for large documents.
+* **2016-07-27 (v1.0.0)** : Tag version 1.0.0.
+* **2016-06-15** : Invalid selector strings internally compile to a `Matcher` implementation that never matches any node (instead of a panic). So for example, `doc.Find("~")` returns an empty `*Selection` object.
+* **2016-02-02** : Add `NodeName` utility function similar to the DOM's `nodeName` property. It returns the tag name of the first element in a selection, and other relevant values of non-element nodes (see godoc for details). Add `OuterHtml` utility function similar to the DOM's `outerHTML` property (named `OuterHtml` rather than `OuterHTML` for consistency with the existing `Html` method on the `Selection`).
+* **2015-04-20** : Add `AttrOr` helper method to return the attribute's value or a default value if absent. Thanks to [piotrkowalczuk][piotr].
+* **2015-02-04** : Add more manipulation functions - Prepend* - thanks again to [Andrew Stone][thatguystone].
+* **2014-11-28** : Add more manipulation functions - ReplaceWith*, Wrap* and Unwrap - thanks again to [Andrew Stone][thatguystone].
+* **2014-11-07** : Add manipulation functions (thanks to [Andrew Stone][thatguystone]) and `*Matcher` functions, that receive compiled cascadia selectors instead of selector strings, thus avoiding potential panics thrown by goquery via `cascadia.MustCompile` calls. This results in better performance (selectors can be compiled once and reused) and more idiomatic error handling (you can handle cascadia's compilation errors, instead of recovering from panics, which had been bugging me for a long time). Note that the actual type expected is a `Matcher` interface, that `cascadia.Selector` implements. Other matcher implementations could be used.
+* **2014-11-06** : Change import paths of net/html to golang.org/x/net/html (see https://groups.google.com/forum/#!topic/golang-nuts/eD8dh3T9yyA). Make sure to update your code to use the new import path too when you call goquery with `html.Node`s.
+* **v0.3.2** : Add `NewDocumentFromReader()` (thanks jweir) which allows creating a goquery document from an io.Reader.
+* **v0.3.1** : Add `NewDocumentFromResponse()` (thanks assassingj) which allows creating a goquery document from an http response.
+* **v0.3.0** : Add `EachWithBreak()` which allows breaking out of an `Each()` loop by returning false. This function was added instead of changing the existing `Each()` to avoid breaking compatibility.
+* **v0.2.1** : Make go-getable, now that [go.net/html is Go1.0-compatible][gonet] (thanks to @matrixik for pointing this out).
+* **v0.2.0** : Add support for negative indices in Slice(). **BREAKING CHANGE** `Document.Root` is removed, `Document` is now a `Selection` itself (a selection of one, the root element, just like `Document.Root` was before). Add jQuery's Closest() method.
+* **v0.1.1** : Add benchmarks to use as baseline for refactorings, refactor Next...() and Prev...() methods to use the new html package's linked list features (Next/PrevSibling, FirstChild). Good performance boost (40+% in some cases).
+* **v0.1.0** : Initial release.
+
+## API
+
+goquery exposes two structs, `Document` and `Selection`, and the `Matcher` interface. Unlike jQuery, which is loaded as part of a DOM document, and thus acts on its containing document, goquery doesn't know which HTML document to act upon. So it needs to be told, and that's what the `Document` type is for. It holds the root document node as the initial Selection value to manipulate.
+
+jQuery often has many variants for the same function (no argument, a selector string argument, a jQuery object argument, a DOM element argument, ...). Instead of exposing the same features in goquery as a single method with variadic empty interface arguments, statically-typed signatures are used following this naming convention:
+
+* When the jQuery equivalent can be called with no argument, it has the same name as jQuery for the no argument signature (e.g.: `Prev()`), and the version with a selector string argument is called `XxxFiltered()` (e.g.: `PrevFiltered()`)
+* When the jQuery equivalent **requires** one argument, the same name as jQuery is used for the selector string version (e.g.: `Is()`)
+* The signatures accepting a jQuery object as argument are defined in goquery as `XxxSelection()` and take a `*Selection` object as argument (e.g.: `FilterSelection()`)
+* The signatures accepting a DOM element as argument in jQuery are defined in goquery as `XxxNodes()` and take a variadic argument of type `*html.Node` (e.g.: `FilterNodes()`)
+* The signatures accepting a function as argument in jQuery are defined in goquery as `XxxFunction()` and take a function as argument (e.g.: `FilterFunction()`)
+* The goquery methods that can be called with a selector string have a corresponding version that take a `Matcher` interface and are defined as `XxxMatcher()` (e.g.: `IsMatcher()`)
+
+Utility functions that are not in jQuery but are useful in Go are implemented as functions (that take a `*Selection` as parameter), to avoid a potential naming clash on the `*Selection`'s methods (reserved for jQuery-equivalent behaviour).
+
+The complete [godoc reference documentation can be found here][doc].
+
+Please note that Cascadia's selectors do not necessarily match all supported selectors of jQuery (Sizzle). See the [cascadia project][cascadia] for details. Invalid selector strings compile to a `Matcher` that fails to match any node. Behaviour of the various functions that take a selector string as argument follows from that fact, e.g. (where `~` is an invalid selector string):
+
+* `Find("~")` returns an empty selection because the selector string doesn't match anything.
+* `Add("~")` returns a new selection that holds the same nodes as the original selection, because it didn't add any node (selector string didn't match anything).
+* `ParentsFiltered("~")` returns an empty selection because the selector string doesn't match anything.
+* `ParentsUntil("~")` returns all parents of the selection because the selector string didn't match any element to stop before the top element.
+
+## Examples
+
+See some tips and tricks in the [wiki][].
+
+Adapted from example_test.go:
+
+```Go
+package main
+
+import (
+ "fmt"
+ "log"
+ "net/http"
+
+ "github.com/PuerkitoBio/goquery"
+)
+
+func ExampleScrape() {
+ // Request the HTML page.
+ res, err := http.Get("http://metalsucks.net")
+ if err != nil {
+ log.Fatal(err)
+ }
+ defer res.Body.Close()
+ if res.StatusCode != 200 {
+ log.Fatalf("status code error: %d %s", res.StatusCode, res.Status)
+ }
+
+ // Load the HTML document
+ doc, err := goquery.NewDocumentFromReader(res.Body)
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ // Find the review items
+ doc.Find(".sidebar-reviews article .content-block").Each(func(i int, s *goquery.Selection) {
+ // For each item found, get the band and title
+ band := s.Find("a").Text()
+ title := s.Find("i").Text()
+ fmt.Printf("Review %d: %s - %s\n", i, band, title)
+ })
+}
+
+func main() {
+ ExampleScrape()
+}
+```
+
+## Related Projects
+
+- [Goq][goq], an HTML deserialization and scraping library based on goquery and struct tags.
+- [andybalholm/cascadia][cascadia], the CSS selector library used by goquery.
+- [suntong/cascadia][cascadiacli], a command-line interface to the cascadia CSS selector library, useful to test selectors.
+- [asciimoo/colly](https://github.com/asciimoo/colly), a lightning-fast and elegant scraping framework.
+- [gnulnx/goperf](https://github.com/gnulnx/goperf), a website performance test tool that also fetches static assets.
+- [MontFerret/ferret](https://github.com/MontFerret/ferret), declarative web scraping.
+
+## Support
+
+There are a number of ways you can support the project:
+
+* Use it, star it, build something with it, spread the word!
+ - If you do build something open-source or otherwise publicly-visible, let me know so I can add it to the [Related Projects](#related-projects) section!
+* Raise issues to improve the project (note: doc typos and clarifications are issues too!)
+ - Please search existing issues before opening a new one - it may have already been addressed.
+* Pull requests: please discuss new code in an issue first, unless the fix is really trivial.
+ - Make sure new code is tested.
+ - Be mindful of existing code - PRs that break existing code have a high probability of being declined, unless it fixes a serious issue.
+
+If you desperately want to send money my way, I have a BuyMeACoffee.com page:
+
+## License
+
+The [BSD 3-Clause license][bsd], the same as the [Go language][golic]. Cascadia's license is [here][caslic].
+
+[jquery]: http://jquery.com/
+[go]: http://golang.org/
+[cascadia]: https://github.com/andybalholm/cascadia
+[cascadiacli]: https://github.com/suntong/cascadia
+[bsd]: http://opensource.org/licenses/BSD-3-Clause
+[golic]: http://golang.org/LICENSE
+[caslic]: https://github.com/andybalholm/cascadia/blob/master/LICENSE
+[doc]: http://godoc.org/github.com/PuerkitoBio/goquery
+[index]: http://api.jquery.com/index/
+[gonet]: https://github.com/golang/net/
+[html]: http://godoc.org/golang.org/x/net/html
+[wiki]: https://github.com/PuerkitoBio/goquery/wiki/Tips-and-tricks
+[thatguystone]: https://github.com/thatguystone
+[piotr]: https://github.com/piotrkowalczuk
+[goq]: https://github.com/andrewstuart/goq
diff --git a/vendor/github.com/PuerkitoBio/goquery/array.go b/vendor/github.com/PuerkitoBio/goquery/array.go
new file mode 100644
index 0000000..1b1f6cb
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/array.go
@@ -0,0 +1,124 @@
+package goquery
+
+import (
+ "golang.org/x/net/html"
+)
+
+const (
+ maxUint = ^uint(0)
+ maxInt = int(maxUint >> 1)
+
+ // ToEnd is a special index value that can be used as end index in a call
+ // to Slice so that all elements are selected until the end of the Selection.
+ // It is equivalent to passing (*Selection).Length().
+ ToEnd = maxInt
+)
+
+// First reduces the set of matched elements to the first in the set.
+// It returns a new Selection object, and an empty Selection object if
+// the selection is empty.
+func (s *Selection) First() *Selection {
+ return s.Eq(0)
+}
+
+// Last reduces the set of matched elements to the last in the set.
+// It returns a new Selection object, and an empty Selection object if
+// the selection is empty.
+func (s *Selection) Last() *Selection {
+ return s.Eq(-1)
+}
+
+// Eq reduces the set of matched elements to the one at the specified index.
+// If a negative index is given, it counts backwards starting at the end of the
+// set. It returns a new Selection object, and an empty Selection object if the
+// index is invalid.
+func (s *Selection) Eq(index int) *Selection {
+ if index < 0 {
+ index += len(s.Nodes)
+ }
+
+ if index >= len(s.Nodes) || index < 0 {
+ return newEmptySelection(s.document)
+ }
+
+ return s.Slice(index, index+1)
+}
+
+// Slice reduces the set of matched elements to a subset specified by a range
+// of indices. The start index is 0-based and indicates the index of the first
+// element to select. The end index is 0-based and indicates the index at which
+// the elements stop being selected (the end index is not selected).
+//
+// The indices may be negative, in which case they represent an offset from the
+// end of the selection.
+//
+// The special value ToEnd may be specified as end index, in which case all elements
+// until the end are selected. This works both for a positive and negative start
+// index.
+func (s *Selection) Slice(start, end int) *Selection {
+ if start < 0 {
+ start += len(s.Nodes)
+ }
+ if end == ToEnd {
+ end = len(s.Nodes)
+ } else if end < 0 {
+ end += len(s.Nodes)
+ }
+ return pushStack(s, s.Nodes[start:end])
+}
+
+// Get retrieves the underlying node at the specified index.
+// Get without parameter is not implemented, since the node array is available
+// on the Selection object.
+func (s *Selection) Get(index int) *html.Node {
+ if index < 0 {
+ index += len(s.Nodes) // Negative index gets from the end
+ }
+ return s.Nodes[index]
+}
+
+// Index returns the position of the first element within the Selection object
+// relative to its sibling elements.
+func (s *Selection) Index() int {
+ if len(s.Nodes) > 0 {
+ return newSingleSelection(s.Nodes[0], s.document).PrevAll().Length()
+ }
+ return -1
+}
+
+// IndexSelector returns the position of the first element within the
+// Selection object relative to the elements matched by the selector, or -1 if
+// not found.
+func (s *Selection) IndexSelector(selector string) int {
+ if len(s.Nodes) > 0 {
+ sel := s.document.Find(selector)
+ return indexInSlice(sel.Nodes, s.Nodes[0])
+ }
+ return -1
+}
+
+// IndexMatcher returns the position of the first element within the
+// Selection object relative to the elements matched by the matcher, or -1 if
+// not found.
+func (s *Selection) IndexMatcher(m Matcher) int {
+ if len(s.Nodes) > 0 {
+ sel := s.document.FindMatcher(m)
+ return indexInSlice(sel.Nodes, s.Nodes[0])
+ }
+ return -1
+}
+
+// IndexOfNode returns the position of the specified node within the Selection
+// object, or -1 if not found.
+func (s *Selection) IndexOfNode(node *html.Node) int {
+ return indexInSlice(s.Nodes, node)
+}
+
+// IndexOfSelection returns the position of the first node in the specified
+// Selection object within this Selection object, or -1 if not found.
+func (s *Selection) IndexOfSelection(sel *Selection) int {
+ if sel != nil && len(sel.Nodes) > 0 {
+ return indexInSlice(s.Nodes, sel.Nodes[0])
+ }
+ return -1
+}
diff --git a/vendor/github.com/PuerkitoBio/goquery/doc.go b/vendor/github.com/PuerkitoBio/goquery/doc.go
new file mode 100644
index 0000000..71146a7
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/doc.go
@@ -0,0 +1,123 @@
+// Copyright (c) 2012-2016, Martin Angers & Contributors
+// All rights reserved.
+//
+// Redistribution and use in source and binary forms, with or without modification,
+// are permitted provided that the following conditions are met:
+//
+// * Redistributions of source code must retain the above copyright notice,
+// this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above copyright notice,
+// this list of conditions and the following disclaimer in the documentation and/or
+// other materials provided with the distribution.
+// * Neither the name of the author nor the names of its contributors may be used to
+// endorse or promote products derived from this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS
+// OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
+// AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
+// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+// DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+// WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
+// WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+/*
+Package goquery implements features similar to jQuery, including the chainable
+syntax, to manipulate and query an HTML document.
+
+It brings a syntax and a set of features similar to jQuery to the Go language.
+It is based on Go's net/html package and the CSS Selector library cascadia.
+Since the net/html parser returns nodes, and not a full-featured DOM
+tree, jQuery's stateful manipulation functions (like height(), css(), detach())
+have been left off.
+
+Also, because the net/html parser requires UTF-8 encoding, so does goquery: it is
+the caller's responsibility to ensure that the source document provides UTF-8 encoded HTML.
+See the repository's wiki for various options on how to do this.
+
+Syntax-wise, it is as close as possible to jQuery, with the same method names when
+possible, and that warm and fuzzy chainable interface. jQuery being the
+ultra-popular library that it is, it seemed better for a similar HTML-manipulating
+library to follow its API than to start anew (in the same spirit as
+Go's fmt package), even though some of its methods are less than intuitive (looking
+at you, index()...).
+
+It is hosted on GitHub, along with additional documentation in the README.md
+file: https://github.com/puerkitobio/goquery
+
+Please note that because of the net/html dependency, goquery requires Go1.1+.
+
+The various methods are split into files based on the category of behavior.
+The three dots (...) indicate that various "overloads" are available.
+
+* array.go : array-like positional manipulation of the selection.
+ - Eq()
+ - First()
+ - Get()
+ - Index...()
+ - Last()
+ - Slice()
+
+* expand.go : methods that expand or augment the selection's set.
+ - Add...()
+ - AndSelf()
+ - Union(), which is an alias for AddSelection()
+
+* filter.go : filtering methods, that reduce the selection's set.
+ - End()
+ - Filter...()
+ - Has...()
+ - Intersection(), which is an alias of FilterSelection()
+ - Not...()
+
+* iteration.go : methods to loop over the selection's nodes.
+ - Each()
+ - EachWithBreak()
+ - Map()
+
+* manipulation.go : methods for modifying the document
+ - After...()
+ - Append...()
+ - Before...()
+ - Clone()
+ - Empty()
+ - Prepend...()
+ - Remove...()
+ - ReplaceWith...()
+ - Unwrap()
+ - Wrap...()
+ - WrapAll...()
+ - WrapInner...()
+
+* property.go : methods that inspect and get the node's properties values.
+ - Attr*(), RemoveAttr(), SetAttr()
+ - AddClass(), HasClass(), RemoveClass(), ToggleClass()
+ - Html()
+ - Length()
+ - Size(), which is an alias for Length()
+ - Text()
+
+* query.go : methods that query, or reflect, a node's identity.
+ - Contains()
+ - Is...()
+
+* traversal.go : methods to traverse the HTML document tree.
+ - Children...()
+ - Contents()
+ - Find...()
+ - Next...()
+ - Parent[s]...()
+ - Prev...()
+ - Siblings...()
+
+* type.go : definition of the types exposed by goquery.
+ - Document
+ - Selection
+ - Matcher
+
+* utilities.go : definition of helper functions (and not methods on a *Selection)
+that are not part of jQuery, but are useful to goquery.
+ - NodeName
+ - OuterHtml
+*/
+package goquery
diff --git a/vendor/github.com/PuerkitoBio/goquery/expand.go b/vendor/github.com/PuerkitoBio/goquery/expand.go
new file mode 100644
index 0000000..7caade5
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/expand.go
@@ -0,0 +1,70 @@
+package goquery
+
+import "golang.org/x/net/html"
+
+// Add adds the selector string's matching nodes to those in the current
+// selection and returns a new Selection object.
+// The selector string is run in the context of the document of the current
+// Selection object.
+func (s *Selection) Add(selector string) *Selection {
+ return s.AddNodes(findWithMatcher([]*html.Node{s.document.rootNode}, compileMatcher(selector))...)
+}
+
+// AddMatcher adds the matcher's matching nodes to those in the current
+// selection and returns a new Selection object.
+// The matcher is run in the context of the document of the current
+// Selection object.
+func (s *Selection) AddMatcher(m Matcher) *Selection {
+ return s.AddNodes(findWithMatcher([]*html.Node{s.document.rootNode}, m)...)
+}
+
+// AddSelection adds the specified Selection object's nodes to those in the
+// current selection and returns a new Selection object.
+func (s *Selection) AddSelection(sel *Selection) *Selection {
+ if sel == nil {
+ return s.AddNodes()
+ }
+ return s.AddNodes(sel.Nodes...)
+}
+
+// Union is an alias for AddSelection.
+func (s *Selection) Union(sel *Selection) *Selection {
+ return s.AddSelection(sel)
+}
+
+// AddNodes adds the specified nodes to those in the
+// current selection and returns a new Selection object.
+func (s *Selection) AddNodes(nodes ...*html.Node) *Selection {
+ return pushStack(s, appendWithoutDuplicates(s.Nodes, nodes, nil))
+}
+
+// AndSelf adds the previous set of elements on the stack to the current set.
+// It returns a new Selection object containing the current Selection combined
+// with the previous one.
+// Deprecated: This function has been deprecated and is now an alias for AddBack().
+func (s *Selection) AndSelf() *Selection {
+ return s.AddBack()
+}
+
+// AddBack adds the previous set of elements on the stack to the current set.
+// It returns a new Selection object containing the current Selection combined
+// with the previous one.
+func (s *Selection) AddBack() *Selection {
+ return s.AddSelection(s.prevSel)
+}
+
+// AddBackFiltered reduces the previous set of elements on the stack to those that
+// match the selector string, and adds them to the current set.
+// It returns a new Selection object containing the current Selection combined
+// with the filtered previous one.
+func (s *Selection) AddBackFiltered(selector string) *Selection {
+ return s.AddSelection(s.prevSel.Filter(selector))
+}
+
+// AddBackMatcher reduces the previous set of elements on the stack to those that match
+// the matcher, and adds them to the current set.
+// It returns a new Selection object containing the current Selection combined
+// with the filtered previous one.
+func (s *Selection) AddBackMatcher(m Matcher) *Selection {
+ return s.AddSelection(s.prevSel.FilterMatcher(m))
+}
diff --git a/vendor/github.com/PuerkitoBio/goquery/filter.go b/vendor/github.com/PuerkitoBio/goquery/filter.go
new file mode 100644
index 0000000..9138ffb
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/filter.go
@@ -0,0 +1,163 @@
+package goquery
+
+import "golang.org/x/net/html"
+
+// Filter reduces the set of matched elements to those that match the selector string.
+// It returns a new Selection object for this subset of matching elements.
+func (s *Selection) Filter(selector string) *Selection {
+ return s.FilterMatcher(compileMatcher(selector))
+}
+
+// FilterMatcher reduces the set of matched elements to those that match
+// the given matcher. It returns a new Selection object for this subset
+// of matching elements.
+func (s *Selection) FilterMatcher(m Matcher) *Selection {
+ return pushStack(s, winnow(s, m, true))
+}
+
+// Not removes elements from the Selection that match the selector string.
+// It returns a new Selection object with the matching elements removed.
+func (s *Selection) Not(selector string) *Selection {
+ return s.NotMatcher(compileMatcher(selector))
+}
+
+// NotMatcher removes elements from the Selection that match the given matcher.
+// It returns a new Selection object with the matching elements removed.
+func (s *Selection) NotMatcher(m Matcher) *Selection {
+ return pushStack(s, winnow(s, m, false))
+}
+
+// FilterFunction reduces the set of matched elements to those that pass the function's test.
+// It returns a new Selection object for this subset of elements.
+func (s *Selection) FilterFunction(f func(int, *Selection) bool) *Selection {
+ return pushStack(s, winnowFunction(s, f, true))
+}
+
+// NotFunction removes elements from the Selection that pass the function's test.
+// It returns a new Selection object with the matching elements removed.
+func (s *Selection) NotFunction(f func(int, *Selection) bool) *Selection {
+ return pushStack(s, winnowFunction(s, f, false))
+}
+
+// FilterNodes reduces the set of matched elements to those that match the specified nodes.
+// It returns a new Selection object for this subset of elements.
+func (s *Selection) FilterNodes(nodes ...*html.Node) *Selection {
+ return pushStack(s, winnowNodes(s, nodes, true))
+}
+
+// NotNodes removes elements from the Selection that match the specified nodes.
+// It returns a new Selection object with the matching elements removed.
+func (s *Selection) NotNodes(nodes ...*html.Node) *Selection {
+ return pushStack(s, winnowNodes(s, nodes, false))
+}
+
+// FilterSelection reduces the set of matched elements to those that match a
+// node in the specified Selection object.
+// It returns a new Selection object for this subset of elements.
+func (s *Selection) FilterSelection(sel *Selection) *Selection {
+ if sel == nil {
+ return pushStack(s, winnowNodes(s, nil, true))
+ }
+ return pushStack(s, winnowNodes(s, sel.Nodes, true))
+}
+
+// NotSelection removes elements from the Selection that match a node in the specified
+// Selection object. It returns a new Selection object with the matching elements removed.
+func (s *Selection) NotSelection(sel *Selection) *Selection {
+ if sel == nil {
+ return pushStack(s, winnowNodes(s, nil, false))
+ }
+ return pushStack(s, winnowNodes(s, sel.Nodes, false))
+}
+
+// Intersection is an alias for FilterSelection.
+func (s *Selection) Intersection(sel *Selection) *Selection {
+ return s.FilterSelection(sel)
+}
+
+// Has reduces the set of matched elements to those that have a descendant
+// that matches the selector.
+// It returns a new Selection object with the matching elements.
+func (s *Selection) Has(selector string) *Selection {
+ return s.HasSelection(s.document.Find(selector))
+}
+
+// HasMatcher reduces the set of matched elements to those that have a descendant
+// that matches the matcher.
+// It returns a new Selection object with the matching elements.
+func (s *Selection) HasMatcher(m Matcher) *Selection {
+ return s.HasSelection(s.document.FindMatcher(m))
+}
+
+// HasNodes reduces the set of matched elements to those that have a
+// descendant that matches one of the nodes.
+// It returns a new Selection object with the matching elements.
+func (s *Selection) HasNodes(nodes ...*html.Node) *Selection {
+ return s.FilterFunction(func(_ int, sel *Selection) bool {
+ // Add all nodes that contain one of the specified nodes
+ for _, n := range nodes {
+ if sel.Contains(n) {
+ return true
+ }
+ }
+ return false
+ })
+}
+
+// HasSelection reduces the set of matched elements to those that have a
+// descendant that matches one of the nodes of the specified Selection object.
+// It returns a new Selection object with the matching elements.
+func (s *Selection) HasSelection(sel *Selection) *Selection {
+ if sel == nil {
+ return s.HasNodes()
+ }
+ return s.HasNodes(sel.Nodes...)
+}
+
+// End ends the most recent filtering operation in the current chain and
+// returns the set of matched elements to its previous state.
+func (s *Selection) End() *Selection {
+ if s.prevSel != nil {
+ return s.prevSel
+ }
+ return newEmptySelection(s.document)
+}
+
+// Filter based on the matcher, and the indicator to keep (Filter) or
+// to get rid of (Not) the matching elements.
+func winnow(sel *Selection, m Matcher, keep bool) []*html.Node {
+ // Optimize if keep is requested
+ if keep {
+ return m.Filter(sel.Nodes)
+ }
+ // Use grep
+ return grep(sel, func(i int, s *Selection) bool {
+ return !m.Match(s.Get(0))
+ })
+}
+
+// Filter based on an array of nodes, and the indicator to keep (Filter) or
+// to get rid of (Not) the matching elements.
+func winnowNodes(sel *Selection, nodes []*html.Node, keep bool) []*html.Node {
+ if len(nodes)+len(sel.Nodes) < minNodesForSet {
+ return grep(sel, func(i int, s *Selection) bool {
+ return isInSlice(nodes, s.Get(0)) == keep
+ })
+ }
+
+ set := make(map[*html.Node]bool)
+ for _, n := range nodes {
+ set[n] = true
+ }
+ return grep(sel, func(i int, s *Selection) bool {
+ return set[s.Get(0)] == keep
+ })
+}
+
+// Filter based on a function test, and the indicator to keep (Filter) or
+// to get rid of (Not) the matching elements.
+func winnowFunction(sel *Selection, f func(int, *Selection) bool, keep bool) []*html.Node {
+ return grep(sel, func(i int, s *Selection) bool {
+ return f(i, s) == keep
+ })
+}
diff --git a/vendor/github.com/PuerkitoBio/goquery/iteration.go b/vendor/github.com/PuerkitoBio/goquery/iteration.go
new file mode 100644
index 0000000..e246f2e
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/iteration.go
@@ -0,0 +1,39 @@
+package goquery
+
+// Each iterates over a Selection object, executing a function for each
+// matched element. It returns the current Selection object. The function
+// f is called for each element in the selection with the index of the
+// element in that selection starting at 0, and a *Selection that contains
+// only that element.
+func (s *Selection) Each(f func(int, *Selection)) *Selection {
+ for i, n := range s.Nodes {
+ f(i, newSingleSelection(n, s.document))
+ }
+ return s
+}
+
+// EachWithBreak iterates over a Selection object, executing a function for each
+// matched element. It is identical to Each except that it is possible to break
+// out of the loop by returning false in the callback function. It returns the
+// current Selection object.
+func (s *Selection) EachWithBreak(f func(int, *Selection) bool) *Selection {
+ for i, n := range s.Nodes {
+ if !f(i, newSingleSelection(n, s.document)) {
+ return s
+ }
+ }
+ return s
+}
+
+// Map passes each element in the current matched set through a function,
+// producing a slice of string holding the returned values. The function
+// f is called for each element in the selection with the index of the
+// element in that selection starting at 0, and a *Selection that contains
+// only that element.
+func (s *Selection) Map(f func(int, *Selection) string) (result []string) {
+ for i, n := range s.Nodes {
+ result = append(result, f(i, newSingleSelection(n, s.document)))
+ }
+
+ return result
+}
diff --git a/vendor/github.com/PuerkitoBio/goquery/manipulation.go b/vendor/github.com/PuerkitoBio/goquery/manipulation.go
new file mode 100644
index 0000000..34eb757
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/manipulation.go
@@ -0,0 +1,574 @@
+package goquery
+
+import (
+ "strings"
+
+ "golang.org/x/net/html"
+)
+
+// After applies the selector from the root document and inserts the matched elements
+// after the elements in the set of matched elements.
+//
+// If one of the matched elements in the selection is not currently in the
+// document, it's impossible to insert nodes after it, so it will be ignored.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) After(selector string) *Selection {
+ return s.AfterMatcher(compileMatcher(selector))
+}
+
+// AfterMatcher applies the matcher from the root document and inserts the matched elements
+// after the elements in the set of matched elements.
+//
+// If one of the matched elements in the selection is not currently in the
+// document, it's impossible to insert nodes after it, so it will be ignored.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AfterMatcher(m Matcher) *Selection {
+ return s.AfterNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// AfterSelection inserts the elements in the selection after each element in the set of matched
+// elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AfterSelection(sel *Selection) *Selection {
+ return s.AfterNodes(sel.Nodes...)
+}
+
+// AfterHtml parses the html and inserts it after the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AfterHtml(html string) *Selection {
+ return s.AfterNodes(parseHtml(html)...)
+}
+
+// AfterNodes inserts the nodes after each element in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AfterNodes(ns ...*html.Node) *Selection {
+ return s.manipulateNodes(ns, true, func(sn *html.Node, n *html.Node) {
+ if sn.Parent != nil {
+ sn.Parent.InsertBefore(n, sn.NextSibling)
+ }
+ })
+}
+
+// Append appends the elements specified by the selector to the end of each element
+// in the set of matched elements, following those rules:
+//
+// 1) The selector is applied to the root document.
+//
+// 2) Elements that are part of the document will be moved to the new location.
+//
+// 3) If there are multiple locations to append to, cloned nodes will be
+// appended to all target locations except the last one, which will be moved
+// as noted in (2).
+func (s *Selection) Append(selector string) *Selection {
+ return s.AppendMatcher(compileMatcher(selector))
+}
+
+// AppendMatcher appends the elements specified by the matcher to the end of each element
+// in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AppendMatcher(m Matcher) *Selection {
+ return s.AppendNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// AppendSelection appends the elements in the selection to the end of each element
+// in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AppendSelection(sel *Selection) *Selection {
+ return s.AppendNodes(sel.Nodes...)
+}
+
+// AppendHtml parses the html and appends it to the set of matched elements.
+func (s *Selection) AppendHtml(html string) *Selection {
+ return s.AppendNodes(parseHtml(html)...)
+}
+
+// AppendNodes appends the specified nodes to each node in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) AppendNodes(ns ...*html.Node) *Selection {
+ return s.manipulateNodes(ns, false, func(sn *html.Node, n *html.Node) {
+ sn.AppendChild(n)
+ })
+}
+
+// Before inserts the matched elements before each element in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) Before(selector string) *Selection {
+ return s.BeforeMatcher(compileMatcher(selector))
+}
+
+// BeforeMatcher inserts the matched elements before each element in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) BeforeMatcher(m Matcher) *Selection {
+ return s.BeforeNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// BeforeSelection inserts the elements in the selection before each element in the set of matched
+// elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) BeforeSelection(sel *Selection) *Selection {
+ return s.BeforeNodes(sel.Nodes...)
+}
+
+// BeforeHtml parses the html and inserts it before the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) BeforeHtml(html string) *Selection {
+ return s.BeforeNodes(parseHtml(html)...)
+}
+
+// BeforeNodes inserts the nodes before each element in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) BeforeNodes(ns ...*html.Node) *Selection {
+ return s.manipulateNodes(ns, false, func(sn *html.Node, n *html.Node) {
+ if sn.Parent != nil {
+ sn.Parent.InsertBefore(n, sn)
+ }
+ })
+}
+
+// Clone creates a deep copy of the set of matched nodes. The new nodes will not be
+// attached to the document.
+func (s *Selection) Clone() *Selection {
+ ns := newEmptySelection(s.document)
+ ns.Nodes = cloneNodes(s.Nodes)
+ return ns
+}
+
+// Empty removes all children nodes from the set of matched elements.
+// It returns the children nodes in a new Selection.
+func (s *Selection) Empty() *Selection {
+ var nodes []*html.Node
+
+ for _, n := range s.Nodes {
+ for c := n.FirstChild; c != nil; c = n.FirstChild {
+ n.RemoveChild(c)
+ nodes = append(nodes, c)
+ }
+ }
+
+ return pushStack(s, nodes)
+}
+
+// Prepend prepends the elements specified by the selector to each element in
+// the set of matched elements, following the same rules as Append.
+func (s *Selection) Prepend(selector string) *Selection {
+ return s.PrependMatcher(compileMatcher(selector))
+}
+
+// PrependMatcher prepends the elements specified by the matcher to each
+// element in the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) PrependMatcher(m Matcher) *Selection {
+ return s.PrependNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// PrependSelection prepends the elements in the selection to each element in
+// the set of matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) PrependSelection(sel *Selection) *Selection {
+ return s.PrependNodes(sel.Nodes...)
+}
+
+// PrependHtml parses the html and prepends it to the set of matched elements.
+func (s *Selection) PrependHtml(html string) *Selection {
+ return s.PrependNodes(parseHtml(html)...)
+}
+
+// PrependNodes prepends the specified nodes to each node in the set of
+// matched elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) PrependNodes(ns ...*html.Node) *Selection {
+ return s.manipulateNodes(ns, true, func(sn *html.Node, n *html.Node) {
+ // sn.FirstChild may be nil, in which case this functions like
+ // sn.AppendChild()
+ sn.InsertBefore(n, sn.FirstChild)
+ })
+}
+
+// Remove removes the set of matched elements from the document.
+// It returns the same selection, now consisting of nodes not in the document.
+func (s *Selection) Remove() *Selection {
+ for _, n := range s.Nodes {
+ if n.Parent != nil {
+ n.Parent.RemoveChild(n)
+ }
+ }
+
+ return s
+}
+
+// RemoveFiltered removes the set of matched elements by selector.
+// It returns the Selection of removed nodes.
+func (s *Selection) RemoveFiltered(selector string) *Selection {
+ return s.RemoveMatcher(compileMatcher(selector))
+}
+
+// RemoveMatcher removes the set of matched elements.
+// It returns the Selection of removed nodes.
+func (s *Selection) RemoveMatcher(m Matcher) *Selection {
+ return s.FilterMatcher(m).Remove()
+}
+
+// ReplaceWith replaces each element in the set of matched elements with the
+// nodes matched by the given selector.
+// It returns the removed elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) ReplaceWith(selector string) *Selection {
+ return s.ReplaceWithMatcher(compileMatcher(selector))
+}
+
+// ReplaceWithMatcher replaces each element in the set of matched elements with
+// the nodes matched by the given Matcher.
+// It returns the removed elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) ReplaceWithMatcher(m Matcher) *Selection {
+ return s.ReplaceWithNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// ReplaceWithSelection replaces each element in the set of matched elements with
+// the nodes from the given Selection.
+// It returns the removed elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) ReplaceWithSelection(sel *Selection) *Selection {
+ return s.ReplaceWithNodes(sel.Nodes...)
+}
+
+// ReplaceWithHtml replaces each element in the set of matched elements with
+// the parsed HTML.
+// It returns the removed elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) ReplaceWithHtml(html string) *Selection {
+ return s.ReplaceWithNodes(parseHtml(html)...)
+}
+
+// ReplaceWithNodes replaces each element in the set of matched elements with
+// the given nodes.
+// It returns the removed elements.
+//
+// This follows the same rules as Selection.Append.
+func (s *Selection) ReplaceWithNodes(ns ...*html.Node) *Selection {
+ s.AfterNodes(ns...)
+ return s.Remove()
+}
+
+// SetHtml sets the html content of each element in the selection to the
+// specified html string.
+func (s *Selection) SetHtml(html string) *Selection {
+ return setHtmlNodes(s, parseHtml(html)...)
+}
+
+// SetText sets the content of each element in the selection to the specified text.
+// The provided text string is escaped.
+func (s *Selection) SetText(text string) *Selection {
+ return s.SetHtml(html.EscapeString(text))
+}
+
+// Unwrap removes the parents of the set of matched elements, leaving the matched
+// elements (and their siblings, if any) in their place.
+// It returns the original selection.
+func (s *Selection) Unwrap() *Selection {
+ s.Parent().Each(func(i int, ss *Selection) {
+ // For some reason, jquery allows unwrap to remove the <head> element, so
+ // allowing it here too. Same for <html>. Why it allows those elements to
+ // be unwrapped while not allowing body is a mystery to me.
+ if ss.Nodes[0].Data != "body" {
+ ss.ReplaceWithSelection(ss.Contents())
+ }
+ })
+
+ return s
+}
+
+// Wrap wraps each element in the set of matched elements inside the first
+// element matched by the given selector. The matched child is cloned before
+// being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) Wrap(selector string) *Selection {
+ return s.WrapMatcher(compileMatcher(selector))
+}
+
+// WrapMatcher wraps each element in the set of matched elements inside the
+// first element matched by the given matcher. The matched child is cloned
+// before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapMatcher(m Matcher) *Selection {
+ return s.wrapNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// WrapSelection wraps each element in the set of matched elements inside the
+// first element in the given Selection. The element is cloned before being
+// inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapSelection(sel *Selection) *Selection {
+ return s.wrapNodes(sel.Nodes...)
+}
+
+// WrapHtml wraps each element in the set of matched elements inside the
+// innermost child of the given HTML.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapHtml(html string) *Selection {
+ return s.wrapNodes(parseHtml(html)...)
+}
+
+// WrapNode wraps each element in the set of matched elements inside the
+// innermost child of the given node. The given node is copied before being inserted
+// into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapNode(n *html.Node) *Selection {
+ return s.wrapNodes(n)
+}
+
+func (s *Selection) wrapNodes(ns ...*html.Node) *Selection {
+ s.Each(func(i int, ss *Selection) {
+ ss.wrapAllNodes(ns...)
+ })
+
+ return s
+}
+
+// WrapAll wraps a single HTML structure, matched by the given selector, around
+// all elements in the set of matched elements. The matched child is cloned
+// before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapAll(selector string) *Selection {
+ return s.WrapAllMatcher(compileMatcher(selector))
+}
+
+// WrapAllMatcher wraps a single HTML structure, matched by the given Matcher,
+// around all elements in the set of matched elements. The matched child is
+// cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapAllMatcher(m Matcher) *Selection {
+ return s.wrapAllNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// WrapAllSelection wraps a single HTML structure, the first node of the given
+// Selection, around all elements in the set of matched elements. The matched
+// child is cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapAllSelection(sel *Selection) *Selection {
+ return s.wrapAllNodes(sel.Nodes...)
+}
+
+// WrapAllHtml wraps the given HTML structure around all elements in the set of
+// matched elements. The matched child is cloned before being inserted into the
+// document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapAllHtml(html string) *Selection {
+ return s.wrapAllNodes(parseHtml(html)...)
+}
+
+func (s *Selection) wrapAllNodes(ns ...*html.Node) *Selection {
+ if len(ns) > 0 {
+ return s.WrapAllNode(ns[0])
+ }
+ return s
+}
+
+// WrapAllNode wraps the given node around the first element in the Selection,
+// making all other nodes in the Selection children of the given node. The node
+// is cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapAllNode(n *html.Node) *Selection {
+ if s.Size() == 0 {
+ return s
+ }
+
+ wrap := cloneNode(n)
+
+ first := s.Nodes[0]
+ if first.Parent != nil {
+ first.Parent.InsertBefore(wrap, first)
+ first.Parent.RemoveChild(first)
+ }
+
+ for c := getFirstChildEl(wrap); c != nil; c = getFirstChildEl(wrap) {
+ wrap = c
+ }
+
+ newSingleSelection(wrap, s.document).AppendSelection(s)
+
+ return s
+}
+
+// WrapInner wraps an HTML structure, matched by the given selector, around the
+// content of each element in the set of matched elements. The matched child is
+// cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapInner(selector string) *Selection {
+ return s.WrapInnerMatcher(compileMatcher(selector))
+}
+
+// WrapInnerMatcher wraps an HTML structure, matched by the given matcher,
+// around the content of each element in the set of matched elements. The matched
+// child is cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapInnerMatcher(m Matcher) *Selection {
+ return s.wrapInnerNodes(m.MatchAll(s.document.rootNode)...)
+}
+
+// WrapInnerSelection wraps the first node of the given Selection around the
+// content of each element in the set of matched elements. The matched
+// child is cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapInnerSelection(sel *Selection) *Selection {
+ return s.wrapInnerNodes(sel.Nodes...)
+}
+
+// WrapInnerHtml wraps the HTML structure parsed from the given string around
+// the content of each element in the set of matched elements. The matched child is
+// cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapInnerHtml(html string) *Selection {
+ return s.wrapInnerNodes(parseHtml(html)...)
+}
+
+// WrapInnerNode wraps the given node around the content of each element in
+// the set of matched elements. The given node is
+// cloned before being inserted into the document.
+//
+// It returns the original set of elements.
+func (s *Selection) WrapInnerNode(n *html.Node) *Selection {
+ return s.wrapInnerNodes(n)
+}
+
+func (s *Selection) wrapInnerNodes(ns ...*html.Node) *Selection {
+ if len(ns) == 0 {
+ return s
+ }
+
+ s.Each(func(i int, s *Selection) {
+ contents := s.Contents()
+
+ if contents.Size() > 0 {
+ contents.wrapAllNodes(ns...)
+ } else {
+ s.AppendNodes(cloneNode(ns[0]))
+ }
+ })
+
+ return s
+}
+
+func parseHtml(h string) []*html.Node {
+ // Errors are only returned when the io.Reader returns any error besides
+ // EOF, but strings.Reader never will
+ nodes, err := html.ParseFragment(strings.NewReader(h), &html.Node{Type: html.ElementNode})
+ if err != nil {
+ panic("goquery: failed to parse HTML: " + err.Error())
+ }
+ return nodes
+}
+
+func setHtmlNodes(s *Selection, ns ...*html.Node) *Selection {
+ for _, n := range s.Nodes {
+ for c := n.FirstChild; c != nil; c = n.FirstChild {
+ n.RemoveChild(c)
+ }
+ for _, c := range ns {
+ n.AppendChild(cloneNode(c))
+ }
+ }
+ return s
+}
+
+// Get the first child that is an ElementNode
+func getFirstChildEl(n *html.Node) *html.Node {
+ c := n.FirstChild
+ for c != nil && c.Type != html.ElementNode {
+ c = c.NextSibling
+ }
+ return c
+}
+
+// Deep copy a slice of nodes.
+func cloneNodes(ns []*html.Node) []*html.Node {
+ cns := make([]*html.Node, 0, len(ns))
+
+ for _, n := range ns {
+ cns = append(cns, cloneNode(n))
+ }
+
+ return cns
+}
+
+// Deep copy a node. The new node has clones of all the original node's
+// children but none of its parents or siblings.
+func cloneNode(n *html.Node) *html.Node {
+ nn := &html.Node{
+ Type: n.Type,
+ DataAtom: n.DataAtom,
+ Data: n.Data,
+ Attr: make([]html.Attribute, len(n.Attr)),
+ }
+
+ copy(nn.Attr, n.Attr)
+ for c := n.FirstChild; c != nil; c = c.NextSibling {
+ nn.AppendChild(cloneNode(c))
+ }
+
+ return nn
+}
+
+func (s *Selection) manipulateNodes(ns []*html.Node, reverse bool,
+ f func(sn *html.Node, n *html.Node)) *Selection {
+
+ lasti := s.Size() - 1
+
+ // The x/net/html package doesn't provide document fragments for insertion, so to get
+ // things in the correct order with After() and Prepend(), the callback
+ // needs to be called on the reverse of the nodes.
+ if reverse {
+ for i, j := 0, len(ns)-1; i < j; i, j = i+1, j-1 {
+ ns[i], ns[j] = ns[j], ns[i]
+ }
+ }
+
+ for i, sn := range s.Nodes {
+ for _, n := range ns {
+ if i != lasti {
+ f(sn, cloneNode(n))
+ } else {
+ if n.Parent != nil {
+ n.Parent.RemoveChild(n)
+ }
+ f(sn, n)
+ }
+ }
+ }
+
+ return s
+}
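`manipulateNodes` clones the inserted nodes for every target except the last, which receives (and thus moves) the originals. A toy sketch of that clone-all-but-last rule on a minimal tree type — not the vendored code; `node` here is a stand-in for `*html.Node`:

```go
package main

import "fmt"

// A toy tree node standing in for *html.Node in this sketch.
type node struct {
	data     string
	children []*node
}

// clone deep-copies a node and its children.
func clone(n *node) *node {
	nn := &node{data: n.data}
	for _, c := range n.children {
		nn.children = append(nn.children, clone(c))
	}
	return nn
}

// appendToAll mirrors manipulateNodes: every target but the last
// receives a clone, and the last receives (moves) the original.
func appendToAll(targets []*node, n *node) {
	lasti := len(targets) - 1
	for i, t := range targets {
		if i != lasti {
			t.children = append(t.children, clone(n))
		} else {
			t.children = append(t.children, n)
		}
	}
}

func main() {
	a, b := &node{data: "a"}, &node{data: "b"}
	payload := &node{data: "p"}
	appendToAll([]*node{a, b}, payload)
	// a got a distinct clone; b got the original node itself.
	fmt.Println(a.children[0] == payload, b.children[0] == payload) // false true
}
```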
diff --git a/vendor/github.com/PuerkitoBio/goquery/property.go b/vendor/github.com/PuerkitoBio/goquery/property.go
new file mode 100644
index 0000000..411126d
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/property.go
@@ -0,0 +1,275 @@
+package goquery
+
+import (
+ "bytes"
+ "regexp"
+ "strings"
+
+ "golang.org/x/net/html"
+)
+
+var rxClassTrim = regexp.MustCompile("[\t\r\n]")
+
+// Attr gets the specified attribute's value for the first element in the
+// Selection. To get the value for each element individually, use a looping
+// construct such as the Each or Map method.
+func (s *Selection) Attr(attrName string) (val string, exists bool) {
+ if len(s.Nodes) == 0 {
+ return
+ }
+ return getAttributeValue(attrName, s.Nodes[0])
+}
+
+// AttrOr works like Attr but returns the default value if the attribute is not present.
+func (s *Selection) AttrOr(attrName, defaultValue string) string {
+ if len(s.Nodes) == 0 {
+ return defaultValue
+ }
+
+ val, exists := getAttributeValue(attrName, s.Nodes[0])
+ if !exists {
+ return defaultValue
+ }
+
+ return val
+}
+
+// RemoveAttr removes the named attribute from each element in the set of matched elements.
+func (s *Selection) RemoveAttr(attrName string) *Selection {
+ for _, n := range s.Nodes {
+ removeAttr(n, attrName)
+ }
+
+ return s
+}
+
+// SetAttr sets the given attribute on each element in the set of matched elements.
+func (s *Selection) SetAttr(attrName, val string) *Selection {
+ for _, n := range s.Nodes {
+ attr := getAttributePtr(attrName, n)
+ if attr == nil {
+ n.Attr = append(n.Attr, html.Attribute{Key: attrName, Val: val})
+ } else {
+ attr.Val = val
+ }
+ }
+
+ return s
+}
+
+// Text gets the combined text contents of each element in the set of matched
+// elements, including their descendants.
+func (s *Selection) Text() string {
+ var buf bytes.Buffer
+
+ // Slightly optimized vs calling Each: no single selection object created
+ var f func(*html.Node)
+ f = func(n *html.Node) {
+ if n.Type == html.TextNode {
+ // Keep newlines and spaces, like jQuery
+ buf.WriteString(n.Data)
+ }
+ if n.FirstChild != nil {
+ for c := n.FirstChild; c != nil; c = c.NextSibling {
+ f(c)
+ }
+ }
+ }
+ for _, n := range s.Nodes {
+ f(n)
+ }
+
+ return buf.String()
+}
+
+// Size is an alias for Length.
+func (s *Selection) Size() int {
+ return s.Length()
+}
+
+// Length returns the number of elements in the Selection object.
+func (s *Selection) Length() int {
+ return len(s.Nodes)
+}
+
+// Html gets the HTML contents of the first element in the set of matched
+// elements. It includes text and comment nodes.
+func (s *Selection) Html() (ret string, e error) {
+ // Since there is no .innerHtml, the HTML content must be re-created from
+ // the nodes using html.Render.
+ var buf bytes.Buffer
+
+ if len(s.Nodes) > 0 {
+ for c := s.Nodes[0].FirstChild; c != nil; c = c.NextSibling {
+ e = html.Render(&buf, c)
+ if e != nil {
+ return
+ }
+ }
+ ret = buf.String()
+ }
+
+ return
+}
+
+// AddClass adds the given class(es) to each element in the set of matched elements.
+// Multiple class names can be specified, separated by a space or via multiple arguments.
+func (s *Selection) AddClass(class ...string) *Selection {
+ classStr := strings.TrimSpace(strings.Join(class, " "))
+
+ if classStr == "" {
+ return s
+ }
+
+ tcls := getClassesSlice(classStr)
+ for _, n := range s.Nodes {
+ curClasses, attr := getClassesAndAttr(n, true)
+ for _, newClass := range tcls {
+ if !strings.Contains(curClasses, " "+newClass+" ") {
+ curClasses += newClass + " "
+ }
+ }
+
+ setClasses(n, attr, curClasses)
+ }
+
+ return s
+}
+
+// HasClass determines whether any of the matched elements are assigned the
+// given class.
+func (s *Selection) HasClass(class string) bool {
+ class = " " + class + " "
+ for _, n := range s.Nodes {
+ classes, _ := getClassesAndAttr(n, false)
+ if strings.Contains(classes, class) {
+ return true
+ }
+ }
+ return false
+}
+
+// RemoveClass removes the given class(es) from each element in the set of matched elements.
+// Multiple class names can be specified, separated by a space or via multiple arguments.
+// If no class name is provided, all classes are removed.
+func (s *Selection) RemoveClass(class ...string) *Selection {
+ var rclasses []string
+
+ classStr := strings.TrimSpace(strings.Join(class, " "))
+ remove := classStr == ""
+
+ if !remove {
+ rclasses = getClassesSlice(classStr)
+ }
+
+ for _, n := range s.Nodes {
+ if remove {
+ removeAttr(n, "class")
+ } else {
+ classes, attr := getClassesAndAttr(n, true)
+ for _, rcl := range rclasses {
+ classes = strings.Replace(classes, " "+rcl+" ", " ", -1)
+ }
+
+ setClasses(n, attr, classes)
+ }
+ }
+
+ return s
+}
+
+// ToggleClass adds or removes the given class(es) for each element in the set of matched elements.
+// Multiple class names can be specified, separated by a space or via multiple arguments.
+func (s *Selection) ToggleClass(class ...string) *Selection {
+ classStr := strings.TrimSpace(strings.Join(class, " "))
+
+ if classStr == "" {
+ return s
+ }
+
+ tcls := getClassesSlice(classStr)
+
+ for _, n := range s.Nodes {
+ classes, attr := getClassesAndAttr(n, true)
+ for _, tcl := range tcls {
+ if strings.Contains(classes, " "+tcl+" ") {
+ classes = strings.Replace(classes, " "+tcl+" ", " ", -1)
+ } else {
+ classes += tcl + " "
+ }
+ }
+
+ setClasses(n, attr, classes)
+ }
+
+ return s
+}
+
+func getAttributePtr(attrName string, n *html.Node) *html.Attribute {
+ if n == nil {
+ return nil
+ }
+
+ for i, a := range n.Attr {
+ if a.Key == attrName {
+ return &n.Attr[i]
+ }
+ }
+ return nil
+}
+
+// Private function to get the specified attribute's value from a node.
+func getAttributeValue(attrName string, n *html.Node) (val string, exists bool) {
+ if a := getAttributePtr(attrName, n); a != nil {
+ val = a.Val
+ exists = true
+ }
+ return
+}
+
+// Get and normalize the "class" attribute from the node.
+func getClassesAndAttr(n *html.Node, create bool) (classes string, attr *html.Attribute) {
+ // Applies only to element nodes
+ if n.Type == html.ElementNode {
+ attr = getAttributePtr("class", n)
+ if attr == nil && create {
+ n.Attr = append(n.Attr, html.Attribute{
+ Key: "class",
+ Val: "",
+ })
+ attr = &n.Attr[len(n.Attr)-1]
+ }
+ }
+
+ if attr == nil {
+ classes = " "
+ } else {
+ classes = rxClassTrim.ReplaceAllString(" "+attr.Val+" ", " ")
+ }
+
+ return
+}
+
+func getClassesSlice(classes string) []string {
+ return strings.Split(rxClassTrim.ReplaceAllString(" "+classes+" ", " "), " ")
+}
+
+func removeAttr(n *html.Node, attrName string) {
+ for i, a := range n.Attr {
+ if a.Key == attrName {
+ n.Attr[i], n.Attr[len(n.Attr)-1], n.Attr =
+ n.Attr[len(n.Attr)-1], html.Attribute{}, n.Attr[:len(n.Attr)-1]
+ return
+ }
+ }
+}
+
+func setClasses(n *html.Node, attr *html.Attribute, classes string) {
+ classes = strings.TrimSpace(classes)
+ if classes == "" {
+ removeAttr(n, "class")
+ return
+ }
+
+ attr.Val = classes
+}
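The class helpers above normalize the `class` attribute by padding it with spaces and collapsing tabs and newlines to spaces, so class membership becomes a plain substring check for `" class "`. A self-contained sketch of that trick, reusing the same regexp (illustrative only, not the vendored code):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var rxClassTrim = regexp.MustCompile("[\t\r\n]")

// normalizeClasses pads and normalizes a class attribute value so that
// membership can be tested with a plain substring search.
func normalizeClasses(val string) string {
	return rxClassTrim.ReplaceAllString(" "+val+" ", " ")
}

// hasClass reports whether class appears as a whole word in attrVal;
// the surrounding spaces prevent partial matches like "prim" in
// "btn-primary".
func hasClass(attrVal, class string) bool {
	return strings.Contains(normalizeClasses(attrVal), " "+class+" ")
}

func main() {
	fmt.Println(hasClass("btn btn-primary", "btn"))  // true
	fmt.Println(hasClass("btn btn-primary", "prim")) // false
}
```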
diff --git a/vendor/github.com/PuerkitoBio/goquery/query.go b/vendor/github.com/PuerkitoBio/goquery/query.go
new file mode 100644
index 0000000..fe86bf0
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/query.go
@@ -0,0 +1,49 @@
+package goquery
+
+import "golang.org/x/net/html"
+
+// Is checks the current matched set of elements against a selector and
+// returns true if at least one of these elements matches.
+func (s *Selection) Is(selector string) bool {
+ return s.IsMatcher(compileMatcher(selector))
+}
+
+// IsMatcher checks the current matched set of elements against a matcher and
+// returns true if at least one of these elements matches.
+func (s *Selection) IsMatcher(m Matcher) bool {
+ if len(s.Nodes) > 0 {
+ if len(s.Nodes) == 1 {
+ return m.Match(s.Nodes[0])
+ }
+ return len(m.Filter(s.Nodes)) > 0
+ }
+
+ return false
+}
+
+// IsFunction checks the current matched set of elements against a predicate and
+// returns true if at least one of these elements matches.
+func (s *Selection) IsFunction(f func(int, *Selection) bool) bool {
+ return s.FilterFunction(f).Length() > 0
+}
+
+// IsSelection checks the current matched set of elements against a Selection object
+// and returns true if at least one of these elements matches.
+func (s *Selection) IsSelection(sel *Selection) bool {
+ return s.FilterSelection(sel).Length() > 0
+}
+
+// IsNodes checks the current matched set of elements against the specified nodes
+// and returns true if at least one of these elements matches.
+func (s *Selection) IsNodes(nodes ...*html.Node) bool {
+ return s.FilterNodes(nodes...).Length() > 0
+}
+
+// Contains returns true if the specified Node is within,
+// at any depth, one of the nodes in the Selection object.
+// It is NOT inclusive, to behave like jQuery's implementation, and
+// unlike JavaScript's .contains, so if the contained
+// node is itself in the selection, it returns false.
+func (s *Selection) Contains(n *html.Node) bool {
+ return sliceContains(s.Nodes, n)
+}
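As the comment above notes, `Contains` is non-inclusive: a node that is itself in the selection does not count, only strict descendants do. A minimal sketch of that rule on a toy tree — `node` here is a stand-in for `*html.Node`, not goquery's actual `sliceContains` helper:

```go
package main

import "fmt"

// A toy tree node standing in for *html.Node in this sketch.
type node struct {
	data     string
	children []*node
}

// containsNode reports whether target is a strict descendant of root,
// mirroring the non-inclusive Contains: root itself never matches.
func containsNode(root, target *node) bool {
	for _, c := range root.children {
		if c == target || containsNode(c, target) {
			return true
		}
	}
	return false
}

func main() {
	child := &node{data: "span"}
	root := &node{data: "div", children: []*node{child}}
	fmt.Println(containsNode(root, child)) // true
	fmt.Println(containsNode(root, root))  // false: non-inclusive
}
```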
diff --git a/vendor/github.com/PuerkitoBio/goquery/traversal.go b/vendor/github.com/PuerkitoBio/goquery/traversal.go
new file mode 100644
index 0000000..5fa5315
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/traversal.go
@@ -0,0 +1,698 @@
+package goquery
+
+import "golang.org/x/net/html"
+
+type siblingType int
+
+// Sibling type, used internally when iterating over children at the same
+// level (siblings) to specify which nodes are requested.
+const (
+ siblingPrevUntil siblingType = iota - 3
+ siblingPrevAll
+ siblingPrev
+ siblingAll
+ siblingNext
+ siblingNextAll
+ siblingNextUntil
+ siblingAllIncludingNonElements
+)
+
+// Find gets the descendants of each element in the current set of matched
+// elements, filtered by a selector. It returns a new Selection object
+// containing these matched elements.
+func (s *Selection) Find(selector string) *Selection {
+ return pushStack(s, findWithMatcher(s.Nodes, compileMatcher(selector)))
+}
+
+// FindMatcher gets the descendants of each element in the current set of matched
+// elements, filtered by the matcher. It returns a new Selection object
+// containing these matched elements.
+func (s *Selection) FindMatcher(m Matcher) *Selection {
+ return pushStack(s, findWithMatcher(s.Nodes, m))
+}
+
+// FindSelection gets the descendants of each element in the current
+// Selection, filtered by a Selection. It returns a new Selection object
+// containing these matched elements.
+func (s *Selection) FindSelection(sel *Selection) *Selection {
+ if sel == nil {
+ return pushStack(s, nil)
+ }
+ return s.FindNodes(sel.Nodes...)
+}
+
+// FindNodes gets the descendants of each element in the current
+// Selection, filtered by some nodes. It returns a new Selection object
+// containing these matched elements.
+func (s *Selection) FindNodes(nodes ...*html.Node) *Selection {
+ return pushStack(s, mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
+ if sliceContains(s.Nodes, n) {
+ return []*html.Node{n}
+ }
+ return nil
+ }))
+}
+
+// Contents gets the children of each element in the Selection,
+// including text and comment nodes. It returns a new Selection object
+// containing these elements.
+func (s *Selection) Contents() *Selection {
+ return pushStack(s, getChildrenNodes(s.Nodes, siblingAllIncludingNonElements))
+}
+
+// ContentsFiltered gets the children of each element in the Selection,
+// filtered by the specified selector. It returns a new Selection
+// object containing these elements. Since selectors only act on Element nodes,
+// this function is an alias to ChildrenFiltered unless the selector is empty,
+// in which case it is an alias to Contents.
+func (s *Selection) ContentsFiltered(selector string) *Selection {
+ if selector != "" {
+ return s.ChildrenFiltered(selector)
+ }
+ return s.Contents()
+}
+
+// ContentsMatcher gets the children of each element in the Selection,
+// filtered by the specified matcher. It returns a new Selection
+// object containing these elements. Since matchers only act on Element nodes,
+// this function is an alias to ChildrenMatcher.
+func (s *Selection) ContentsMatcher(m Matcher) *Selection {
+ return s.ChildrenMatcher(m)
+}
+
+// Children gets the child elements of each element in the Selection.
+// It returns a new Selection object containing these elements.
+func (s *Selection) Children() *Selection {
+ return pushStack(s, getChildrenNodes(s.Nodes, siblingAll))
+}
+
+// ChildrenFiltered gets the child elements of each element in the Selection,
+// filtered by the specified selector. It returns a new
+// Selection object containing these elements.
+func (s *Selection) ChildrenFiltered(selector string) *Selection {
+ return filterAndPush(s, getChildrenNodes(s.Nodes, siblingAll), compileMatcher(selector))
+}
+
+// ChildrenMatcher gets the child elements of each element in the Selection,
+// filtered by the specified matcher. It returns a new
+// Selection object containing these elements.
+func (s *Selection) ChildrenMatcher(m Matcher) *Selection {
+ return filterAndPush(s, getChildrenNodes(s.Nodes, siblingAll), m)
+}
+
+// Parent gets the parent of each element in the Selection. It returns a
+// new Selection object containing the matched elements.
+func (s *Selection) Parent() *Selection {
+ return pushStack(s, getParentNodes(s.Nodes))
+}
+
+// ParentFiltered gets the parent of each element in the Selection filtered by a
+// selector. It returns a new Selection object containing the matched elements.
+func (s *Selection) ParentFiltered(selector string) *Selection {
+ return filterAndPush(s, getParentNodes(s.Nodes), compileMatcher(selector))
+}
+
+// ParentMatcher gets the parent of each element in the Selection filtered by a
+// matcher. It returns a new Selection object containing the matched elements.
+func (s *Selection) ParentMatcher(m Matcher) *Selection {
+ return filterAndPush(s, getParentNodes(s.Nodes), m)
+}
+
+// Closest gets the first element that matches the selector by testing the
+// element itself and traversing up through its ancestors in the DOM tree.
+func (s *Selection) Closest(selector string) *Selection {
+ cs := compileMatcher(selector)
+ return s.ClosestMatcher(cs)
+}
+
+// ClosestMatcher gets the first element that matches the matcher by testing the
+// element itself and traversing up through its ancestors in the DOM tree.
+func (s *Selection) ClosestMatcher(m Matcher) *Selection {
+ return pushStack(s, mapNodes(s.Nodes, func(i int, n *html.Node) []*html.Node {
+ // For each node in the selection, test the node itself, then each parent
+ // until a match is found.
+ for ; n != nil; n = n.Parent {
+ if m.Match(n) {
+ return []*html.Node{n}
+ }
+ }
+ return nil
+ }))
+}
+
+// ClosestNodes gets the first element that matches one of the nodes by testing the
+// element itself and traversing up through its ancestors in the DOM tree.
+func (s *Selection) ClosestNodes(nodes ...*html.Node) *Selection {
+ set := make(map[*html.Node]bool)
+ for _, n := range nodes {
+ set[n] = true
+ }
+ return pushStack(s, mapNodes(s.Nodes, func(i int, n *html.Node) []*html.Node {
+ // For each node in the selection, test the node itself, then each parent
+ // until a match is found.
+ for ; n != nil; n = n.Parent {
+ if set[n] {
+ return []*html.Node{n}
+ }
+ }
+ return nil
+ }))
+}
+
+// ClosestSelection gets the first element that matches one of the nodes in the
+// Selection by testing the element itself and traversing up through its ancestors
+// in the DOM tree.
+func (s *Selection) ClosestSelection(sel *Selection) *Selection {
+ if sel == nil {
+ return pushStack(s, nil)
+ }
+ return s.ClosestNodes(sel.Nodes...)
+}
+
+// Parents gets the ancestors of each element in the current Selection. It
+// returns a new Selection object with the matched elements.
+func (s *Selection) Parents() *Selection {
+ return pushStack(s, getParentsNodes(s.Nodes, nil, nil))
+}
+
+// ParentsFiltered gets the ancestors of each element in the current
+// Selection. It returns a new Selection object with the matched elements.
+func (s *Selection) ParentsFiltered(selector string) *Selection {
+ return filterAndPush(s, getParentsNodes(s.Nodes, nil, nil), compileMatcher(selector))
+}
+
+// ParentsMatcher gets the ancestors of each element in the current
+// Selection. It returns a new Selection object with the matched elements.
+func (s *Selection) ParentsMatcher(m Matcher) *Selection {
+ return filterAndPush(s, getParentsNodes(s.Nodes, nil, nil), m)
+}
+
+// ParentsUntil gets the ancestors of each element in the Selection, up to but
+// not including the element matched by the selector. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) ParentsUntil(selector string) *Selection {
+ return pushStack(s, getParentsNodes(s.Nodes, compileMatcher(selector), nil))
+}
+
+// ParentsUntilMatcher gets the ancestors of each element in the Selection, up to but
+// not including the element matched by the matcher. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) ParentsUntilMatcher(m Matcher) *Selection {
+ return pushStack(s, getParentsNodes(s.Nodes, m, nil))
+}
+
+// ParentsUntilSelection gets the ancestors of each element in the Selection,
+// up to but not including the elements in the specified Selection. It returns a
+// new Selection object containing the matched elements.
+func (s *Selection) ParentsUntilSelection(sel *Selection) *Selection {
+ if sel == nil {
+ return s.Parents()
+ }
+ return s.ParentsUntilNodes(sel.Nodes...)
+}
+
+// ParentsUntilNodes gets the ancestors of each element in the Selection,
+// up to but not including the specified nodes. It returns a
+// new Selection object containing the matched elements.
+func (s *Selection) ParentsUntilNodes(nodes ...*html.Node) *Selection {
+ return pushStack(s, getParentsNodes(s.Nodes, nil, nodes))
+}
+
+// ParentsFilteredUntil is like ParentsUntil, with the option to filter the
+// results based on a selector string. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) ParentsFilteredUntil(filterSelector, untilSelector string) *Selection {
+ return filterAndPush(s, getParentsNodes(s.Nodes, compileMatcher(untilSelector), nil), compileMatcher(filterSelector))
+}
+
+// ParentsFilteredUntilMatcher is like ParentsUntilMatcher, with the option to filter the
+// results based on a matcher. It returns a new Selection object containing the matched elements.
+func (s *Selection) ParentsFilteredUntilMatcher(filter, until Matcher) *Selection {
+ return filterAndPush(s, getParentsNodes(s.Nodes, until, nil), filter)
+}
+
+// ParentsFilteredUntilSelection is like ParentsUntilSelection, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) ParentsFilteredUntilSelection(filterSelector string, sel *Selection) *Selection {
+ return s.ParentsMatcherUntilSelection(compileMatcher(filterSelector), sel)
+}
+
+// ParentsMatcherUntilSelection is like ParentsUntilSelection, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) ParentsMatcherUntilSelection(filter Matcher, sel *Selection) *Selection {
+ if sel == nil {
+ return s.ParentsMatcher(filter)
+ }
+ return s.ParentsMatcherUntilNodes(filter, sel.Nodes...)
+}
+
+// ParentsFilteredUntilNodes is like ParentsUntilNodes, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) ParentsFilteredUntilNodes(filterSelector string, nodes ...*html.Node) *Selection {
+ return filterAndPush(s, getParentsNodes(s.Nodes, nil, nodes), compileMatcher(filterSelector))
+}
+
+// ParentsMatcherUntilNodes is like ParentsUntilNodes, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) ParentsMatcherUntilNodes(filter Matcher, nodes ...*html.Node) *Selection {
+ return filterAndPush(s, getParentsNodes(s.Nodes, nil, nodes), filter)
+}
+
+// Siblings gets the siblings of each element in the Selection. It returns
+// a new Selection object containing the matched elements.
+func (s *Selection) Siblings() *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingAll, nil, nil))
+}
+
+// SiblingsFiltered gets the siblings of each element in the Selection
+// filtered by a selector. It returns a new Selection object containing the
+// matched elements.
+func (s *Selection) SiblingsFiltered(selector string) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingAll, nil, nil), compileMatcher(selector))
+}
+
+// SiblingsMatcher gets the siblings of each element in the Selection
+// filtered by a matcher. It returns a new Selection object containing the
+// matched elements.
+func (s *Selection) SiblingsMatcher(m Matcher) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingAll, nil, nil), m)
+}
+
+// Next gets the immediately following sibling of each element in the
+// Selection. It returns a new Selection object containing the matched elements.
+func (s *Selection) Next() *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingNext, nil, nil))
+}
+
+// NextFiltered gets the immediately following sibling of each element in the
+// Selection filtered by a selector. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) NextFiltered(selector string) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNext, nil, nil), compileMatcher(selector))
+}
+
+// NextMatcher gets the immediately following sibling of each element in the
+// Selection filtered by a matcher. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) NextMatcher(m Matcher) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNext, nil, nil), m)
+}
+
+// NextAll gets all the following siblings of each element in the
+// Selection. It returns a new Selection object containing the matched elements.
+func (s *Selection) NextAll() *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingNextAll, nil, nil))
+}
+
+// NextAllFiltered gets all the following siblings of each element in the
+// Selection filtered by a selector. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) NextAllFiltered(selector string) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextAll, nil, nil), compileMatcher(selector))
+}
+
+// NextAllMatcher gets all the following siblings of each element in the
+// Selection filtered by a matcher. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) NextAllMatcher(m Matcher) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextAll, nil, nil), m)
+}
+
+// Prev gets the immediately preceding sibling of each element in the
+// Selection. It returns a new Selection object containing the matched elements.
+func (s *Selection) Prev() *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingPrev, nil, nil))
+}
+
+// PrevFiltered gets the immediately preceding sibling of each element in the
+// Selection filtered by a selector. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) PrevFiltered(selector string) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrev, nil, nil), compileMatcher(selector))
+}
+
+// PrevMatcher gets the immediately preceding sibling of each element in the
+// Selection filtered by a matcher. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) PrevMatcher(m Matcher) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrev, nil, nil), m)
+}
+
+// PrevAll gets all the preceding siblings of each element in the
+// Selection. It returns a new Selection object containing the matched elements.
+func (s *Selection) PrevAll() *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevAll, nil, nil))
+}
+
+// PrevAllFiltered gets all the preceding siblings of each element in the
+// Selection filtered by a selector. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) PrevAllFiltered(selector string) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevAll, nil, nil), compileMatcher(selector))
+}
+
+// PrevAllMatcher gets all the preceding siblings of each element in the
+// Selection filtered by a matcher. It returns a new Selection object
+// containing the matched elements.
+func (s *Selection) PrevAllMatcher(m Matcher) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevAll, nil, nil), m)
+}
+
+// NextUntil gets all following siblings of each element up to but not
+// including the element matched by the selector. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) NextUntil(selector string) *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+ compileMatcher(selector), nil))
+}
+
+// NextUntilMatcher gets all following siblings of each element up to but not
+// including the element matched by the matcher. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) NextUntilMatcher(m Matcher) *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+ m, nil))
+}
+
+// NextUntilSelection gets all following siblings of each element up to but not
+// including the element matched by the Selection. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) NextUntilSelection(sel *Selection) *Selection {
+ if sel == nil {
+ return s.NextAll()
+ }
+ return s.NextUntilNodes(sel.Nodes...)
+}
+
+// NextUntilNodes gets all following siblings of each element up to but not
+// including the element matched by the nodes. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) NextUntilNodes(nodes ...*html.Node) *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+ nil, nodes))
+}
+
+// PrevUntil gets all preceding siblings of each element up to but not
+// including the element matched by the selector. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) PrevUntil(selector string) *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+ compileMatcher(selector), nil))
+}
+
+// PrevUntilMatcher gets all preceding siblings of each element up to but not
+// including the element matched by the matcher. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) PrevUntilMatcher(m Matcher) *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+ m, nil))
+}
+
+// PrevUntilSelection gets all preceding siblings of each element up to but not
+// including the element matched by the Selection. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) PrevUntilSelection(sel *Selection) *Selection {
+ if sel == nil {
+ return s.PrevAll()
+ }
+ return s.PrevUntilNodes(sel.Nodes...)
+}
+
+// PrevUntilNodes gets all preceding siblings of each element up to but not
+// including the element matched by the nodes. It returns a new Selection
+// object containing the matched elements.
+func (s *Selection) PrevUntilNodes(nodes ...*html.Node) *Selection {
+ return pushStack(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+ nil, nodes))
+}
+
+// NextFilteredUntil is like NextUntil, with the option to filter
+// the results based on a selector string.
+// It returns a new Selection object containing the matched elements.
+func (s *Selection) NextFilteredUntil(filterSelector, untilSelector string) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+ compileMatcher(untilSelector), nil), compileMatcher(filterSelector))
+}
+
+// NextFilteredUntilMatcher is like NextUntilMatcher, with the option to filter
+// the results based on a matcher.
+// It returns a new Selection object containing the matched elements.
+func (s *Selection) NextFilteredUntilMatcher(filter, until Matcher) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+ until, nil), filter)
+}
+
+// NextFilteredUntilSelection is like NextUntilSelection, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) NextFilteredUntilSelection(filterSelector string, sel *Selection) *Selection {
+ return s.NextMatcherUntilSelection(compileMatcher(filterSelector), sel)
+}
+
+// NextMatcherUntilSelection is like NextUntilSelection, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) NextMatcherUntilSelection(filter Matcher, sel *Selection) *Selection {
+ if sel == nil {
+ return s.NextMatcher(filter)
+ }
+ return s.NextMatcherUntilNodes(filter, sel.Nodes...)
+}
+
+// NextFilteredUntilNodes is like NextUntilNodes, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) NextFilteredUntilNodes(filterSelector string, nodes ...*html.Node) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+ nil, nodes), compileMatcher(filterSelector))
+}
+
+// NextMatcherUntilNodes is like NextUntilNodes, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) NextMatcherUntilNodes(filter Matcher, nodes ...*html.Node) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingNextUntil,
+ nil, nodes), filter)
+}
+
+// PrevFilteredUntil is like PrevUntil, with the option to filter
+// the results based on a selector string.
+// It returns a new Selection object containing the matched elements.
+func (s *Selection) PrevFilteredUntil(filterSelector, untilSelector string) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+ compileMatcher(untilSelector), nil), compileMatcher(filterSelector))
+}
+
+// PrevFilteredUntilMatcher is like PrevUntilMatcher, with the option to filter
+// the results based on a matcher.
+// It returns a new Selection object containing the matched elements.
+func (s *Selection) PrevFilteredUntilMatcher(filter, until Matcher) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+ until, nil), filter)
+}
+
+// PrevFilteredUntilSelection is like PrevUntilSelection, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) PrevFilteredUntilSelection(filterSelector string, sel *Selection) *Selection {
+ return s.PrevMatcherUntilSelection(compileMatcher(filterSelector), sel)
+}
+
+// PrevMatcherUntilSelection is like PrevUntilSelection, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) PrevMatcherUntilSelection(filter Matcher, sel *Selection) *Selection {
+ if sel == nil {
+ return s.PrevMatcher(filter)
+ }
+ return s.PrevMatcherUntilNodes(filter, sel.Nodes...)
+}
+
+// PrevFilteredUntilNodes is like PrevUntilNodes, with the
+// option to filter the results based on a selector string. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) PrevFilteredUntilNodes(filterSelector string, nodes ...*html.Node) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+ nil, nodes), compileMatcher(filterSelector))
+}
+
+// PrevMatcherUntilNodes is like PrevUntilNodes, with the
+// option to filter the results based on a matcher. It returns a new
+// Selection object containing the matched elements.
+func (s *Selection) PrevMatcherUntilNodes(filter Matcher, nodes ...*html.Node) *Selection {
+ return filterAndPush(s, getSiblingNodes(s.Nodes, siblingPrevUntil,
+ nil, nodes), filter)
+}
+
+// filterAndPush filters the nodes based on a matcher, and pushes the results
+// on the stack, with srcSel as the previous selection.
+func filterAndPush(srcSel *Selection, nodes []*html.Node, m Matcher) *Selection {
+ // Create a temporary Selection with the specified nodes to filter using winnow
+ sel := &Selection{nodes, srcSel.document, nil}
+ // Filter based on matcher and push on stack
+ return pushStack(srcSel, winnow(sel, m, true))
+}
+
+// Internal implementation of Find that returns raw nodes.
+func findWithMatcher(nodes []*html.Node, m Matcher) []*html.Node {
+ // Map nodes to find the matches within the children of each node
+ return mapNodes(nodes, func(i int, n *html.Node) (result []*html.Node) {
+ // Go down one level, because jQuery's Find selects only within descendants
+ for c := n.FirstChild; c != nil; c = c.NextSibling {
+ if c.Type == html.ElementNode {
+ result = append(result, m.MatchAll(c)...)
+ }
+ }
+ return
+ })
+}
+
+// Internal implementation to get all parent nodes, stopping at the specified
+// node (or nil if no stop).
+func getParentsNodes(nodes []*html.Node, stopm Matcher, stopNodes []*html.Node) []*html.Node {
+ return mapNodes(nodes, func(i int, n *html.Node) (result []*html.Node) {
+ for p := n.Parent; p != nil; p = p.Parent {
+ sel := newSingleSelection(p, nil)
+ if stopm != nil {
+ if sel.IsMatcher(stopm) {
+ break
+ }
+ } else if len(stopNodes) > 0 {
+ if sel.IsNodes(stopNodes...) {
+ break
+ }
+ }
+ if p.Type == html.ElementNode {
+ result = append(result, p)
+ }
+ }
+ return
+ })
+}
+
+// Internal implementation of sibling nodes that returns a raw slice of matches.
+func getSiblingNodes(nodes []*html.Node, st siblingType, untilm Matcher, untilNodes []*html.Node) []*html.Node {
+ var f func(*html.Node) bool
+
+ // If the requested siblings are ...Until, create the test function to
+ // determine if the until condition is reached (returns true if it is)
+ if st == siblingNextUntil || st == siblingPrevUntil {
+ f = func(n *html.Node) bool {
+ if untilm != nil {
+ // Matcher-based condition
+ sel := newSingleSelection(n, nil)
+ return sel.IsMatcher(untilm)
+ } else if len(untilNodes) > 0 {
+ // Nodes-based condition
+ sel := newSingleSelection(n, nil)
+ return sel.IsNodes(untilNodes...)
+ }
+ return false
+ }
+ }
+
+ return mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
+ return getChildrenWithSiblingType(n.Parent, st, n, f)
+ })
+}
+
+// Gets the children nodes of each node in the specified slice of nodes,
+// based on the sibling type request.
+func getChildrenNodes(nodes []*html.Node, st siblingType) []*html.Node {
+ return mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
+ return getChildrenWithSiblingType(n, st, nil, nil)
+ })
+}
+
+// Gets the children of the specified parent, based on the requested sibling
+// type, skipping a specified node if required.
+func getChildrenWithSiblingType(parent *html.Node, st siblingType, skipNode *html.Node,
+ untilFunc func(*html.Node) bool) (result []*html.Node) {
+
+ // Create the iterator function
+ var iter = func(cur *html.Node) (ret *html.Node) {
+ // Based on the sibling type requested, iterate the right way
+ for {
+ switch st {
+ case siblingAll, siblingAllIncludingNonElements:
+ if cur == nil {
+ // First iteration, start with first child of parent
+ // Skip node if required
+ if ret = parent.FirstChild; ret == skipNode && skipNode != nil {
+ ret = skipNode.NextSibling
+ }
+ } else {
+ // Skip node if required
+ if ret = cur.NextSibling; ret == skipNode && skipNode != nil {
+ ret = skipNode.NextSibling
+ }
+ }
+ case siblingPrev, siblingPrevAll, siblingPrevUntil:
+ if cur == nil {
+ // Start with previous sibling of the skip node
+ ret = skipNode.PrevSibling
+ } else {
+ ret = cur.PrevSibling
+ }
+ case siblingNext, siblingNextAll, siblingNextUntil:
+ if cur == nil {
+ // Start with next sibling of the skip node
+ ret = skipNode.NextSibling
+ } else {
+ ret = cur.NextSibling
+ }
+ default:
+ panic("Invalid sibling type.")
+ }
+ if ret == nil || ret.Type == html.ElementNode || st == siblingAllIncludingNonElements {
+ return
+ }
+ // Not a valid node, try again from this one
+ cur = ret
+ }
+ }
+
+ for c := iter(nil); c != nil; c = iter(c) {
+ // If this is an ...Until case, test before append (returns true
+ // if the until condition is reached)
+ if st == siblingNextUntil || st == siblingPrevUntil {
+ if untilFunc(c) {
+ return
+ }
+ }
+ result = append(result, c)
+ if st == siblingNext || st == siblingPrev {
+ // Only one node was requested (immediate next or previous), so exit
+ return
+ }
+ }
+ return
+}
+
+// Internal implementation of parent nodes that returns a raw slice of nodes.
+func getParentNodes(nodes []*html.Node) []*html.Node {
+ return mapNodes(nodes, func(i int, n *html.Node) []*html.Node {
+ if n.Parent != nil && n.Parent.Type == html.ElementNode {
+ return []*html.Node{n.Parent}
+ }
+ return nil
+ })
+}
+
+// Internal map function used by many traversing methods. Takes the source
+// nodes to iterate over and a mapping function that returns a slice of nodes.
+// Returns a slice of nodes mapped by calling the callback function once for
+// each node in the source nodes, with duplicates removed.
+func mapNodes(nodes []*html.Node, f func(int, *html.Node) []*html.Node) (result []*html.Node) {
+ set := make(map[*html.Node]bool)
+ for i, n := range nodes {
+ if vals := f(i, n); len(vals) > 0 {
+ result = appendWithoutDuplicates(result, vals, set)
+ }
+ }
+ return result
+}
diff --git a/vendor/github.com/PuerkitoBio/goquery/type.go b/vendor/github.com/PuerkitoBio/goquery/type.go
new file mode 100644
index 0000000..6ad51db
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/type.go
@@ -0,0 +1,141 @@
+package goquery
+
+import (
+ "errors"
+ "io"
+ "net/http"
+ "net/url"
+
+ "github.com/andybalholm/cascadia"
+
+ "golang.org/x/net/html"
+)
+
+// Document represents an HTML document to be manipulated. Unlike jQuery, which
+// is loaded as part of a DOM document, and thus acts upon its containing
+// document, GoQuery doesn't know which HTML document to act upon. So it needs
+// to be told, and that's what the Document class is for. It holds the root
+// document node to manipulate, and can make selections on this document.
+type Document struct {
+ *Selection
+ Url *url.URL
+ rootNode *html.Node
+}
+
+// NewDocumentFromNode is a Document constructor that takes a root html Node
+// as argument.
+func NewDocumentFromNode(root *html.Node) *Document {
+ return newDocument(root, nil)
+}
+
+// NewDocument is a Document constructor that takes a string URL as argument.
+// It loads the specified document, parses it, and stores the root Document
+// node, ready to be manipulated.
+//
+// Deprecated: Use the net/http standard library package to make the request
+// and validate the response before calling goquery.NewDocumentFromReader
+// with the response's body.
+func NewDocument(url string) (*Document, error) {
+ // Load the URL
+ res, e := http.Get(url)
+ if e != nil {
+ return nil, e
+ }
+ return NewDocumentFromResponse(res)
+}
+
+// NewDocumentFromReader returns a Document from an io.Reader.
+// It returns an error as second value if the reader's data cannot be parsed
+// as html. It does not check if the reader is also an io.Closer, the
+// provided reader is never closed by this call. It is the responsibility
+// of the caller to close it if required.
+func NewDocumentFromReader(r io.Reader) (*Document, error) {
+ root, e := html.Parse(r)
+ if e != nil {
+ return nil, e
+ }
+ return newDocument(root, nil), nil
+}
+
+// NewDocumentFromResponse is another Document constructor that takes an http response as argument.
+// It loads the specified response's document, parses it, and stores the root Document
+// node, ready to be manipulated. The response's body is closed on return.
+//
+// Deprecated: Use goquery.NewDocumentFromReader with the response's body.
+func NewDocumentFromResponse(res *http.Response) (*Document, error) {
+ if res == nil {
+ return nil, errors.New("Response is nil")
+ }
+ defer res.Body.Close()
+ if res.Request == nil {
+ return nil, errors.New("Response.Request is nil")
+ }
+
+ // Parse the HTML into nodes
+ root, e := html.Parse(res.Body)
+ if e != nil {
+ return nil, e
+ }
+
+ // Create and fill the document
+ return newDocument(root, res.Request.URL), nil
+}
+
+// CloneDocument creates a deep-clone of a document.
+func CloneDocument(doc *Document) *Document {
+ return newDocument(cloneNode(doc.rootNode), doc.Url)
+}
+
+// Private constructor, make sure all fields are correctly filled.
+func newDocument(root *html.Node, url *url.URL) *Document {
+ // Create and fill the document
+ d := &Document{nil, url, root}
+ d.Selection = newSingleSelection(root, d)
+ return d
+}
+
+// Selection represents a collection of nodes matching some criteria. The
+// initial Selection can be created by using Document.Find, and then
+// manipulated using the jQuery-like chainable syntax and methods.
+type Selection struct {
+ Nodes []*html.Node
+ document *Document
+ prevSel *Selection
+}
+
+// Helper constructor to create an empty selection
+func newEmptySelection(doc *Document) *Selection {
+ return &Selection{nil, doc, nil}
+}
+
+// Helper constructor to create a selection of only one node
+func newSingleSelection(node *html.Node, doc *Document) *Selection {
+ return &Selection{[]*html.Node{node}, doc, nil}
+}
+
+// Matcher is an interface that defines the methods to match
+// HTML nodes against a compiled selector string. Cascadia's
+// Selector implements this interface.
+type Matcher interface {
+ Match(*html.Node) bool
+ MatchAll(*html.Node) []*html.Node
+ Filter([]*html.Node) []*html.Node
+}
+
+// compileMatcher compiles the selector string s and returns
+// the corresponding Matcher. If s is an invalid selector string,
+// it returns a Matcher that fails all matches.
+func compileMatcher(s string) Matcher {
+ cs, err := cascadia.Compile(s)
+ if err != nil {
+ return invalidMatcher{}
+ }
+ return cs
+}
+
+// invalidMatcher is a Matcher that always fails to match.
+type invalidMatcher struct{}
+
+func (invalidMatcher) Match(n *html.Node) bool { return false }
+func (invalidMatcher) MatchAll(n *html.Node) []*html.Node { return nil }
+func (invalidMatcher) Filter(ns []*html.Node) []*html.Node { return nil }
diff --git a/vendor/github.com/PuerkitoBio/goquery/utilities.go b/vendor/github.com/PuerkitoBio/goquery/utilities.go
new file mode 100644
index 0000000..b4c061a
--- /dev/null
+++ b/vendor/github.com/PuerkitoBio/goquery/utilities.go
@@ -0,0 +1,161 @@
+package goquery
+
+import (
+ "bytes"
+
+ "golang.org/x/net/html"
+)
+
+// used to determine if a set (map[*html.Node]bool) should be used
+// instead of iterating over a slice. The set uses more memory and
+// is slower than slice iteration for small N.
+const minNodesForSet = 1000
+
+var nodeNames = []string{
+ html.ErrorNode: "#error",
+ html.TextNode: "#text",
+ html.DocumentNode: "#document",
+ html.CommentNode: "#comment",
+}
+
+// NodeName returns the node name of the first element in the selection.
+// It tries to behave in a similar way to the DOM's nodeName property
+// (https://developer.mozilla.org/en-US/docs/Web/API/Node/nodeName).
+//
+// Go's net/html package defines the following node types, listed with
+// the corresponding returned value from this function:
+//
+// ErrorNode : #error
+// TextNode : #text
+// DocumentNode : #document
+// ElementNode : the element's tag name
+// CommentNode : #comment
+// DoctypeNode : the name of the document type
+//
+func NodeName(s *Selection) string {
+ if s.Length() == 0 {
+ return ""
+ }
+ switch n := s.Get(0); n.Type {
+ case html.ElementNode, html.DoctypeNode:
+ return n.Data
+ default:
+ if n.Type >= 0 && int(n.Type) < len(nodeNames) {
+ return nodeNames[n.Type]
+ }
+ return ""
+ }
+}
+
+// OuterHtml returns the outer HTML rendering of the first item in
+// the selection - that is, the HTML including the first element's
+// tag and attributes.
+//
+// Unlike InnerHtml, this is a function and not a method on the Selection,
+// because this is not a jQuery method (in javascript-land, this is
+// a property provided by the DOM).
+func OuterHtml(s *Selection) (string, error) {
+ var buf bytes.Buffer
+
+ if s.Length() == 0 {
+ return "", nil
+ }
+ n := s.Get(0)
+ if err := html.Render(&buf, n); err != nil {
+ return "", err
+ }
+ return buf.String(), nil
+}
+
+// Loop through all container nodes to search for the target node.
+func sliceContains(container []*html.Node, contained *html.Node) bool {
+ for _, n := range container {
+ if nodeContains(n, contained) {
+ return true
+ }
+ }
+
+ return false
+}
+
+// Checks if the contained node is within the container node.
+func nodeContains(container *html.Node, contained *html.Node) bool {
+ // Check if the parent of the contained node is the container node, traversing
+ // upward until the top is reached, or the container is found.
+ for contained = contained.Parent; contained != nil; contained = contained.Parent {
+ if container == contained {
+ return true
+ }
+ }
+ return false
+}
+
+// Checks if the target node is in the slice of nodes.
+func isInSlice(slice []*html.Node, node *html.Node) bool {
+ return indexInSlice(slice, node) > -1
+}
+
+// Returns the index of the target node in the slice, or -1.
+func indexInSlice(slice []*html.Node, node *html.Node) int {
+ if node != nil {
+ for i, n := range slice {
+ if n == node {
+ return i
+ }
+ }
+ }
+ return -1
+}
+
+// Appends the new nodes to the target slice, making sure no duplicate is added.
+// There is no check on the original state of the target slice, so it may still
+// contain duplicates. The target slice is returned because append() may create
+// a new underlying array. If targetSet is nil, a local set is created with the
+// target if len(target) + len(nodes) is greater than minNodesForSet.
+func appendWithoutDuplicates(target []*html.Node, nodes []*html.Node, targetSet map[*html.Node]bool) []*html.Node {
+ // if there are not that many nodes, don't use the map, faster to just use nested loops
+ // (unless a non-nil targetSet is passed, in which case the caller knows better).
+ if targetSet == nil && len(target)+len(nodes) < minNodesForSet {
+ for _, n := range nodes {
+ if !isInSlice(target, n) {
+ target = append(target, n)
+ }
+ }
+ return target
+ }
+
+ // if a targetSet is passed, then assume it is reliable, otherwise create one
+ // and initialize it with the current target contents.
+ if targetSet == nil {
+ targetSet = make(map[*html.Node]bool, len(target))
+ for _, n := range target {
+ targetSet[n] = true
+ }
+ }
+ for _, n := range nodes {
+ if !targetSet[n] {
+ target = append(target, n)
+ targetSet[n] = true
+ }
+ }
+
+ return target
+}
+
+// Loop through a selection, returning only those nodes that pass the predicate
+// function.
+func grep(sel *Selection, predicate func(i int, s *Selection) bool) (result []*html.Node) {
+ for i, n := range sel.Nodes {
+ if predicate(i, newSingleSelection(n, sel.document)) {
+ result = append(result, n)
+ }
+ }
+ return result
+}
+
+// Creates a new Selection object based on the specified nodes, and keeps the
+// source Selection object on the stack (linked list).
+func pushStack(fromSel *Selection, nodes []*html.Node) *Selection {
+ result := &Selection{nodes, fromSel.document, fromSel}
+ return result
+}
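The appendWithoutDuplicates helper above switches between two strategies around the minNodesForSet threshold: nested slice scans for small inputs, and a one-time map build for large ones. The behavior can be sketched in isolation with strings standing in for *html.Node (a simplification for illustration, since pointer identity and string equality play the same role here):

```go
package main

import "fmt"

const minNodesForSet = 1000 // same threshold as utilities.go

// appendWithoutDuplicates mirrors the utilities.go helper: below the
// threshold it scans the target slice for each candidate; above it,
// it builds a set once and does O(1) membership checks thereafter.
func appendWithoutDuplicates(target, nodes []string) []string {
	if len(target)+len(nodes) < minNodesForSet {
		for _, n := range nodes {
			found := false
			for _, t := range target {
				if t == n {
					found = true
					break
				}
			}
			if !found {
				target = append(target, n)
			}
		}
		return target
	}
	set := make(map[string]bool, len(target))
	for _, t := range target {
		set[t] = true
	}
	for _, n := range nodes {
		if !set[n] {
			target = append(target, n)
			set[n] = true
		}
	}
	return target
}

func main() {
	fmt.Println(appendWithoutDuplicates([]string{"a", "b"}, []string{"b", "c"})) // [a b c]
}
```

As in the original, the slice is returned because append may reallocate the underlying array, and duplicates already present in the target are deliberately left alone.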
diff --git a/vendor/github.com/andybalholm/cascadia/LICENSE b/vendor/github.com/andybalholm/cascadia/LICENSE
new file mode 100755
index 0000000..ee5ad35
--- /dev/null
+++ b/vendor/github.com/andybalholm/cascadia/LICENSE
@@ -0,0 +1,24 @@
+Copyright (c) 2011 Andy Balholm. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/andybalholm/cascadia/README.md b/vendor/github.com/andybalholm/cascadia/README.md
new file mode 100644
index 0000000..9021cb9
--- /dev/null
+++ b/vendor/github.com/andybalholm/cascadia/README.md
@@ -0,0 +1,7 @@
+# cascadia
+
+[Build status](https://travis-ci.org/andybalholm/cascadia)
+
+The Cascadia package implements CSS selectors for use with the parse trees produced by the html package.
+
+To test CSS selectors without writing Go code, check out the [cascadia](https://github.com/suntong/cascadia) command-line tool, a thin wrapper around this package.
diff --git a/vendor/github.com/andybalholm/cascadia/go.mod b/vendor/github.com/andybalholm/cascadia/go.mod
new file mode 100644
index 0000000..e6febbb
--- /dev/null
+++ b/vendor/github.com/andybalholm/cascadia/go.mod
@@ -0,0 +1,3 @@
+module "github.com/andybalholm/cascadia"
+
+require "golang.org/x/net" v0.0.0-20180218175443-cbe0f9307d01
diff --git a/vendor/github.com/andybalholm/cascadia/parser.go b/vendor/github.com/andybalholm/cascadia/parser.go
new file mode 100644
index 0000000..495db9c
--- /dev/null
+++ b/vendor/github.com/andybalholm/cascadia/parser.go
@@ -0,0 +1,835 @@
+// Package cascadia is an implementation of CSS selectors.
+package cascadia
+
+import (
+ "errors"
+ "fmt"
+ "regexp"
+ "strconv"
+ "strings"
+
+ "golang.org/x/net/html"
+)
+
+// a parser for CSS selectors
+type parser struct {
+ s string // the source text
+ i int // the current position
+}
+
+// parseEscape parses a backslash escape.
+func (p *parser) parseEscape() (result string, err error) {
+ if len(p.s) < p.i+2 || p.s[p.i] != '\\' {
+ return "", errors.New("invalid escape sequence")
+ }
+
+ start := p.i + 1
+ c := p.s[start]
+ switch {
+ case c == '\r' || c == '\n' || c == '\f':
+ return "", errors.New("escaped line ending outside string")
+ case hexDigit(c):
+ // unicode escape (hex)
+ var i int
+ for i = start; i < p.i+6 && i < len(p.s) && hexDigit(p.s[i]); i++ {
+ // empty
+ }
+ v, _ := strconv.ParseUint(p.s[start:i], 16, 21)
+ if len(p.s) > i {
+ switch p.s[i] {
+ case '\r':
+ i++
+ if len(p.s) > i && p.s[i] == '\n' {
+ i++
+ }
+ case ' ', '\t', '\n', '\f':
+ i++
+ }
+ }
+ p.i = i
+ return string(rune(v)), nil
+ }
+
+ // Return the literal character after the backslash.
+ result = p.s[start : start+1]
+ p.i += 2
+ return result, nil
+}
+
+func hexDigit(c byte) bool {
+ return '0' <= c && c <= '9' || 'a' <= c && c <= 'f' || 'A' <= c && c <= 'F'
+}
+
+// nameStart returns whether c can be the first character of an identifier
+// (not counting an initial hyphen, or an escape sequence).
+func nameStart(c byte) bool {
+ return 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || c == '_' || c > 127
+}
+
+// nameChar returns whether c can be a character within an identifier
+// (not counting an escape sequence).
+func nameChar(c byte) bool {
+ return 'a' <= c && c <= 'z' || 'A' <= c && c <= 'Z' || c == '_' || c > 127 ||
+ c == '-' || '0' <= c && c <= '9'
+}
+
+// parseIdentifier parses an identifier.
+func (p *parser) parseIdentifier() (result string, err error) {
+ startingDash := false
+ if len(p.s) > p.i && p.s[p.i] == '-' {
+ startingDash = true
+ p.i++
+ }
+
+ if len(p.s) <= p.i {
+ return "", errors.New("expected identifier, found EOF instead")
+ }
+
+ if c := p.s[p.i]; !(nameStart(c) || c == '\\') {
+ return "", fmt.Errorf("expected identifier, found %c instead", c)
+ }
+
+ result, err = p.parseName()
+ if startingDash && err == nil {
+ result = "-" + result
+ }
+ return
+}
+
+// parseName parses a name (which is like an identifier, but doesn't have
+// extra restrictions on the first character).
+func (p *parser) parseName() (result string, err error) {
+ i := p.i
+loop:
+ for i < len(p.s) {
+ c := p.s[i]
+ switch {
+ case nameChar(c):
+ start := i
+ for i < len(p.s) && nameChar(p.s[i]) {
+ i++
+ }
+ result += p.s[start:i]
+ case c == '\\':
+ p.i = i
+ val, err := p.parseEscape()
+ if err != nil {
+ return "", err
+ }
+ i = p.i
+ result += val
+ default:
+ break loop
+ }
+ }
+
+ if result == "" {
+ return "", errors.New("expected name, found EOF instead")
+ }
+
+ p.i = i
+ return result, nil
+}
+
+// parseString parses a single- or double-quoted string.
+func (p *parser) parseString() (result string, err error) {
+ i := p.i
+ if len(p.s) < i+2 {
+ return "", errors.New("expected string, found EOF instead")
+ }
+
+ quote := p.s[i]
+ i++
+
+loop:
+ for i < len(p.s) {
+ switch p.s[i] {
+ case '\\':
+ if len(p.s) > i+1 {
+ switch c := p.s[i+1]; c {
+ case '\r':
+ if len(p.s) > i+2 && p.s[i+2] == '\n' {
+ i += 3
+ continue loop
+ }
+ fallthrough
+ case '\n', '\f':
+ i += 2
+ continue loop
+ }
+ }
+ p.i = i
+ val, err := p.parseEscape()
+ if err != nil {
+ return "", err
+ }
+ i = p.i
+ result += val
+ case quote:
+ break loop
+ case '\r', '\n', '\f':
+ return "", errors.New("unexpected end of line in string")
+ default:
+ start := i
+ for i < len(p.s) {
+ if c := p.s[i]; c == quote || c == '\\' || c == '\r' || c == '\n' || c == '\f' {
+ break
+ }
+ i++
+ }
+ result += p.s[start:i]
+ }
+ }
+
+ if i >= len(p.s) {
+ return "", errors.New("EOF in string")
+ }
+
+ // Consume the final quote.
+ i++
+
+ p.i = i
+ return result, nil
+}
+
+// parseRegex parses a regular expression; the end is defined by encountering an
+// unmatched closing ')' or ']', which is not consumed.
+func (p *parser) parseRegex() (rx *regexp.Regexp, err error) {
+ i := p.i
+ if len(p.s) < i+2 {
+ return nil, errors.New("expected regular expression, found EOF instead")
+ }
+
+ // number of open parens or brackets;
+ // when it becomes negative, finished parsing regex
+ open := 0
+
+loop:
+ for i < len(p.s) {
+ switch p.s[i] {
+ case '(', '[':
+ open++
+ case ')', ']':
+ open--
+ if open < 0 {
+ break loop
+ }
+ }
+ i++
+ }
+
+ if i >= len(p.s) {
+ return nil, errors.New("EOF in regular expression")
+ }
+ rx, err = regexp.Compile(p.s[p.i:i])
+ p.i = i
+ return rx, err
+}
+
+// skipWhitespace consumes whitespace characters and comments.
+// It returns true if there was actually anything to skip.
+func (p *parser) skipWhitespace() bool {
+ i := p.i
+ for i < len(p.s) {
+ switch p.s[i] {
+ case ' ', '\t', '\r', '\n', '\f':
+ i++
+ continue
+ case '/':
+ if strings.HasPrefix(p.s[i:], "/*") {
+ end := strings.Index(p.s[i+len("/*"):], "*/")
+ if end != -1 {
+ i += end + len("/**/")
+ continue
+ }
+ }
+ }
+ break
+ }
+
+ if i > p.i {
+ p.i = i
+ return true
+ }
+
+ return false
+}
+
+// consumeParenthesis consumes an opening parenthesis and any following
+// whitespace. It returns true if there was actually a parenthesis to skip.
+func (p *parser) consumeParenthesis() bool {
+ if p.i < len(p.s) && p.s[p.i] == '(' {
+ p.i++
+ p.skipWhitespace()
+ return true
+ }
+ return false
+}
+
+// consumeClosingParenthesis consumes a closing parenthesis and any preceding
+// whitespace. It returns true if there was actually a parenthesis to skip.
+func (p *parser) consumeClosingParenthesis() bool {
+ i := p.i
+ p.skipWhitespace()
+ if p.i < len(p.s) && p.s[p.i] == ')' {
+ p.i++
+ return true
+ }
+ p.i = i
+ return false
+}
+
+// parseTypeSelector parses a type selector (one that matches by tag name).
+func (p *parser) parseTypeSelector() (result Selector, err error) {
+ tag, err := p.parseIdentifier()
+ if err != nil {
+ return nil, err
+ }
+
+ return typeSelector(tag), nil
+}
+
+// parseIDSelector parses a selector that matches by id attribute.
+func (p *parser) parseIDSelector() (Selector, error) {
+ if p.i >= len(p.s) {
+ return nil, fmt.Errorf("expected id selector (#id), found EOF instead")
+ }
+ if p.s[p.i] != '#' {
+ return nil, fmt.Errorf("expected id selector (#id), found '%c' instead", p.s[p.i])
+ }
+
+ p.i++
+ id, err := p.parseName()
+ if err != nil {
+ return nil, err
+ }
+
+ return attributeEqualsSelector("id", id), nil
+}
+
+// parseClassSelector parses a selector that matches by class attribute.
+func (p *parser) parseClassSelector() (Selector, error) {
+ if p.i >= len(p.s) {
+ return nil, fmt.Errorf("expected class selector (.class), found EOF instead")
+ }
+ if p.s[p.i] != '.' {
+ return nil, fmt.Errorf("expected class selector (.class), found '%c' instead", p.s[p.i])
+ }
+
+ p.i++
+ class, err := p.parseIdentifier()
+ if err != nil {
+ return nil, err
+ }
+
+ return attributeIncludesSelector("class", class), nil
+}
+
+// parseAttributeSelector parses a selector that matches by attribute value.
+func (p *parser) parseAttributeSelector() (Selector, error) {
+ if p.i >= len(p.s) {
+ return nil, fmt.Errorf("expected attribute selector ([attribute]), found EOF instead")
+ }
+ if p.s[p.i] != '[' {
+ return nil, fmt.Errorf("expected attribute selector ([attribute]), found '%c' instead", p.s[p.i])
+ }
+
+ p.i++
+ p.skipWhitespace()
+ key, err := p.parseIdentifier()
+ if err != nil {
+ return nil, err
+ }
+
+ p.skipWhitespace()
+ if p.i >= len(p.s) {
+ return nil, errors.New("unexpected EOF in attribute selector")
+ }
+
+ if p.s[p.i] == ']' {
+ p.i++
+ return attributeExistsSelector(key), nil
+ }
+
+ if p.i+2 >= len(p.s) {
+ return nil, errors.New("unexpected EOF in attribute selector")
+ }
+
+ op := p.s[p.i : p.i+2]
+ if op[0] == '=' {
+ op = "="
+ } else if op[1] != '=' {
+ return nil, fmt.Errorf(`expected equality operator, found "%s" instead`, op)
+ }
+ p.i += len(op)
+
+ p.skipWhitespace()
+ if p.i >= len(p.s) {
+ return nil, errors.New("unexpected EOF in attribute selector")
+ }
+ var val string
+ var rx *regexp.Regexp
+ if op == "#=" {
+ rx, err = p.parseRegex()
+ } else {
+ switch p.s[p.i] {
+ case '\'', '"':
+ val, err = p.parseString()
+ default:
+ val, err = p.parseIdentifier()
+ }
+ }
+ if err != nil {
+ return nil, err
+ }
+
+ p.skipWhitespace()
+ if p.i >= len(p.s) {
+ return nil, errors.New("unexpected EOF in attribute selector")
+ }
+ if p.s[p.i] != ']' {
+ return nil, fmt.Errorf("expected ']', found '%c' instead", p.s[p.i])
+ }
+ p.i++
+
+ switch op {
+ case "=":
+ return attributeEqualsSelector(key, val), nil
+ case "!=":
+ return attributeNotEqualSelector(key, val), nil
+ case "~=":
+ return attributeIncludesSelector(key, val), nil
+ case "|=":
+ return attributeDashmatchSelector(key, val), nil
+ case "^=":
+ return attributePrefixSelector(key, val), nil
+ case "$=":
+ return attributeSuffixSelector(key, val), nil
+ case "*=":
+ return attributeSubstringSelector(key, val), nil
+ case "#=":
+ return attributeRegexSelector(key, rx), nil
+ }
+
+ return nil, fmt.Errorf("attribute operator %q is not supported", op)
+}
+
+var errExpectedParenthesis = errors.New("expected '(' but didn't find it")
+var errExpectedClosingParenthesis = errors.New("expected ')' but didn't find it")
+var errUnmatchedParenthesis = errors.New("unmatched '('")
+
+// parsePseudoclassSelector parses a pseudoclass selector like :not(p).
+func (p *parser) parsePseudoclassSelector() (Selector, error) {
+ if p.i >= len(p.s) {
+ return nil, fmt.Errorf("expected pseudoclass selector (:pseudoclass), found EOF instead")
+ }
+ if p.s[p.i] != ':' {
+ return nil, fmt.Errorf("expected attribute selector (:pseudoclass), found '%c' instead", p.s[p.i])
+ }
+
+ p.i++
+ name, err := p.parseIdentifier()
+ if err != nil {
+ return nil, err
+ }
+ name = toLowerASCII(name)
+
+ switch name {
+ case "not", "has", "haschild":
+ if !p.consumeParenthesis() {
+ return nil, errExpectedParenthesis
+ }
+ sel, parseErr := p.parseSelectorGroup()
+ if parseErr != nil {
+ return nil, parseErr
+ }
+ if !p.consumeClosingParenthesis() {
+ return nil, errExpectedClosingParenthesis
+ }
+
+ switch name {
+ case "not":
+ return negatedSelector(sel), nil
+ case "has":
+ return hasDescendantSelector(sel), nil
+ case "haschild":
+ return hasChildSelector(sel), nil
+ }
+
+ case "contains", "containsown":
+ if !p.consumeParenthesis() {
+ return nil, errExpectedParenthesis
+ }
+ if p.i == len(p.s) {
+ return nil, errUnmatchedParenthesis
+ }
+ var val string
+ switch p.s[p.i] {
+ case '\'', '"':
+ val, err = p.parseString()
+ default:
+ val, err = p.parseIdentifier()
+ }
+ if err != nil {
+ return nil, err
+ }
+ val = strings.ToLower(val)
+ p.skipWhitespace()
+ if p.i >= len(p.s) {
+ return nil, errors.New("unexpected EOF in pseudo selector")
+ }
+ if !p.consumeClosingParenthesis() {
+ return nil, errExpectedClosingParenthesis
+ }
+
+ switch name {
+ case "contains":
+ return textSubstrSelector(val), nil
+ case "containsown":
+ return ownTextSubstrSelector(val), nil
+ }
+
+ case "matches", "matchesown":
+ if !p.consumeParenthesis() {
+ return nil, errExpectedParenthesis
+ }
+ rx, err := p.parseRegex()
+ if err != nil {
+ return nil, err
+ }
+ if p.i >= len(p.s) {
+ return nil, errors.New("unexpected EOF in pseudo selector")
+ }
+ if !p.consumeClosingParenthesis() {
+ return nil, errExpectedClosingParenthesis
+ }
+
+ switch name {
+ case "matches":
+ return textRegexSelector(rx), nil
+ case "matchesown":
+ return ownTextRegexSelector(rx), nil
+ }
+
+ case "nth-child", "nth-last-child", "nth-of-type", "nth-last-of-type":
+ if !p.consumeParenthesis() {
+ return nil, errExpectedParenthesis
+ }
+ a, b, err := p.parseNth()
+ if err != nil {
+ return nil, err
+ }
+ if !p.consumeClosingParenthesis() {
+ return nil, errExpectedClosingParenthesis
+ }
+ if a == 0 {
+ switch name {
+ case "nth-child":
+ return simpleNthChildSelector(b, false), nil
+ case "nth-of-type":
+ return simpleNthChildSelector(b, true), nil
+ case "nth-last-child":
+ return simpleNthLastChildSelector(b, false), nil
+ case "nth-last-of-type":
+ return simpleNthLastChildSelector(b, true), nil
+ }
+ }
+ return nthChildSelector(a, b,
+ name == "nth-last-child" || name == "nth-last-of-type",
+ name == "nth-of-type" || name == "nth-last-of-type"),
+ nil
+
+ case "first-child":
+ return simpleNthChildSelector(1, false), nil
+ case "last-child":
+ return simpleNthLastChildSelector(1, false), nil
+ case "first-of-type":
+ return simpleNthChildSelector(1, true), nil
+ case "last-of-type":
+ return simpleNthLastChildSelector(1, true), nil
+ case "only-child":
+ return onlyChildSelector(false), nil
+ case "only-of-type":
+ return onlyChildSelector(true), nil
+ case "input":
+ return inputSelector, nil
+ case "empty":
+ return emptyElementSelector, nil
+ case "root":
+ return rootSelector, nil
+ }
+
+ return nil, fmt.Errorf("unknown pseudoclass :%s", name)
+}
+
+// parseInteger parses a decimal integer.
+func (p *parser) parseInteger() (int, error) {
+ i := p.i
+ start := i
+ for i < len(p.s) && '0' <= p.s[i] && p.s[i] <= '9' {
+ i++
+ }
+ if i == start {
+ return 0, errors.New("expected integer, but didn't find it")
+ }
+ p.i = i
+
+ val, err := strconv.Atoi(p.s[start:i])
+ if err != nil {
+ return 0, err
+ }
+
+ return val, nil
+}
+
+// parseNth parses the argument for :nth-child (normally of the form an+b).
+func (p *parser) parseNth() (a, b int, err error) {
+ // initial state
+ if p.i >= len(p.s) {
+ goto eof
+ }
+ switch p.s[p.i] {
+ case '-':
+ p.i++
+ goto negativeA
+ case '+':
+ p.i++
+ goto positiveA
+ case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
+ goto positiveA
+ case 'n', 'N':
+ a = 1
+ p.i++
+ goto readN
+ case 'o', 'O', 'e', 'E':
+ id, nameErr := p.parseName()
+ if nameErr != nil {
+ return 0, 0, nameErr
+ }
+ id = toLowerASCII(id)
+ if id == "odd" {
+ return 2, 1, nil
+ }
+ if id == "even" {
+ return 2, 0, nil
+ }
+ return 0, 0, fmt.Errorf("expected 'odd' or 'even', but found '%s' instead", id)
+ default:
+ goto invalid
+ }
+
+positiveA:
+ if p.i >= len(p.s) {
+ goto eof
+ }
+ switch p.s[p.i] {
+ case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
+ a, err = p.parseInteger()
+ if err != nil {
+ return 0, 0, err
+ }
+ goto readA
+ case 'n', 'N':
+ a = 1
+ p.i++
+ goto readN
+ default:
+ goto invalid
+ }
+
+negativeA:
+ if p.i >= len(p.s) {
+ goto eof
+ }
+ switch p.s[p.i] {
+ case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
+ a, err = p.parseInteger()
+ if err != nil {
+ return 0, 0, err
+ }
+ a = -a
+ goto readA
+ case 'n', 'N':
+ a = -1
+ p.i++
+ goto readN
+ default:
+ goto invalid
+ }
+
+readA:
+ if p.i >= len(p.s) {
+ goto eof
+ }
+ switch p.s[p.i] {
+ case 'n', 'N':
+ p.i++
+ goto readN
+ default:
+ // The number we read as a is actually b.
+ return 0, a, nil
+ }
+
+readN:
+ p.skipWhitespace()
+ if p.i >= len(p.s) {
+ goto eof
+ }
+ switch p.s[p.i] {
+ case '+':
+ p.i++
+ p.skipWhitespace()
+ b, err = p.parseInteger()
+ if err != nil {
+ return 0, 0, err
+ }
+ return a, b, nil
+ case '-':
+ p.i++
+ p.skipWhitespace()
+ b, err = p.parseInteger()
+ if err != nil {
+ return 0, 0, err
+ }
+ return a, -b, nil
+ default:
+ return a, 0, nil
+ }
+
+eof:
+ return 0, 0, errors.New("unexpected EOF while attempting to parse expression of form an+b")
+
+invalid:
+ return 0, 0, errors.New("unexpected character while attempting to parse expression of form an+b")
+}
+
+// parseSimpleSelectorSequence parses a selector sequence that applies to
+// a single element.
+func (p *parser) parseSimpleSelectorSequence() (Selector, error) {
+ var result Selector
+
+ if p.i >= len(p.s) {
+ return nil, errors.New("expected selector, found EOF instead")
+ }
+
+ switch p.s[p.i] {
+ case '*':
+ // It's the universal selector. Just skip over it, since it doesn't affect the meaning.
+ p.i++
+ case '#', '.', '[', ':':
+ // There's no type selector; process the other components in the main loop below.
+ default:
+ r, err := p.parseTypeSelector()
+ if err != nil {
+ return nil, err
+ }
+ result = r
+ }
+
+loop:
+ for p.i < len(p.s) {
+ var ns Selector
+ var err error
+ switch p.s[p.i] {
+ case '#':
+ ns, err = p.parseIDSelector()
+ case '.':
+ ns, err = p.parseClassSelector()
+ case '[':
+ ns, err = p.parseAttributeSelector()
+ case ':':
+ ns, err = p.parsePseudoclassSelector()
+ default:
+ break loop
+ }
+ if err != nil {
+ return nil, err
+ }
+ if result == nil {
+ result = ns
+ } else {
+ result = intersectionSelector(result, ns)
+ }
+ }
+
+ if result == nil {
+ result = func(n *html.Node) bool {
+ return n.Type == html.ElementNode
+ }
+ }
+
+ return result, nil
+}
+
+// parseSelector parses a selector that may include combinators.
+func (p *parser) parseSelector() (result Selector, err error) {
+ p.skipWhitespace()
+ result, err = p.parseSimpleSelectorSequence()
+ if err != nil {
+ return
+ }
+
+ for {
+ var combinator byte
+ if p.skipWhitespace() {
+ combinator = ' '
+ }
+ if p.i >= len(p.s) {
+ return
+ }
+
+ switch p.s[p.i] {
+ case '+', '>', '~':
+ combinator = p.s[p.i]
+ p.i++
+ p.skipWhitespace()
+ case ',', ')':
+ // These characters can't begin a selector, but they can legally occur after one.
+ return
+ }
+
+ if combinator == 0 {
+ return
+ }
+
+ c, err := p.parseSimpleSelectorSequence()
+ if err != nil {
+ return nil, err
+ }
+
+ switch combinator {
+ case ' ':
+ result = descendantSelector(result, c)
+ case '>':
+ result = childSelector(result, c)
+ case '+':
+ result = siblingSelector(result, c, true)
+ case '~':
+ result = siblingSelector(result, c, false)
+ }
+ }
+
+ panic("unreachable")
+}
+
+// parseSelectorGroup parses a group of selectors, separated by commas.
+func (p *parser) parseSelectorGroup() (result Selector, err error) {
+ result, err = p.parseSelector()
+ if err != nil {
+ return
+ }
+
+ for p.i < len(p.s) {
+ if p.s[p.i] != ',' {
+ return result, nil
+ }
+ p.i++
+ c, err := p.parseSelector()
+ if err != nil {
+ return nil, err
+ }
+ result = unionSelector(result, c)
+ }
+
+ return
+}
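The (a, b) pair that parseNth produces is later used to decide whether a 1-based sibling index is selected, i.e. whether the index equals a*n + b for some n >= 0. That evaluation can be sketched independently of the parser; the function name nthMatches is illustrative, not part of cascadia's API:

```go
package main

import "fmt"

// nthMatches reports whether a 1-based index i is selected by an+b,
// i.e. whether i = a*n + b for some integer n >= 0 — the semantics
// the parsed (a, b) pair from parseNth encodes.
func nthMatches(a, b, i int) bool {
	if a == 0 {
		return i == b // plain :nth-child(b)
	}
	// i = a*n + b  =>  n = (i - b) / a must be a non-negative integer.
	if (i-b)%a != 0 {
		return false
	}
	return (i-b)/a >= 0
}

func main() {
	// "odd" parses to (2, 1): indices 1, 3, 5, ...
	fmt.Println(nthMatches(2, 1, 3), nthMatches(2, 1, 4)) // true false
	// "even" parses to (2, 0): indices 2, 4, 6, ...
	fmt.Println(nthMatches(2, 0, 4)) // true
	// "-n+3" parses to (-1, 3): only the first three indices.
	fmt.Println(nthMatches(-1, 3, 2), nthMatches(-1, 3, 4)) // true false
}
```

This also explains the a == 0 fast path in parsePseudoclassSelector's caller: when a is zero the selector reduces to a single fixed index, so simpleNthChildSelector can be used instead of the general form.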
diff --git a/vendor/github.com/andybalholm/cascadia/selector.go b/vendor/github.com/andybalholm/cascadia/selector.go
new file mode 100644
index 0000000..9fb05cc
--- /dev/null
+++ b/vendor/github.com/andybalholm/cascadia/selector.go
@@ -0,0 +1,622 @@
+package cascadia
+
+import (
+ "bytes"
+ "fmt"
+ "regexp"
+ "strings"
+
+ "golang.org/x/net/html"
+)
+
+// the Selector type, and functions for creating them
+
+// A Selector is a function which tells whether a node matches or not.
+type Selector func(*html.Node) bool
+
+// hasChildMatch returns whether n has any child that matches a.
+func hasChildMatch(n *html.Node, a Selector) bool {
+ for c := n.FirstChild; c != nil; c = c.NextSibling {
+ if a(c) {
+ return true
+ }
+ }
+ return false
+}
+
+// hasDescendantMatch performs a depth-first search of n's descendants,
+// testing whether any of them match a. It returns true as soon as a match is
+// found, or false if no match is found.
+func hasDescendantMatch(n *html.Node, a Selector) bool {
+ for c := n.FirstChild; c != nil; c = c.NextSibling {
+ if a(c) || (c.Type == html.ElementNode && hasDescendantMatch(c, a)) {
+ return true
+ }
+ }
+ return false
+}
+
+// Compile parses a selector and returns, if successful, a Selector object
+// that can be used to match against html.Node objects.
+func Compile(sel string) (Selector, error) {
+ p := &parser{s: sel}
+ compiled, err := p.parseSelectorGroup()
+ if err != nil {
+ return nil, err
+ }
+
+ if p.i < len(sel) {
+ return nil, fmt.Errorf("parsing %q: %d bytes left over", sel, len(sel)-p.i)
+ }
+
+ return compiled, nil
+}
+
+// MustCompile is like Compile, but panics instead of returning an error.
+func MustCompile(sel string) Selector {
+ compiled, err := Compile(sel)
+ if err != nil {
+ panic(err)
+ }
+ return compiled
+}
+
+// MatchAll returns a slice of the nodes that match the selector,
+// from n and its children.
+func (s Selector) MatchAll(n *html.Node) []*html.Node {
+ return s.matchAllInto(n, nil)
+}
+
+func (s Selector) matchAllInto(n *html.Node, storage []*html.Node) []*html.Node {
+ if s(n) {
+ storage = append(storage, n)
+ }
+
+ for child := n.FirstChild; child != nil; child = child.NextSibling {
+ storage = s.matchAllInto(child, storage)
+ }
+
+ return storage
+}
+
+// Match returns true if the node matches the selector.
+func (s Selector) Match(n *html.Node) bool {
+ return s(n)
+}
+
+// MatchFirst returns the first node that matches s, from n and its children.
+func (s Selector) MatchFirst(n *html.Node) *html.Node {
+ if s.Match(n) {
+ return n
+ }
+
+ for c := n.FirstChild; c != nil; c = c.NextSibling {
+ m := s.MatchFirst(c)
+ if m != nil {
+ return m
+ }
+ }
+ return nil
+}
+
+// Filter returns the nodes in nodes that match the selector.
+func (s Selector) Filter(nodes []*html.Node) (result []*html.Node) {
+ for _, n := range nodes {
+ if s(n) {
+ result = append(result, n)
+ }
+ }
+ return result
+}
+
+// typeSelector returns a Selector that matches elements with a given tag name.
+func typeSelector(tag string) Selector {
+ tag = toLowerASCII(tag)
+ return func(n *html.Node) bool {
+ return n.Type == html.ElementNode && n.Data == tag
+ }
+}
+
+// toLowerASCII returns s with all ASCII capital letters lowercased.
+func toLowerASCII(s string) string {
+ var b []byte
+ for i := 0; i < len(s); i++ {
+ if c := s[i]; 'A' <= c && c <= 'Z' {
+ if b == nil {
+ b = make([]byte, len(s))
+ copy(b, s)
+ }
+ b[i] = s[i] + ('a' - 'A')
+ }
+ }
+
+ if b == nil {
+ return s
+ }
+
+ return string(b)
+}
+
+// attributeSelector returns a Selector that matches elements
+// where the attribute named key satisfies the function f.
+func attributeSelector(key string, f func(string) bool) Selector {
+ key = toLowerASCII(key)
+ return func(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+ for _, a := range n.Attr {
+ if a.Key == key && f(a.Val) {
+ return true
+ }
+ }
+ return false
+ }
+}
+
+// attributeExistsSelector returns a Selector that matches elements that have
+// an attribute named key.
+func attributeExistsSelector(key string) Selector {
+ return attributeSelector(key, func(string) bool { return true })
+}
+
+// attributeEqualsSelector returns a Selector that matches elements where
+// the attribute named key has the value val.
+func attributeEqualsSelector(key, val string) Selector {
+ return attributeSelector(key,
+ func(s string) bool {
+ return s == val
+ })
+}
+
+// attributeNotEqualSelector returns a Selector that matches elements where
+// the attribute named key does not have the value val.
+func attributeNotEqualSelector(key, val string) Selector {
+ key = toLowerASCII(key)
+ return func(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+ for _, a := range n.Attr {
+ if a.Key == key && a.Val == val {
+ return false
+ }
+ }
+ return true
+ }
+}
+
+// attributeIncludesSelector returns a Selector that matches elements where
+// the attribute named key is a whitespace-separated list that includes val.
+func attributeIncludesSelector(key, val string) Selector {
+ return attributeSelector(key,
+ func(s string) bool {
+ for s != "" {
+ i := strings.IndexAny(s, " \t\r\n\f")
+ if i == -1 {
+ return s == val
+ }
+ if s[:i] == val {
+ return true
+ }
+ s = s[i+1:]
+ }
+ return false
+ })
+}
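The value-matching closure inside attributeIncludesSelector implements the CSS [attr~=val] semantics: the attribute value is split on whitespace and val must equal a whole token, not a substring. A standalone sketch of that loop (the name includesWord is illustrative, not part of cascadia):

```go
package main

import (
	"fmt"
	"strings"
)

// includesWord treats attrVal as a whitespace-separated list and
// reports whether val is one of its tokens — the semantics of the
// CSS [attr~=val] selector.
func includesWord(attrVal, val string) bool {
	s := attrVal
	for s != "" {
		i := strings.IndexAny(s, " \t\r\n\f")
		if i == -1 {
			return s == val // last (or only) token
		}
		if s[:i] == val {
			return true
		}
		s = s[i+1:]
	}
	return false
}

func main() {
	fmt.Println(includesWord("btn btn-primary", "btn"))  // true
	fmt.Println(includesWord("btn btn-primary", "prim")) // false: must match a whole token
}
```

Note that an empty val can never match, since the loop only compares non-empty tokens against it; this matches how class selectors behave, where .class compiles to attributeIncludesSelector("class", class).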
+
+// attributeDashmatchSelector returns a Selector that matches elements where
+// the attribute named key equals val or starts with val plus a hyphen.
+func attributeDashmatchSelector(key, val string) Selector {
+ return attributeSelector(key,
+ func(s string) bool {
+ if s == val {
+ return true
+ }
+ if len(s) <= len(val) {
+ return false
+ }
+ if s[:len(val)] == val && s[len(val)] == '-' {
+ return true
+ }
+ return false
+ })
+}
+
+// attributePrefixSelector returns a Selector that matches elements where
+// the attribute named key starts with val.
+func attributePrefixSelector(key, val string) Selector {
+ return attributeSelector(key,
+ func(s string) bool {
+ if strings.TrimSpace(s) == "" {
+ return false
+ }
+ return strings.HasPrefix(s, val)
+ })
+}
+
+// attributeSuffixSelector returns a Selector that matches elements where
+// the attribute named key ends with val.
+func attributeSuffixSelector(key, val string) Selector {
+ return attributeSelector(key,
+ func(s string) bool {
+ if strings.TrimSpace(s) == "" {
+ return false
+ }
+ return strings.HasSuffix(s, val)
+ })
+}
+
+// attributeSubstringSelector returns a Selector that matches nodes where
+// the attribute named key contains val.
+func attributeSubstringSelector(key, val string) Selector {
+ return attributeSelector(key,
+ func(s string) bool {
+ if strings.TrimSpace(s) == "" {
+ return false
+ }
+ return strings.Contains(s, val)
+ })
+}
+
+// attributeRegexSelector returns a Selector that matches nodes where
+// the attribute named key matches the regular expression rx.
+func attributeRegexSelector(key string, rx *regexp.Regexp) Selector {
+ return attributeSelector(key,
+ func(s string) bool {
+ return rx.MatchString(s)
+ })
+}
+
+// intersectionSelector returns a selector that matches nodes that match
+// both a and b.
+func intersectionSelector(a, b Selector) Selector {
+ return func(n *html.Node) bool {
+ return a(n) && b(n)
+ }
+}
+
+// unionSelector returns a selector that matches elements that match
+// either a or b.
+func unionSelector(a, b Selector) Selector {
+ return func(n *html.Node) bool {
+ return a(n) || b(n)
+ }
+}
+
+// negatedSelector returns a selector that matches elements that do not match a.
+func negatedSelector(a Selector) Selector {
+ return func(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+ return !a(n)
+ }
+}
+
+// writeNodeText writes the text contained in n and its descendants to b.
+func writeNodeText(n *html.Node, b *bytes.Buffer) {
+ switch n.Type {
+ case html.TextNode:
+ b.WriteString(n.Data)
+ case html.ElementNode:
+ for c := n.FirstChild; c != nil; c = c.NextSibling {
+ writeNodeText(c, b)
+ }
+ }
+}
+
+// nodeText returns the text contained in n and its descendants.
+func nodeText(n *html.Node) string {
+ var b bytes.Buffer
+ writeNodeText(n, &b)
+ return b.String()
+}
+
+// nodeOwnText returns the contents of the text nodes that are direct
+// children of n.
+func nodeOwnText(n *html.Node) string {
+ var b bytes.Buffer
+ for c := n.FirstChild; c != nil; c = c.NextSibling {
+ if c.Type == html.TextNode {
+ b.WriteString(c.Data)
+ }
+ }
+ return b.String()
+}
+
+// textSubstrSelector returns a selector that matches nodes that
+// contain the given text.
+func textSubstrSelector(val string) Selector {
+ return func(n *html.Node) bool {
+ text := strings.ToLower(nodeText(n))
+ return strings.Contains(text, val)
+ }
+}
+
+// ownTextSubstrSelector returns a selector that matches nodes that
+// directly contain the given text.
+func ownTextSubstrSelector(val string) Selector {
+ return func(n *html.Node) bool {
+ text := strings.ToLower(nodeOwnText(n))
+ return strings.Contains(text, val)
+ }
+}
+
+// textRegexSelector returns a selector that matches nodes whose text matches
+// the specified regular expression.
+func textRegexSelector(rx *regexp.Regexp) Selector {
+ return func(n *html.Node) bool {
+ return rx.MatchString(nodeText(n))
+ }
+}
+
+// ownTextRegexSelector returns a selector that matches nodes whose text
+// directly matches the specified regular expression.
+func ownTextRegexSelector(rx *regexp.Regexp) Selector {
+ return func(n *html.Node) bool {
+ return rx.MatchString(nodeOwnText(n))
+ }
+}
+
+// hasChildSelector returns a selector that matches elements
+// with a child that matches a.
+func hasChildSelector(a Selector) Selector {
+ return func(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+ return hasChildMatch(n, a)
+ }
+}
+
+// hasDescendantSelector returns a selector that matches elements
+// with any descendant that matches a.
+func hasDescendantSelector(a Selector) Selector {
+ return func(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+ return hasDescendantMatch(n, a)
+ }
+}
+
+// nthChildSelector returns a selector that implements :nth-child(an+b).
+// If last is true, implements :nth-last-child instead.
+// If ofType is true, implements :nth-of-type instead.
+func nthChildSelector(a, b int, last, ofType bool) Selector {
+ return func(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+
+ parent := n.Parent
+ if parent == nil {
+ return false
+ }
+
+ if parent.Type == html.DocumentNode {
+ return false
+ }
+
+ i := -1
+ count := 0
+ for c := parent.FirstChild; c != nil; c = c.NextSibling {
+ if (c.Type != html.ElementNode) || (ofType && c.Data != n.Data) {
+ continue
+ }
+ count++
+ if c == n {
+ i = count
+ if !last {
+ break
+ }
+ }
+ }
+
+ if i == -1 {
+ // This shouldn't happen, since n should always be one of its parent's children.
+ return false
+ }
+
+ if last {
+ i = count - i + 1
+ }
+
+ i -= b
+ if a == 0 {
+ return i == 0
+ }
+
+ return i%a == 0 && i/a >= 0
+ }
+}
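The closing arithmetic above is the CSS `an+b` test: after subtracting b from the 1-based sibling index, the result must be a non-negative multiple of a. A standalone sketch of just that check (the `matchesNth` helper name is hypothetical):

```go
package main

import "fmt"

// matchesNth reports whether the 1-based sibling index i satisfies
// an+b for some n >= 0, using the same arithmetic as nthChildSelector.
func matchesNth(a, b, i int) bool {
	i -= b
	if a == 0 {
		return i == 0 // exact position match, e.g. :nth-child(3)
	}
	// i must be a non-negative multiple of a
	return i%a == 0 && i/a >= 0
}

func main() {
	// :nth-child(2n+1) matches odd positions.
	for i := 1; i <= 5; i++ {
		fmt.Println(i, matchesNth(2, 1, i))
	}
}
```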
+
+// simpleNthChildSelector returns a selector that implements :nth-child(b).
+// If ofType is true, implements :nth-of-type instead.
+func simpleNthChildSelector(b int, ofType bool) Selector {
+ return func(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+
+ parent := n.Parent
+ if parent == nil {
+ return false
+ }
+
+ if parent.Type == html.DocumentNode {
+ return false
+ }
+
+ count := 0
+ for c := parent.FirstChild; c != nil; c = c.NextSibling {
+ if c.Type != html.ElementNode || (ofType && c.Data != n.Data) {
+ continue
+ }
+ count++
+ if c == n {
+ return count == b
+ }
+ if count >= b {
+ return false
+ }
+ }
+ return false
+ }
+}
+
+// simpleNthLastChildSelector returns a selector that implements
+// :nth-last-child(b). If ofType is true, implements :nth-last-of-type
+// instead.
+func simpleNthLastChildSelector(b int, ofType bool) Selector {
+ return func(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+
+ parent := n.Parent
+ if parent == nil {
+ return false
+ }
+
+ if parent.Type == html.DocumentNode {
+ return false
+ }
+
+ count := 0
+ for c := parent.LastChild; c != nil; c = c.PrevSibling {
+ if c.Type != html.ElementNode || (ofType && c.Data != n.Data) {
+ continue
+ }
+ count++
+ if c == n {
+ return count == b
+ }
+ if count >= b {
+ return false
+ }
+ }
+ return false
+ }
+}
+
+// onlyChildSelector returns a selector that implements :only-child.
+// If ofType is true, it implements :only-of-type instead.
+func onlyChildSelector(ofType bool) Selector {
+ return func(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+
+ parent := n.Parent
+ if parent == nil {
+ return false
+ }
+
+ if parent.Type == html.DocumentNode {
+ return false
+ }
+
+ count := 0
+ for c := parent.FirstChild; c != nil; c = c.NextSibling {
+ if (c.Type != html.ElementNode) || (ofType && c.Data != n.Data) {
+ continue
+ }
+ count++
+ if count > 1 {
+ return false
+ }
+ }
+
+ return count == 1
+ }
+}
+
+// inputSelector is a Selector that matches input, select, textarea and button elements.
+func inputSelector(n *html.Node) bool {
+ return n.Type == html.ElementNode && (n.Data == "input" || n.Data == "select" || n.Data == "textarea" || n.Data == "button")
+}
+
+// emptyElementSelector is a Selector that matches empty elements.
+func emptyElementSelector(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+
+ for c := n.FirstChild; c != nil; c = c.NextSibling {
+ switch c.Type {
+ case html.ElementNode, html.TextNode:
+ return false
+ }
+ }
+
+ return true
+}
+
+// descendantSelector returns a Selector that matches an element if
+// it matches d and has an ancestor that matches a.
+func descendantSelector(a, d Selector) Selector {
+ return func(n *html.Node) bool {
+ if !d(n) {
+ return false
+ }
+
+ for p := n.Parent; p != nil; p = p.Parent {
+ if a(p) {
+ return true
+ }
+ }
+
+ return false
+ }
+}
+
+// childSelector returns a Selector that matches an element if
+// it matches d and its parent matches a.
+func childSelector(a, d Selector) Selector {
+ return func(n *html.Node) bool {
+ return d(n) && n.Parent != nil && a(n.Parent)
+ }
+}
+
+// siblingSelector returns a Selector that matches an element
+// if it matches s2 and is preceded by an element that matches s1.
+// If adjacent is true, the sibling must be immediately before the element.
+func siblingSelector(s1, s2 Selector, adjacent bool) Selector {
+ return func(n *html.Node) bool {
+ if !s2(n) {
+ return false
+ }
+
+ if adjacent {
+ for n = n.PrevSibling; n != nil; n = n.PrevSibling {
+ if n.Type == html.TextNode || n.Type == html.CommentNode {
+ continue
+ }
+ return s1(n)
+ }
+ return false
+ }
+
+ // Walk backwards looking for an element that matches s1
+ for c := n.PrevSibling; c != nil; c = c.PrevSibling {
+ if s1(c) {
+ return true
+ }
+ }
+
+ return false
+ }
+}
+
+// rootSelector implements :root
+func rootSelector(n *html.Node) bool {
+ if n.Type != html.ElementNode {
+ return false
+ }
+ if n.Parent == nil {
+ return false
+ }
+ return n.Parent.Type == html.DocumentNode
+}
diff --git a/vendor/github.com/golang-collections/go-datastructures/LICENSE b/vendor/github.com/golang-collections/go-datastructures/LICENSE
new file mode 100644
index 0000000..7a4a3ea
--- /dev/null
+++ b/vendor/github.com/golang-collections/go-datastructures/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
\ No newline at end of file
diff --git a/vendor/github.com/golang-collections/go-datastructures/queue/error.go b/vendor/github.com/golang-collections/go-datastructures/queue/error.go
new file mode 100644
index 0000000..29c062c
--- /dev/null
+++ b/vendor/github.com/golang-collections/go-datastructures/queue/error.go
@@ -0,0 +1,21 @@
+/*
+Copyright 2014 Workiva, LLC
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package queue
+
+import "errors"
+
+var disposedError = errors.New(`Queue has been disposed.`)
diff --git a/vendor/github.com/golang-collections/go-datastructures/queue/priority_queue.go b/vendor/github.com/golang-collections/go-datastructures/queue/priority_queue.go
new file mode 100644
index 0000000..3ccfd57
--- /dev/null
+++ b/vendor/github.com/golang-collections/go-datastructures/queue/priority_queue.go
@@ -0,0 +1,235 @@
+/*
+Copyright 2014 Workiva, LLC
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+/*
+The priority queue mirrors the logic used for the regular queue.
+To keep it fast, that code is duplicated here instead of
+converting items to and from interface{}. If Go had inheritance
+and generics, this problem would be easier to solve.
+*/
+package queue
+
+import (
+ "sort"
+ "sync"
+)
+
+// Item is an item that can be added to the priority queue.
+type Item interface {
+ // Compare returns an int that determines ordering in the
+ // priority queue, which is kept in ascending order. Return 1
+ // to indicate this item is greater than the other, 0 to
+ // indicate equality, and -1 to indicate it is less than the
+ // other.
+ Compare(other Item) int
+}
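The Compare contract above means items must be totally ordered: negative for less, zero for equal, positive for greater. A hypothetical item type illustrating the convention (simplified to compare against its own type rather than Item, and checked here via sort):

```go
package main

import (
	"fmt"
	"sort"
)

// task is a hypothetical example type following the Compare
// convention: -1 for less, 0 for equal, 1 for greater.
type task struct{ priority int }

func (t task) Compare(other task) int {
	switch {
	case t.priority < other.priority:
		return -1
	case t.priority > other.priority:
		return 1
	}
	return 0
}

func main() {
	ts := []task{{3}, {1}, {2}}
	// Compare < 0 gives ascending order, matching the queue's ordering.
	sort.Slice(ts, func(i, j int) bool { return ts[i].Compare(ts[j]) < 0 })
	fmt.Println(ts) // [{1} {2} {3}]
}
```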
+
+type priorityItems []Item
+
+func (items *priorityItems) get(number int) []Item {
+ returnItems := make([]Item, 0, number)
+ index := 0
+ for i := 0; i < number; i++ {
+ if i >= len(*items) {
+ break
+ }
+
+ returnItems = append(returnItems, (*items)[i])
+ (*items)[i] = nil
+ index++
+ }
+
+ *items = (*items)[index:]
+ return returnItems
+}
+
+func (items *priorityItems) insert(item Item) {
+ if len(*items) == 0 {
+ *items = append(*items, item)
+ return
+ }
+
+ equalFound := false
+ i := sort.Search(len(*items), func(i int) bool {
+ result := (*items)[i].Compare(item)
+ if result == 0 {
+ equalFound = true
+ }
+ return result >= 0
+ })
+
+ if equalFound {
+ return
+ }
+
+ if i == len(*items) {
+ *items = append(*items, item)
+ return
+ }
+
+ *items = append(*items, nil)
+ copy((*items)[i+1:], (*items)[i:])
+ (*items)[i] = item
+}
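The sort.Search pattern in insert keeps the backing slice ordered on every insertion and drops duplicates. A simplified standalone sketch over ints (the `insertOrdered` name is hypothetical; it checks for a duplicate only at the insertion point, rather than via the equalFound probe above, which is equivalent for a total order):

```go
package main

import (
	"fmt"
	"sort"
)

// insertOrdered inserts v into the ascending slice xs with the same
// search-then-shift pattern as priorityItems.insert, dropping duplicates.
func insertOrdered(xs []int, v int) []int {
	// i is the index of the smallest element >= v.
	i := sort.Search(len(xs), func(i int) bool { return xs[i] >= v })
	if i < len(xs) && xs[i] == v {
		return xs // duplicate: leave the slice unchanged
	}
	xs = append(xs, 0)     // grow by one
	copy(xs[i+1:], xs[i:]) // shift the tail right
	xs[i] = v
	return xs
}

func main() {
	q := []int{}
	for _, v := range []int{5, 1, 3, 3, 2} {
		q = insertOrdered(q, v)
	}
	fmt.Println(q) // [1 2 3 5]
}
```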
+
+// PriorityQueue is similar to queue except that it takes
+// items that implement the Item interface and adds them
+// to the queue in priority order.
+type PriorityQueue struct {
+ waiters waiters
+ items priorityItems
+ lock sync.Mutex
+ disposeLock sync.Mutex
+ disposed bool
+}
+
+// Put adds items to the queue.
+func (pq *PriorityQueue) Put(items ...Item) error {
+ if len(items) == 0 {
+ return nil
+ }
+
+ pq.lock.Lock()
+ if pq.disposed {
+ pq.lock.Unlock()
+ return disposedError
+ }
+
+ for _, item := range items {
+ pq.items.insert(item)
+ }
+
+ for {
+ sema := pq.waiters.get()
+ if sema == nil {
+ break
+ }
+
+ sema.response.Add(1)
+ sema.wg.Done()
+ sema.response.Wait()
+ if len(pq.items) == 0 {
+ break
+ }
+ }
+
+ pq.lock.Unlock()
+ return nil
+}
+
+// Get retrieves items from the queue. If the queue is empty,
+// this call blocks until the next item is added. It returns
+// up to number items.
+func (pq *PriorityQueue) Get(number int) ([]Item, error) {
+ if number < 1 {
+ return nil, nil
+ }
+
+ pq.lock.Lock()
+
+ if pq.disposed {
+ pq.lock.Unlock()
+ return nil, disposedError
+ }
+
+ var items []Item
+
+ if len(pq.items) == 0 {
+ sema := newSema()
+ pq.waiters.put(sema)
+ sema.wg.Add(1)
+ pq.lock.Unlock()
+
+ sema.wg.Wait()
+ pq.disposeLock.Lock()
+ if pq.disposed {
+ pq.disposeLock.Unlock()
+ return nil, disposedError
+ }
+ pq.disposeLock.Unlock()
+
+ items = pq.items.get(number)
+ sema.response.Done()
+ return items, nil
+ }
+
+ items = pq.items.get(number)
+ pq.lock.Unlock()
+ return items, nil
+}
+
+// Peek will look at the next item without removing it from the queue.
+func (pq *PriorityQueue) Peek() Item {
+ pq.lock.Lock()
+ defer pq.lock.Unlock()
+ if len(pq.items) > 0 {
+ return pq.items[0]
+ }
+ return nil
+}
+
+// Empty returns a bool indicating if there are any items left
+// in the queue.
+func (pq *PriorityQueue) Empty() bool {
+ pq.lock.Lock()
+ defer pq.lock.Unlock()
+
+ return len(pq.items) == 0
+}
+
+// Len returns a number indicating how many items are in the queue.
+func (pq *PriorityQueue) Len() int {
+ pq.lock.Lock()
+ defer pq.lock.Unlock()
+
+ return len(pq.items)
+}
+
+// Disposed returns a bool indicating if this queue has been disposed.
+func (pq *PriorityQueue) Disposed() bool {
+ pq.lock.Lock()
+ defer pq.lock.Unlock()
+
+ return pq.disposed
+}
+
+// Dispose will prevent any further reads/writes to this queue
+// and frees available resources.
+func (pq *PriorityQueue) Dispose() {
+ pq.lock.Lock()
+ defer pq.lock.Unlock()
+
+ pq.disposeLock.Lock()
+ defer pq.disposeLock.Unlock()
+
+ pq.disposed = true
+ for _, waiter := range pq.waiters {
+ waiter.response.Add(1)
+ waiter.wg.Done()
+ }
+
+ pq.items = nil
+ pq.waiters = nil
+}
+
+// NewPriorityQueue is the constructor for a priority queue.
+func NewPriorityQueue(hint int) *PriorityQueue {
+ return &PriorityQueue{
+ items: make(priorityItems, 0, hint),
+ }
+}
diff --git a/vendor/github.com/golang-collections/go-datastructures/queue/queue.go b/vendor/github.com/golang-collections/go-datastructures/queue/queue.go
new file mode 100644
index 0000000..856ae3e
--- /dev/null
+++ b/vendor/github.com/golang-collections/go-datastructures/queue/queue.go
@@ -0,0 +1,324 @@
+/*
+Copyright 2014 Workiva, LLC
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+/*
+Package queue includes a regular queue and a priority queue.
+These queues rely on waitgroups to pause listening threads
+on empty queues until a message is received. If any thread
+calls Dispose on the queue, any listeners are immediately returned
+with an error. Any subsequent put to the queue will return an error
+as opposed to panicking as with channels. Queues grow without
+bound, unlike channels, which can be buffered but block while
+a thread attempts to put to a full channel.
+
+Recently added is a lockless ring buffer using the same basic C design as
+found here:
+
+http://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue
+
+Modified for use with Go, with the addition of dispose semantics
+providing the capability to release blocked threads. This works for
+both puts and gets: either returns an error if it is blocked when
+the buffer is disposed. This could serve as a signal to kill a
+goroutine. All thread safety is achieved using CAS operations,
+making this buffer pretty quick.
+
+Benchmarks:
+BenchmarkPriorityQueue-8 2000000 782 ns/op
+BenchmarkQueue-8 2000000 671 ns/op
+BenchmarkChannel-8 1000000 2083 ns/op
+BenchmarkQueuePut-8 20000 84299 ns/op
+BenchmarkQueueGet-8 20000 80753 ns/op
+BenchmarkExecuteInParallel-8 20000 68891 ns/op
+BenchmarkRBLifeCycle-8 10000000 177 ns/op
+BenchmarkRBPut-8 30000000 58.1 ns/op
+BenchmarkRBGet-8 50000000 26.8 ns/op
+
+TODO: We really need a Fibonacci heap for the priority queue.
+TODO: Unify the types of queue to the same interface.
+*/
+package queue
+
+import (
+ "runtime"
+ "sync"
+ "sync/atomic"
+)
+
+type waiters []*sema
+
+func (w *waiters) get() *sema {
+ if len(*w) == 0 {
+ return nil
+ }
+
+ sema := (*w)[0]
+ copy((*w)[0:], (*w)[1:])
+ (*w)[len(*w)-1] = nil // or the zero value of T
+ *w = (*w)[:len(*w)-1]
+ return sema
+}
+
+func (w *waiters) put(sema *sema) {
+ *w = append(*w, sema)
+}
+
+type items []interface{}
+
+func (items *items) get(number int64) []interface{} {
+ returnItems := make([]interface{}, 0, number)
+ index := int64(0)
+ for i := int64(0); i < number; i++ {
+ if i >= int64(len(*items)) {
+ break
+ }
+
+ returnItems = append(returnItems, (*items)[i])
+ (*items)[i] = nil
+ index++
+ }
+
+ *items = (*items)[index:]
+ return returnItems
+}
+
+func (items *items) getUntil(checker func(item interface{}) bool) []interface{} {
+ length := len(*items)
+
+ if len(*items) == 0 {
+ // returning nil here actually wraps that nil in a list
+ // of interfaces... thanks go
+ return []interface{}{}
+ }
+
+ returnItems := make([]interface{}, 0, length)
+ index := 0
+ for i, item := range *items {
+ if !checker(item) {
+ break
+ }
+
+ returnItems = append(returnItems, item)
+ index = i + 1 // step past the matched item so the slice trim below removes it
+ }
+
+ *items = (*items)[index:]
+ return returnItems
+}
+
+type sema struct {
+ wg *sync.WaitGroup
+ response *sync.WaitGroup
+}
+
+func newSema() *sema {
+ return &sema{
+ wg: &sync.WaitGroup{},
+ response: &sync.WaitGroup{},
+ }
+}
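The paired WaitGroups in sema form a rendezvous: the getter blocks on wg until a putter releases it, and the putter then blocks on response until the getter signals it has finished reading the shared items slice. A minimal standalone sketch of that handshake (the `handshake` function is hypothetical, for illustration only):

```go
package main

import (
	"fmt"
	"sync"
)

// handshake demonstrates the two-WaitGroup rendezvous used by
// Put and Get: wg releases the waiter, response acknowledges it.
func handshake() string {
	var wg, response sync.WaitGroup
	wg.Add(1) // the getter registers itself, as Get does before waiting

	got := make(chan string, 1)
	go func() { // the blocked getter
		wg.Wait()       // released by the putter
		got <- "item"   // read while the putter is still paused
		response.Done() // let the putter proceed
	}()

	// The putter side, as in Put after inserting items.
	response.Add(1)
	wg.Done()
	response.Wait()
	return <-got
}

func main() {
	fmt.Println(handshake()) // item
}
```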
+
+// Queue is the struct responsible for tracking the state
+// of the queue.
+type Queue struct {
+ waiters waiters
+ items items
+ lock sync.Mutex
+ disposed bool
+}
+
+// Put will add the specified items to the queue.
+func (q *Queue) Put(items ...interface{}) error {
+ if len(items) == 0 {
+ return nil
+ }
+
+ q.lock.Lock()
+
+ if q.disposed {
+ q.lock.Unlock()
+ return disposedError
+ }
+
+ q.items = append(q.items, items...)
+ for {
+ sema := q.waiters.get()
+ if sema == nil {
+ break
+ }
+ sema.response.Add(1)
+ sema.wg.Done()
+ sema.response.Wait()
+ if len(q.items) == 0 {
+ break
+ }
+ }
+
+ q.lock.Unlock()
+ return nil
+}
+
+// Get removes items from the queue. If there are items in the
+// queue, Get returns UP TO number items. If the queue is empty,
+// this method blocks until items are added.
+func (q *Queue) Get(number int64) ([]interface{}, error) {
+ if number < 1 {
+ // thanks again go
+ return []interface{}{}, nil
+ }
+
+ q.lock.Lock()
+
+ if q.disposed {
+ q.lock.Unlock()
+ return nil, disposedError
+ }
+
+ var items []interface{}
+
+ if len(q.items) == 0 {
+ sema := newSema()
+ q.waiters.put(sema)
+ sema.wg.Add(1)
+ q.lock.Unlock()
+
+ sema.wg.Wait()
+ // we are now inside the put's lock
+ if q.disposed {
+ return nil, disposedError
+ }
+ items = q.items.get(number)
+ sema.response.Done()
+ return items, nil
+ }
+
+ items = q.items.get(number)
+ q.lock.Unlock()
+ return items, nil
+}
+
+// TakeUntil takes a function and returns a list of items that
+// match the checker until the checker returns false. This does not
+// wait if there are no items in the queue.
+func (q *Queue) TakeUntil(checker func(item interface{}) bool) ([]interface{}, error) {
+ if checker == nil {
+ return nil, nil
+ }
+
+ q.lock.Lock()
+
+ if q.disposed {
+ q.lock.Unlock()
+ return nil, disposedError
+ }
+
+ result := q.items.getUntil(checker)
+ q.lock.Unlock()
+ return result, nil
+}
+
+// Empty returns a bool indicating if this queue is empty.
+func (q *Queue) Empty() bool {
+ q.lock.Lock()
+ defer q.lock.Unlock()
+
+ return len(q.items) == 0
+}
+
+// Len returns the number of items in this queue.
+func (q *Queue) Len() int64 {
+ q.lock.Lock()
+ defer q.lock.Unlock()
+
+ return int64(len(q.items))
+}
+
+// Disposed returns a bool indicating if this queue
+// has had disposed called on it.
+func (q *Queue) Disposed() bool {
+ q.lock.Lock()
+ defer q.lock.Unlock()
+
+ return q.disposed
+}
+
+// Dispose will dispose of this queue. Any subsequent
+// calls to Get or Put will return an error.
+func (q *Queue) Dispose() {
+ q.lock.Lock()
+ defer q.lock.Unlock()
+
+ q.disposed = true
+ for _, waiter := range q.waiters {
+ waiter.response.Add(1)
+ waiter.wg.Done()
+ }
+
+ q.items = nil
+ q.waiters = nil
+}
+
+// New is a constructor for a new threadsafe queue.
+func New(hint int64) *Queue {
+ return &Queue{
+ items: make([]interface{}, 0, hint),
+ }
+}
+
+// ExecuteInParallel will (in parallel) call the provided function
+// with each item in the queue until the queue is exhausted. When the queue
+// is exhausted, execution is complete and all worker goroutines will exit.
+// The queue is then disposed and cannot be used again.
+func ExecuteInParallel(q *Queue, fn func(interface{})) {
+ if q == nil {
+ return
+ }
+
+ q.lock.Lock() // so no one touches anything in the middle
+ // of this process
+ todo, done := uint64(len(q.items)), int64(-1)
+ // guard against an empty queue, or we would face an infinite loop
+ if todo == 0 {
+ q.lock.Unlock()
+ return
+ }
+
+ numCPU := 1
+ if runtime.NumCPU() > 1 {
+ numCPU = runtime.NumCPU() - 1
+ }
+
+ var wg sync.WaitGroup
+ wg.Add(numCPU)
+ items := q.items
+
+ for i := 0; i < numCPU; i++ {
+ go func() {
+ for {
+ index := atomic.AddInt64(&done, 1)
+ if index >= int64(todo) {
+ wg.Done()
+ break
+ }
+
+ fn(items[index])
+ items[index] = nil // release the reference so it can be collected
+ }
+ }()
+ }
+ wg.Wait()
+ q.lock.Unlock()
+ q.Dispose()
+}
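The blocking semantics above (Put wakes a waiting Get; Get blocks on an empty queue) are implemented with a semaphore handshake. As a rough, hypothetical sketch of the same surface behavior using channels instead of mutexes and waiters:

```go
package main

import "fmt"

// blockingQueue is a hypothetical channel-backed queue with behavior
// similar to Queue: Put appends items, Get blocks until at least one
// item is available and returns up to max items.
type blockingQueue struct {
	ch chan interface{}
}

func newBlockingQueue(hint int) *blockingQueue {
	return &blockingQueue{ch: make(chan interface{}, hint)}
}

func (q *blockingQueue) Put(items ...interface{}) {
	for _, it := range items {
		q.ch <- it // blocks only if the buffer is full
	}
}

// Get blocks for the first item, then drains up to max buffered items.
func (q *blockingQueue) Get(max int) []interface{} {
	out := []interface{}{<-q.ch}
	for len(out) < max {
		select {
		case it := <-q.ch:
			out = append(out, it)
		default:
			return out // no more buffered items; return what we have
		}
	}
	return out
}

func main() {
	q := newBlockingQueue(8)
	q.Put(1, 2, 3)
	got := q.Get(2) // returns up to 2 items
	fmt.Println(len(got), got[0])
}
```

This is only an illustration of the contract, not the vendored implementation, which additionally supports Dispose and TakeUntil.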
diff --git a/vendor/github.com/golang-collections/go-datastructures/queue/ring.go b/vendor/github.com/golang-collections/go-datastructures/queue/ring.go
new file mode 100644
index 0000000..9c137a9
--- /dev/null
+++ b/vendor/github.com/golang-collections/go-datastructures/queue/ring.go
@@ -0,0 +1,158 @@
+/*
+Copyright 2014 Workiva, LLC
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+package queue
+
+import (
+ "runtime"
+ "sync/atomic"
+)
+
+// roundUp takes a uint64 greater than 0 and rounds it up to the next
+// power of 2.
+func roundUp(v uint64) uint64 {
+ v--
+ v |= v >> 1
+ v |= v >> 2
+ v |= v >> 4
+ v |= v >> 8
+ v |= v >> 16
+ v |= v >> 32
+ v++
+ return v
+}
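The shift-and-or cascade above is a standard bit trick: it smears the highest set bit of v-1 into every lower position, so adding one lands on the next power of 2. A standalone copy for illustration:

```go
package main

import "fmt"

// roundUp copies the bit-smearing trick: propagate the highest set bit
// of v-1 into all lower positions, then add one to reach the next
// power of 2 (values already a power of 2 are returned unchanged).
func roundUp(v uint64) uint64 {
	v--
	v |= v >> 1
	v |= v >> 2
	v |= v >> 4
	v |= v >> 8
	v |= v >> 16
	v |= v >> 32
	v++
	return v
}

func main() {
	fmt.Println(roundUp(1), roundUp(5), roundUp(16), roundUp(17)) // 1 8 16 32
}
```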
+
+type node struct {
+ position uint64
+ data interface{}
+}
+
+type nodes []*node
+
+// RingBuffer is a MPMC buffer that achieves threadsafety with CAS operations
+// only. A put on full or get on empty call will block until an item
+// is put or retrieved. Calling Dispose on the RingBuffer will unblock
+// any blocked threads with an error. This buffer is similar to the buffer
+// described here: http://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue
+// with some minor additions.
+type RingBuffer struct {
+ nodes nodes
+ queue, dequeue, mask, disposed uint64
+}
+
+func (rb *RingBuffer) init(size uint64) {
+ size = roundUp(size)
+ rb.nodes = make(nodes, size)
+ for i := uint64(0); i < size; i++ {
+ rb.nodes[i] = &node{position: i}
+ }
+ rb.mask = size - 1 // so we don't have to do this with every put/get operation
+}
+
+// Put adds the provided item to the queue. If the queue is full, this
+// call will block until an item is added to the queue or Dispose is called
+// on the queue. An error will be returned if the queue is disposed.
+func (rb *RingBuffer) Put(item interface{}) error {
+ var n *node
+ pos := atomic.LoadUint64(&rb.queue)
+L:
+ for {
+ if atomic.LoadUint64(&rb.disposed) == 1 {
+ return disposedError
+ }
+
+ n = rb.nodes[pos&rb.mask]
+ seq := atomic.LoadUint64(&n.position)
+ switch dif := int64(seq) - int64(pos); {
+ case dif == 0:
+ if atomic.CompareAndSwapUint64(&rb.queue, pos, pos+1) {
+ break L
+ }
+ case dif < 0:
+ panic(`Ring buffer in a compromised state during a put operation.`)
+ default:
+ pos = atomic.LoadUint64(&rb.queue)
+ }
+ runtime.Gosched() // free up the cpu before the next iteration
+ }
+
+ n.data = item
+ atomic.StoreUint64(&n.position, pos+1)
+ return nil
+}
+
+// Get will return the next item in the queue. This call will block
+// if the queue is empty. This call will unblock when an item is added
+// to the queue or Dispose is called on the queue. An error will be returned
+// if the queue is disposed.
+func (rb *RingBuffer) Get() (interface{}, error) {
+ var n *node
+ pos := atomic.LoadUint64(&rb.dequeue)
+L:
+ for {
+ if atomic.LoadUint64(&rb.disposed) == 1 {
+ return nil, disposedError
+ }
+
+ n = rb.nodes[pos&rb.mask]
+ seq := atomic.LoadUint64(&n.position)
+ switch dif := int64(seq) - int64(pos+1); {
+ case dif == 0:
+ if atomic.CompareAndSwapUint64(&rb.dequeue, pos, pos+1) {
+ break L
+ }
+ case dif < 0:
+ panic(`Ring buffer in compromised state during a get operation.`)
+ default:
+ pos = atomic.LoadUint64(&rb.dequeue)
+ }
+ runtime.Gosched() // free up cpu before next iteration
+ }
+ data := n.data
+ n.data = nil
+ atomic.StoreUint64(&n.position, pos+rb.mask+1)
+ return data, nil
+}
+
+// Len returns the number of items in the queue.
+func (rb *RingBuffer) Len() uint64 {
+ return atomic.LoadUint64(&rb.queue) - atomic.LoadUint64(&rb.dequeue)
+}
+
+// Cap returns the capacity of this ring buffer.
+func (rb *RingBuffer) Cap() uint64 {
+ return uint64(len(rb.nodes))
+}
+
+// Dispose will dispose of this queue and free any blocked threads
+// in the Put and/or Get methods. Calling those methods on a disposed
+// queue will return an error.
+func (rb *RingBuffer) Dispose() {
+ atomic.CompareAndSwapUint64(&rb.disposed, 0, 1)
+}
+
+// IsDisposed will return a bool indicating if this queue has been
+// disposed.
+func (rb *RingBuffer) IsDisposed() bool {
+ return atomic.LoadUint64(&rb.disposed) == 1
+}
+
+// NewRingBuffer will allocate, initialize, and return a ring buffer
+// with the specified size.
+func NewRingBuffer(size uint64) *RingBuffer {
+ rb := &RingBuffer{}
+ rb.init(size)
+ return rb
+}
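Rounding the size to a power of 2 is what makes the `pos & rb.mask` indexing in Put and Get valid: with a power-of-2 size, masking by size-1 is the same as taking pos modulo size, without a division. A minimal demonstration of the wraparound:

```go
package main

import "fmt"

// slot maps a monotonically increasing position onto a ring of
// power-of-2 size: pos&(size-1) equals pos%size for such sizes.
func slot(pos, size uint64) uint64 {
	return pos & (size - 1)
}

func main() {
	// Positions keep growing forever; slots wrap around the 8-entry ring.
	for _, pos := range []uint64{0, 7, 8, 9, 25} {
		fmt.Println(slot(pos, 8))
	}
}
```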
diff --git a/vendor/github.com/google/uuid/CONTRIBUTING.md b/vendor/github.com/google/uuid/CONTRIBUTING.md
new file mode 100644
index 0000000..04fdf09
--- /dev/null
+++ b/vendor/github.com/google/uuid/CONTRIBUTING.md
@@ -0,0 +1,10 @@
+# How to contribute
+
+We definitely welcome patches and contribution to this project!
+
+### Legal requirements
+
+In order to protect both you and ourselves, you will need to sign the
+[Contributor License Agreement](https://cla.developers.google.com/clas).
+
+You may have already signed it for other Google projects.
diff --git a/vendor/github.com/google/uuid/CONTRIBUTORS b/vendor/github.com/google/uuid/CONTRIBUTORS
new file mode 100644
index 0000000..b4bb97f
--- /dev/null
+++ b/vendor/github.com/google/uuid/CONTRIBUTORS
@@ -0,0 +1,9 @@
+Paul Borman
+bmatsuo
+shawnps
+theory
+jboverfelt
+dsymonds
+cd1
+wallclockbuilder
+dansouza
diff --git a/vendor/github.com/google/uuid/LICENSE b/vendor/github.com/google/uuid/LICENSE
new file mode 100644
index 0000000..5dc6826
--- /dev/null
+++ b/vendor/github.com/google/uuid/LICENSE
@@ -0,0 +1,27 @@
+Copyright (c) 2009,2014 Google Inc. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/google/uuid/README.md b/vendor/github.com/google/uuid/README.md
new file mode 100644
index 0000000..9d92c11
--- /dev/null
+++ b/vendor/github.com/google/uuid/README.md
@@ -0,0 +1,19 @@
+# uuid 
+The uuid package generates and inspects UUIDs based on
+[RFC 4122](http://tools.ietf.org/html/rfc4122)
+and DCE 1.1: Authentication and Security Services.
+
+This package is based on the github.com/pborman/uuid package (previously named
+code.google.com/p/go-uuid). It differs from these earlier packages in that
+a UUID is a 16 byte array rather than a byte slice. One loss due to this
+change is the ability to represent an invalid UUID (vs a NIL UUID).
+
+###### Install
+`go get github.com/google/uuid`
+
+###### Documentation
+[![GoDoc](https://godoc.org/github.com/google/uuid?status.svg)](http://godoc.org/github.com/google/uuid)
+
+Full `go doc` style documentation for the package can be viewed online without
+installing this package by using the GoDoc site here:
+http://godoc.org/github.com/google/uuid
diff --git a/vendor/github.com/google/uuid/dce.go b/vendor/github.com/google/uuid/dce.go
new file mode 100644
index 0000000..fa820b9
--- /dev/null
+++ b/vendor/github.com/google/uuid/dce.go
@@ -0,0 +1,80 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package uuid
+
+import (
+ "encoding/binary"
+ "fmt"
+ "os"
+)
+
+// A Domain represents a Version 2 domain
+type Domain byte
+
+// Domain constants for DCE Security (Version 2) UUIDs.
+const (
+ Person = Domain(0)
+ Group = Domain(1)
+ Org = Domain(2)
+)
+
+// NewDCESecurity returns a DCE Security (Version 2) UUID.
+//
+// The domain should be one of Person, Group or Org.
+// On a POSIX system the id should be the user's UID for the Person
+// domain and the user's GID for the Group domain. The meaning of id for
+// the domain Org or on non-POSIX systems is site defined.
+//
+// For a given domain/id pair the same token may be returned for up to
+// 7 minutes and 10 seconds.
+func NewDCESecurity(domain Domain, id uint32) (UUID, error) {
+ uuid, err := NewUUID()
+ if err == nil {
+ uuid[6] = (uuid[6] & 0x0f) | 0x20 // Version 2
+ uuid[9] = byte(domain)
+ binary.BigEndian.PutUint32(uuid[0:], id)
+ }
+ return uuid, err
+}
+
+// NewDCEPerson returns a DCE Security (Version 2) UUID in the person
+// domain with the id returned by os.Getuid.
+//
+// NewDCESecurity(Person, uint32(os.Getuid()))
+func NewDCEPerson() (UUID, error) {
+ return NewDCESecurity(Person, uint32(os.Getuid()))
+}
+
+// NewDCEGroup returns a DCE Security (Version 2) UUID in the group
+// domain with the id returned by os.Getgid.
+//
+// NewDCESecurity(Group, uint32(os.Getgid()))
+func NewDCEGroup() (UUID, error) {
+ return NewDCESecurity(Group, uint32(os.Getgid()))
+}
+
+// Domain returns the domain for a Version 2 UUID. Domains are only defined
+// for Version 2 UUIDs.
+func (uuid UUID) Domain() Domain {
+ return Domain(uuid[9])
+}
+
+// ID returns the id for a Version 2 UUID. IDs are only defined for Version 2
+// UUIDs.
+func (uuid UUID) ID() uint32 {
+ return binary.BigEndian.Uint32(uuid[0:4])
+}
+
+func (d Domain) String() string {
+ switch d {
+ case Person:
+ return "Person"
+ case Group:
+ return "Group"
+ case Org:
+ return "Org"
+ }
+ return fmt.Sprintf("Domain%d", int(d))
+}
diff --git a/vendor/github.com/google/uuid/doc.go b/vendor/github.com/google/uuid/doc.go
new file mode 100644
index 0000000..5b8a4b9
--- /dev/null
+++ b/vendor/github.com/google/uuid/doc.go
@@ -0,0 +1,12 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package uuid generates and inspects UUIDs.
+//
+// UUIDs are based on RFC 4122 and DCE 1.1: Authentication and Security
+// Services.
+//
+// A UUID is a 16 byte (128 bit) array. UUIDs may be used as keys to
+// maps or compared directly.
+package uuid
diff --git a/vendor/github.com/google/uuid/go.mod b/vendor/github.com/google/uuid/go.mod
new file mode 100644
index 0000000..fc84cd7
--- /dev/null
+++ b/vendor/github.com/google/uuid/go.mod
@@ -0,0 +1 @@
+module github.com/google/uuid
diff --git a/vendor/github.com/google/uuid/hash.go b/vendor/github.com/google/uuid/hash.go
new file mode 100644
index 0000000..b174616
--- /dev/null
+++ b/vendor/github.com/google/uuid/hash.go
@@ -0,0 +1,53 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package uuid
+
+import (
+ "crypto/md5"
+ "crypto/sha1"
+ "hash"
+)
+
+// Well known namespace IDs and UUIDs
+var (
+ NameSpaceDNS = Must(Parse("6ba7b810-9dad-11d1-80b4-00c04fd430c8"))
+ NameSpaceURL = Must(Parse("6ba7b811-9dad-11d1-80b4-00c04fd430c8"))
+ NameSpaceOID = Must(Parse("6ba7b812-9dad-11d1-80b4-00c04fd430c8"))
+ NameSpaceX500 = Must(Parse("6ba7b814-9dad-11d1-80b4-00c04fd430c8"))
+ Nil UUID // empty UUID, all zeros
+)
+
+// NewHash returns a new UUID derived from the hash of space concatenated with
+// data generated by h. The hash should be at least 16 bytes in length. The
+// first 16 bytes of the hash are used to form the UUID. The version of the
+// UUID will be the lower 4 bits of version. NewHash is used to implement
+// NewMD5 and NewSHA1.
+func NewHash(h hash.Hash, space UUID, data []byte, version int) UUID {
+ h.Reset()
+ h.Write(space[:])
+ h.Write(data)
+ s := h.Sum(nil)
+ var uuid UUID
+ copy(uuid[:], s)
+ uuid[6] = (uuid[6] & 0x0f) | uint8((version&0xf)<<4)
+ uuid[8] = (uuid[8] & 0x3f) | 0x80 // RFC 4122 variant
+ return uuid
+}
+
+// NewMD5 returns a new MD5 (Version 3) UUID based on the
+// supplied name space and data. It is the same as calling:
+//
+// NewHash(md5.New(), space, data, 3)
+func NewMD5(space UUID, data []byte) UUID {
+ return NewHash(md5.New(), space, data, 3)
+}
+
+// NewSHA1 returns a new SHA1 (Version 5) UUID based on the
+// supplied name space and data. It is the same as calling:
+//
+// NewHash(sha1.New(), space, data, 5)
+func NewSHA1(space UUID, data []byte) UUID {
+ return NewHash(sha1.New(), space, data, 5)
+}
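The byte surgery in NewHash (truncate the hash to 16 bytes, stamp the version into the high nibble of byte 6 and the RFC 4122 variant into byte 8) can be reproduced with the standard library alone. A hypothetical standalone sketch of a Version 5 (SHA1, name-based) UUID, mirroring NewSHA1:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// newV5 mirrors NewHash/NewSHA1 above using only the standard library:
// hash the namespace then the name, keep the first 16 bytes, and stamp
// the version and variant bits.
func newV5(space [16]byte, name string) [16]byte {
	h := sha1.New()
	h.Write(space[:])      // namespace first,
	h.Write([]byte(name))  // then the name
	sum := h.Sum(nil)

	var uuid [16]byte
	copy(uuid[:], sum)                // first 16 of the 20 SHA1 bytes
	uuid[6] = (uuid[6] & 0x0f) | 0x50 // version 5 in the high nibble
	uuid[8] = (uuid[8] & 0x3f) | 0x80 // RFC 4122 variant
	return uuid
}

func main() {
	// NameSpaceDNS from RFC 4122: 6ba7b810-9dad-11d1-80b4-00c04fd430c8
	raw, _ := hex.DecodeString("6ba7b8109dad11d180b400c04fd430c8")
	var dns [16]byte
	copy(dns[:], raw)

	u := newV5(dns, "example.com")
	fmt.Printf("%x version=%d\n", u, u[6]>>4)
}
```

The result is deterministic: the same namespace and name always produce the same UUID, which is the point of the name-based versions.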
diff --git a/vendor/github.com/google/uuid/marshal.go b/vendor/github.com/google/uuid/marshal.go
new file mode 100644
index 0000000..7f9e0c6
--- /dev/null
+++ b/vendor/github.com/google/uuid/marshal.go
@@ -0,0 +1,37 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package uuid
+
+import "fmt"
+
+// MarshalText implements encoding.TextMarshaler.
+func (uuid UUID) MarshalText() ([]byte, error) {
+ var js [36]byte
+ encodeHex(js[:], uuid)
+ return js[:], nil
+}
+
+// UnmarshalText implements encoding.TextUnmarshaler.
+func (uuid *UUID) UnmarshalText(data []byte) error {
+ id, err := ParseBytes(data)
+ if err == nil {
+ *uuid = id
+ }
+ return err
+}
+
+// MarshalBinary implements encoding.BinaryMarshaler.
+func (uuid UUID) MarshalBinary() ([]byte, error) {
+ return uuid[:], nil
+}
+
+// UnmarshalBinary implements encoding.BinaryUnmarshaler.
+func (uuid *UUID) UnmarshalBinary(data []byte) error {
+ if len(data) != 16 {
+ return fmt.Errorf("invalid UUID (got %d bytes)", len(data))
+ }
+ copy(uuid[:], data)
+ return nil
+}
diff --git a/vendor/github.com/google/uuid/node.go b/vendor/github.com/google/uuid/node.go
new file mode 100644
index 0000000..d651a2b
--- /dev/null
+++ b/vendor/github.com/google/uuid/node.go
@@ -0,0 +1,90 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package uuid
+
+import (
+ "sync"
+)
+
+var (
+ nodeMu sync.Mutex
+ ifname string // name of interface being used
+ nodeID [6]byte // hardware for version 1 UUIDs
+ zeroID [6]byte // nodeID with only 0's
+)
+
+// NodeInterface returns the name of the interface from which the NodeID was
+// derived. The interface "user" is returned if the NodeID was set by
+// SetNodeID.
+func NodeInterface() string {
+ defer nodeMu.Unlock()
+ nodeMu.Lock()
+ return ifname
+}
+
+// SetNodeInterface selects the hardware address to be used for Version 1 UUIDs.
+// If name is "" then the first usable interface found will be used or a random
+// Node ID will be generated. If a named interface cannot be found then false
+// is returned.
+//
+// SetNodeInterface never fails when name is "".
+func SetNodeInterface(name string) bool {
+ defer nodeMu.Unlock()
+ nodeMu.Lock()
+ return setNodeInterface(name)
+}
+
+func setNodeInterface(name string) bool {
+ iname, addr := getHardwareInterface(name) // null implementation for js
+ if iname != "" && addr != nil {
+ ifname = iname
+ copy(nodeID[:], addr)
+ return true
+ }
+
+ // We found no interfaces with a valid hardware address. If name
+ // does not specify a specific interface, generate a random Node ID
+ // (section 4.1.6)
+ if name == "" {
+ ifname = "random"
+ randomBits(nodeID[:])
+ return true
+ }
+ return false
+}
+
+// NodeID returns a slice of a copy of the current Node ID, setting the Node ID
+// if not already set.
+func NodeID() []byte {
+ defer nodeMu.Unlock()
+ nodeMu.Lock()
+ if nodeID == zeroID {
+ setNodeInterface("")
+ }
+ nid := nodeID
+ return nid[:]
+}
+
+// SetNodeID sets the Node ID to be used for Version 1 UUIDs. The first 6 bytes
+// of id are used. If id is less than 6 bytes then false is returned and the
+// Node ID is not set.
+func SetNodeID(id []byte) bool {
+ if len(id) < 6 {
+ return false
+ }
+ defer nodeMu.Unlock()
+ nodeMu.Lock()
+ copy(nodeID[:], id)
+ ifname = "user"
+ return true
+}
+
+// NodeID returns the 6 byte node id encoded in uuid. It returns nil if uuid is
+// not valid. The NodeID is only well defined for version 1 and 2 UUIDs.
+func (uuid UUID) NodeID() []byte {
+ var node [6]byte
+ copy(node[:], uuid[10:])
+ return node[:]
+}
diff --git a/vendor/github.com/google/uuid/node_js.go b/vendor/github.com/google/uuid/node_js.go
new file mode 100644
index 0000000..24b78ed
--- /dev/null
+++ b/vendor/github.com/google/uuid/node_js.go
@@ -0,0 +1,12 @@
+// Copyright 2017 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build js
+
+package uuid
+
+// getHardwareInterface returns nil values for the JS version of the code.
+// This removes the "net" dependency, because it is not used in the browser.
+// Using the "net" library inflates the size of the transpiled JS code by 673k bytes.
+func getHardwareInterface(name string) (string, []byte) { return "", nil }
diff --git a/vendor/github.com/google/uuid/node_net.go b/vendor/github.com/google/uuid/node_net.go
new file mode 100644
index 0000000..0cbbcdd
--- /dev/null
+++ b/vendor/github.com/google/uuid/node_net.go
@@ -0,0 +1,33 @@
+// Copyright 2017 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build !js
+
+package uuid
+
+import "net"
+
+var interfaces []net.Interface // cached list of interfaces
+
+// getHardwareInterface returns the name and hardware address of interface name.
+// If name is "" then the name and hardware address of one of the system's
+// interfaces is returned. If no interfaces are found (name does not exist or
+// there are no interfaces) then "", nil is returned.
+//
+// Only addresses of at least 6 bytes are returned.
+func getHardwareInterface(name string) (string, []byte) {
+ if interfaces == nil {
+ var err error
+ interfaces, err = net.Interfaces()
+ if err != nil {
+ return "", nil
+ }
+ }
+ for _, ifs := range interfaces {
+ if len(ifs.HardwareAddr) >= 6 && (name == "" || name == ifs.Name) {
+ return ifs.Name, ifs.HardwareAddr
+ }
+ }
+ return "", nil
+}
diff --git a/vendor/github.com/google/uuid/sql.go b/vendor/github.com/google/uuid/sql.go
new file mode 100644
index 0000000..f326b54
--- /dev/null
+++ b/vendor/github.com/google/uuid/sql.go
@@ -0,0 +1,59 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package uuid
+
+import (
+ "database/sql/driver"
+ "fmt"
+)
+
+// Scan implements sql.Scanner so UUIDs can be read from databases transparently.
+// Currently, database types that map to string and []byte are supported. Please
+// consult database-specific driver documentation for matching types.
+func (uuid *UUID) Scan(src interface{}) error {
+ switch src := src.(type) {
+ case nil:
+ return nil
+
+ case string:
+ // if an empty UUID comes from a table, we return a null UUID
+ if src == "" {
+ return nil
+ }
+
+ // see Parse for required string format
+ u, err := Parse(src)
+ if err != nil {
+ return fmt.Errorf("Scan: %v", err)
+ }
+
+ *uuid = u
+
+ case []byte:
+ // if an empty UUID comes from a table, we return a null UUID
+ if len(src) == 0 {
+ return nil
+ }
+
+ // assumes a simple slice of bytes if 16 bytes
+ // otherwise attempts to parse
+ if len(src) != 16 {
+ return uuid.Scan(string(src))
+ }
+ copy((*uuid)[:], src)
+
+ default:
+ return fmt.Errorf("Scan: unable to scan type %T into UUID", src)
+ }
+
+ return nil
+}
+
+// Value implements sql.Valuer so that UUIDs can be written to databases
+// transparently. Currently, UUIDs map to strings. Please consult
+// database-specific driver documentation for matching types.
+func (uuid UUID) Value() (driver.Value, error) {
+ return uuid.String(), nil
+}
diff --git a/vendor/github.com/google/uuid/time.go b/vendor/github.com/google/uuid/time.go
new file mode 100644
index 0000000..e6ef06c
--- /dev/null
+++ b/vendor/github.com/google/uuid/time.go
@@ -0,0 +1,123 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package uuid
+
+import (
+ "encoding/binary"
+ "sync"
+ "time"
+)
+
+// A Time represents a time as the number of 100-nanosecond intervals since
+// 15 Oct 1582.
+type Time int64
+
+const (
+ lillian = 2299160 // Julian day of 15 Oct 1582
+ unix = 2440587 // Julian day of 1 Jan 1970
+ epoch = unix - lillian // Days between epochs
+ g1582 = epoch * 86400 // seconds between epochs
+ g1582ns100 = g1582 * 10000000 // 100-nanosecond intervals between epochs
+)
+
+var (
+ timeMu sync.Mutex
+ lasttime uint64 // last time we returned
+ clockSeq uint16 // clock sequence for this run
+
+ timeNow = time.Now // for testing
+)
+
+// UnixTime converts t to the number of seconds and nanoseconds since the
+// Unix epoch of 1 Jan 1970.
+func (t Time) UnixTime() (sec, nsec int64) {
+ sec = int64(t - g1582ns100)
+ nsec = (sec % 10000000) * 100
+ sec /= 10000000
+ return sec, nsec
+}
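The epoch constants above chain together: 141427 days between the Gregorian and Unix epochs, times 86400 seconds, times 10^7 hundred-nanosecond intervals per second. A small sketch verifying the arithmetic and the UnixTime split (a Time equal to g1582ns100 is exactly the Unix epoch):

```go
package main

import "fmt"

func main() {
	const (
		lillian    = 2299160         // Julian day of 15 Oct 1582
		unix       = 2440587         // Julian day of 1 Jan 1970
		epoch      = unix - lillian  // days between epochs
		g1582      = epoch * 86400   // seconds between epochs
		g1582ns100 = g1582 * 10000000
	)
	// Split a Time into Unix seconds and nanoseconds as UnixTime does.
	t := int64(g1582ns100) // the Unix epoch expressed as a uuid.Time
	sec := t - g1582ns100
	nsec := (sec % 10000000) * 100
	sec /= 10000000
	fmt.Println(g1582ns100, sec, nsec) // 122192928000000000 0 0
}
```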
+
+// GetTime returns the current Time (in 100-nanosecond intervals since 15 Oct 1582) and
+// clock sequence as well as adjusting the clock sequence as needed. An error
+// is returned if the current time cannot be determined.
+func GetTime() (Time, uint16, error) {
+ defer timeMu.Unlock()
+ timeMu.Lock()
+ return getTime()
+}
+
+func getTime() (Time, uint16, error) {
+ t := timeNow()
+
+ // If we don't have a clock sequence already, set one.
+ if clockSeq == 0 {
+ setClockSequence(-1)
+ }
+ now := uint64(t.UnixNano()/100) + g1582ns100
+
+ // If time has gone backwards with this clock sequence then we
+ // increment the clock sequence
+ if now <= lasttime {
+ clockSeq = ((clockSeq + 1) & 0x3fff) | 0x8000
+ }
+ lasttime = now
+ return Time(now), clockSeq, nil
+}
+
+// ClockSequence returns the current clock sequence, generating one if not
+// already set. The clock sequence is only used for Version 1 UUIDs.
+//
+// The uuid package does not use global static storage for the clock sequence or
+// the last time a UUID was generated. Unless SetClockSequence is used, a new
+// random clock sequence is generated the first time a clock sequence is
+// requested by ClockSequence, GetTime, or NewUUID. (section 4.2.1.1)
+func ClockSequence() int {
+ defer timeMu.Unlock()
+ timeMu.Lock()
+ return clockSequence()
+}
+
+func clockSequence() int {
+ if clockSeq == 0 {
+ setClockSequence(-1)
+ }
+ return int(clockSeq & 0x3fff)
+}
+
+// SetClockSequence sets the clock sequence to the lower 14 bits of seq. Setting to
+// -1 causes a new sequence to be generated.
+func SetClockSequence(seq int) {
+ defer timeMu.Unlock()
+ timeMu.Lock()
+ setClockSequence(seq)
+}
+
+func setClockSequence(seq int) {
+ if seq == -1 {
+ var b [2]byte
+ randomBits(b[:]) // clock sequence
+ seq = int(b[0])<<8 | int(b[1])
+ }
+ oldSeq := clockSeq
+ clockSeq = uint16(seq&0x3fff) | 0x8000 // Set our variant
+ if oldSeq != clockSeq {
+ lasttime = 0
+ }
+}
+
+// Time returns the time, in 100-nanosecond intervals since 15 Oct 1582,
+// encoded in uuid. The time is only defined for version 1 and 2 UUIDs.
+func (uuid UUID) Time() Time {
+ time := int64(binary.BigEndian.Uint32(uuid[0:4]))
+ time |= int64(binary.BigEndian.Uint16(uuid[4:6])) << 32
+ time |= int64(binary.BigEndian.Uint16(uuid[6:8])&0xfff) << 48
+ return Time(time)
+}
+
+// ClockSequence returns the clock sequence encoded in uuid.
+// The clock sequence is only well defined for version 1 and 2 UUIDs.
+func (uuid UUID) ClockSequence() int {
+ return int(binary.BigEndian.Uint16(uuid[8:10])) & 0x3fff
+}
diff --git a/vendor/github.com/google/uuid/util.go b/vendor/github.com/google/uuid/util.go
new file mode 100644
index 0000000..5ea6c73
--- /dev/null
+++ b/vendor/github.com/google/uuid/util.go
@@ -0,0 +1,43 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package uuid
+
+import (
+ "io"
+)
+
+// randomBits completely fills slice b with random data.
+func randomBits(b []byte) {
+ if _, err := io.ReadFull(rander, b); err != nil {
+ panic(err.Error()) // rand should never fail
+ }
+}
+
+// xvalues returns the value of a byte as a hexadecimal digit or 255.
+var xvalues = [256]byte{
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 255, 255, 255, 255, 255, 255,
+ 255, 10, 11, 12, 13, 14, 15, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 10, 11, 12, 13, 14, 15, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+ 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255,
+}
+
+// xtob converts hex characters x1 and x2 into a byte.
+func xtob(x1, x2 byte) (byte, bool) {
+ b1 := xvalues[x1]
+ b2 := xvalues[x2]
+ return (b1 << 4) | b2, b1 != 255 && b2 != 255
+}
diff --git a/vendor/github.com/google/uuid/uuid.go b/vendor/github.com/google/uuid/uuid.go
new file mode 100644
index 0000000..524404c
--- /dev/null
+++ b/vendor/github.com/google/uuid/uuid.go
@@ -0,0 +1,245 @@
+// Copyright 2018 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package uuid
+
+import (
+ "bytes"
+ "crypto/rand"
+ "encoding/hex"
+ "errors"
+ "fmt"
+ "io"
+ "strings"
+)
+
+// A UUID is a 128 bit (16 byte) Universally Unique IDentifier as defined in
+// RFC 4122.
+type UUID [16]byte
+
+// A Version represents a UUID's version.
+type Version byte
+
+// A Variant represents a UUID's variant.
+type Variant byte
+
+// Constants returned by Variant.
+const (
+ Invalid = Variant(iota) // Invalid UUID
+ RFC4122 // The variant specified in RFC4122
+ Reserved // Reserved, NCS backward compatibility.
+ Microsoft // Reserved, Microsoft Corporation backward compatibility.
+ Future // Reserved for future definition.
+)
+
+var rander = rand.Reader // random function
+
+// Parse decodes s into a UUID or returns an error. Both the standard UUID
+// forms of xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx and
+// urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx are decoded as well as the
+// Microsoft encoding {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx} and the raw hex
+// encoding: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.
+func Parse(s string) (UUID, error) {
+ var uuid UUID
+ switch len(s) {
+ // xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ case 36:
+
+ // urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ case 36 + 9:
+ if strings.ToLower(s[:9]) != "urn:uuid:" {
+ return uuid, fmt.Errorf("invalid urn prefix: %q", s[:9])
+ }
+ s = s[9:]
+
+ // {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
+ case 36 + 2:
+ s = s[1:]
+
+ // xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+ case 32:
+ var ok bool
+ for i := range uuid {
+ uuid[i], ok = xtob(s[i*2], s[i*2+1])
+ if !ok {
+ return uuid, errors.New("invalid UUID format")
+ }
+ }
+ return uuid, nil
+ default:
+ return uuid, fmt.Errorf("invalid UUID length: %d", len(s))
+ }
+ // s is now at least 36 bytes long
+ // it must be of the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ if s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' {
+ return uuid, errors.New("invalid UUID format")
+ }
+ for i, x := range [16]int{
+ 0, 2, 4, 6,
+ 9, 11,
+ 14, 16,
+ 19, 21,
+ 24, 26, 28, 30, 32, 34} {
+ v, ok := xtob(s[x], s[x+1])
+ if !ok {
+ return uuid, errors.New("invalid UUID format")
+ }
+ uuid[i] = v
+ }
+ return uuid, nil
+}
+
+// ParseBytes is like Parse, except it parses a byte slice instead of a string.
+func ParseBytes(b []byte) (UUID, error) {
+ var uuid UUID
+ switch len(b) {
+ case 36: // xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ case 36 + 9: // urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ if !bytes.Equal(bytes.ToLower(b[:9]), []byte("urn:uuid:")) {
+ return uuid, fmt.Errorf("invalid urn prefix: %q", b[:9])
+ }
+ b = b[9:]
+ case 36 + 2: // {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
+ b = b[1:]
+ case 32: // xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+ var ok bool
+ for i := 0; i < 32; i += 2 {
+ uuid[i/2], ok = xtob(b[i], b[i+1])
+ if !ok {
+ return uuid, errors.New("invalid UUID format")
+ }
+ }
+ return uuid, nil
+ default:
+ return uuid, fmt.Errorf("invalid UUID length: %d", len(b))
+ }
+	// b is now at least 36 bytes long
+	// it must be of the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+ if b[8] != '-' || b[13] != '-' || b[18] != '-' || b[23] != '-' {
+ return uuid, errors.New("invalid UUID format")
+ }
+ for i, x := range [16]int{
+ 0, 2, 4, 6,
+ 9, 11,
+ 14, 16,
+ 19, 21,
+ 24, 26, 28, 30, 32, 34} {
+ v, ok := xtob(b[x], b[x+1])
+ if !ok {
+ return uuid, errors.New("invalid UUID format")
+ }
+ uuid[i] = v
+ }
+ return uuid, nil
+}
+
+// MustParse is like Parse but panics if the string cannot be parsed.
+// It simplifies safe initialization of global variables holding compiled UUIDs.
+func MustParse(s string) UUID {
+ uuid, err := Parse(s)
+ if err != nil {
+ panic(`uuid: Parse(` + s + `): ` + err.Error())
+ }
+ return uuid
+}
+
+// FromBytes creates a new UUID from a byte slice. Returns an error if the slice
+// does not have a length of 16. The bytes are copied from the slice.
+func FromBytes(b []byte) (uuid UUID, err error) {
+ err = uuid.UnmarshalBinary(b)
+ return uuid, err
+}
+
+// Must returns uuid if err is nil and panics otherwise.
+func Must(uuid UUID, err error) UUID {
+ if err != nil {
+ panic(err)
+ }
+ return uuid
+}
+
+// String returns the string form of uuid, xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx,
+// or "" if uuid is invalid.
+func (uuid UUID) String() string {
+ var buf [36]byte
+ encodeHex(buf[:], uuid)
+ return string(buf[:])
+}
+
+// URN returns the RFC 2141 URN form of uuid,
+// urn:uuid:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, or "" if uuid is invalid.
+func (uuid UUID) URN() string {
+ var buf [36 + 9]byte
+ copy(buf[:], "urn:uuid:")
+ encodeHex(buf[9:], uuid)
+ return string(buf[:])
+}
+
+func encodeHex(dst []byte, uuid UUID) {
+ hex.Encode(dst, uuid[:4])
+ dst[8] = '-'
+ hex.Encode(dst[9:13], uuid[4:6])
+ dst[13] = '-'
+ hex.Encode(dst[14:18], uuid[6:8])
+ dst[18] = '-'
+ hex.Encode(dst[19:23], uuid[8:10])
+ dst[23] = '-'
+ hex.Encode(dst[24:], uuid[10:])
+}
+
+// Variant returns the variant encoded in uuid.
+func (uuid UUID) Variant() Variant {
+ switch {
+ case (uuid[8] & 0xc0) == 0x80:
+ return RFC4122
+ case (uuid[8] & 0xe0) == 0xc0:
+ return Microsoft
+ case (uuid[8] & 0xe0) == 0xe0:
+ return Future
+ default:
+ return Reserved
+ }
+}
+
+// Version returns the version of uuid.
+func (uuid UUID) Version() Version {
+ return Version(uuid[6] >> 4)
+}
+
+func (v Version) String() string {
+ if v > 15 {
+ return fmt.Sprintf("BAD_VERSION_%d", v)
+ }
+ return fmt.Sprintf("VERSION_%d", v)
+}
+
+func (v Variant) String() string {
+ switch v {
+ case RFC4122:
+ return "RFC4122"
+ case Reserved:
+ return "Reserved"
+ case Microsoft:
+ return "Microsoft"
+ case Future:
+ return "Future"
+ case Invalid:
+ return "Invalid"
+ }
+ return fmt.Sprintf("BadVariant%d", int(v))
+}
+
+// SetRand sets the random number generator to r, which implements io.Reader.
+// If r.Read returns an error when the package requests random data then
+// a panic will be issued.
+//
+// Calling SetRand with nil sets the random number generator to the default
+// generator.
+func SetRand(r io.Reader) {
+ if r == nil {
+ rander = rand.Reader
+ return
+ }
+ rander = r
+}
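The offset table in Parse maps each of the 16 output bytes to the index of its first hex digit, skipping the dashes at positions 8, 13, 18, and 23. A stdlib-only sketch of that inner loop, assuming nothing beyond what the function above shows (`hexNibble` and `parse36` are illustrative names standing in for the package's unexported `xtob` helper and the 36-character branch of Parse):

```go
package main

import (
	"errors"
	"fmt"
)

// hexNibble converts one hex character to its 4-bit value; it plays the
// role of the library's unexported xtob helper (name assumed here).
func hexNibble(c byte) (byte, bool) {
	switch {
	case '0' <= c && c <= '9':
		return c - '0', true
	case 'a' <= c && c <= 'f':
		return c - 'a' + 10, true
	case 'A' <= c && c <= 'F':
		return c - 'A' + 10, true
	}
	return 0, false
}

// parse36 decodes the canonical xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx form
// using the same fixed byte offsets as the Parse function above.
func parse36(s string) ([16]byte, error) {
	var u [16]byte
	if len(s) != 36 || s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' {
		return u, errors.New("invalid UUID format")
	}
	for i, x := range [16]int{0, 2, 4, 6, 9, 11, 14, 16, 19, 21, 24, 26, 28, 30, 32, 34} {
		hi, ok1 := hexNibble(s[x])
		lo, ok2 := hexNibble(s[x+1])
		if !ok1 || !ok2 {
			return u, errors.New("invalid UUID format")
		}
		u[i] = hi<<4 | lo
	}
	return u, nil
}

func main() {
	u, err := parse36("f47ac10b-58cc-0372-8567-0e02b2c3d479")
	fmt.Println(u, err)
}
```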
diff --git a/vendor/github.com/google/uuid/version1.go b/vendor/github.com/google/uuid/version1.go
new file mode 100644
index 0000000..199a1ac
--- /dev/null
+++ b/vendor/github.com/google/uuid/version1.go
@@ -0,0 +1,44 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package uuid
+
+import (
+ "encoding/binary"
+)
+
+// NewUUID returns a Version 1 UUID based on the current NodeID and clock
+// sequence, and the current time. If the NodeID has not been set by SetNodeID
+// or SetNodeInterface then it will be set automatically. If the NodeID cannot
+// be set, NewUUID returns nil. If the clock sequence has not been set by
+// SetClockSequence then it will be set automatically. If GetTime fails to
+// return the current time, NewUUID returns nil and an error.
+//
+// In most cases, New should be used.
+func NewUUID() (UUID, error) {
+ nodeMu.Lock()
+ if nodeID == zeroID {
+ setNodeInterface("")
+ }
+ nodeMu.Unlock()
+
+ var uuid UUID
+ now, seq, err := GetTime()
+ if err != nil {
+ return uuid, err
+ }
+
+ timeLow := uint32(now & 0xffffffff)
+ timeMid := uint16((now >> 32) & 0xffff)
+ timeHi := uint16((now >> 48) & 0x0fff)
+ timeHi |= 0x1000 // Version 1
+
+ binary.BigEndian.PutUint32(uuid[0:], timeLow)
+ binary.BigEndian.PutUint16(uuid[4:], timeMid)
+ binary.BigEndian.PutUint16(uuid[6:], timeHi)
+ binary.BigEndian.PutUint16(uuid[8:], seq)
+ copy(uuid[10:], nodeID[:])
+
+ return uuid, nil
+}
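NewUUID spreads the 60-bit timestamp across the time_low, time_mid, and time_hi fields and ORs the version into the top nibble of time_hi. A stdlib-only sketch of just that packing step, factored into a pure function for clarity (`packTimeV1` is an illustrative name, not part of the package):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// packTimeV1 lays out a 60-bit UUID timestamp, clock sequence, and node ID
// exactly as NewUUID does, stamping version 1 into the high nibble of byte 6.
func packTimeV1(now uint64, seq uint16, node [6]byte) [16]byte {
	var uuid [16]byte
	timeLow := uint32(now & 0xffffffff)
	timeMid := uint16((now >> 32) & 0xffff)
	timeHi := uint16((now>>48)&0x0fff) | 0x1000 // top nibble = version 1

	binary.BigEndian.PutUint32(uuid[0:], timeLow)
	binary.BigEndian.PutUint16(uuid[4:], timeMid)
	binary.BigEndian.PutUint16(uuid[6:], timeHi)
	binary.BigEndian.PutUint16(uuid[8:], seq)
	copy(uuid[10:], node[:])
	return uuid
}

func main() {
	u := packTimeV1(0x1EC9414C232AB00, 0x8fff, [6]byte{1, 2, 3, 4, 5, 6})
	fmt.Printf("version=%d\n", u[6]>>4)
}
```

Because the timestamp's top 12 bits land in bytes 6–7 alongside the version nibble, the resulting UUID's Version() is always 1 regardless of the input time.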
diff --git a/vendor/github.com/google/uuid/version4.go b/vendor/github.com/google/uuid/version4.go
new file mode 100644
index 0000000..84af91c
--- /dev/null
+++ b/vendor/github.com/google/uuid/version4.go
@@ -0,0 +1,38 @@
+// Copyright 2016 Google Inc. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package uuid
+
+import "io"
+
+// New creates a new random UUID or panics. New is equivalent to
+// the expression
+//
+// uuid.Must(uuid.NewRandom())
+func New() UUID {
+ return Must(NewRandom())
+}
+
+// NewRandom returns a Random (Version 4) UUID.
+//
+// The strength of the UUIDs is based on the strength of the crypto/rand
+// package.
+//
+// A note about uniqueness derived from the UUID Wikipedia entry:
+//
+// Randomly generated UUIDs have 122 random bits. One's annual risk of being
+//  hit by a meteorite is estimated to be one chance in 17 billion, that
+//  means the probability is about 0.00000000006 (6 × 10⁻¹¹),
+//  equivalent to the odds of creating a few tens of trillions of UUIDs in a
+//  year and having one duplicate.
+func NewRandom() (UUID, error) {
+ var uuid UUID
+ _, err := io.ReadFull(rander, uuid[:])
+ if err != nil {
+ return Nil, err
+ }
+ uuid[6] = (uuid[6] & 0x0f) | 0x40 // Version 4
+ uuid[8] = (uuid[8] & 0x3f) | 0x80 // Variant is 10
+ return uuid, nil
+}
diff --git a/vendor/github.com/gorilla/feeds/AUTHORS b/vendor/github.com/gorilla/feeds/AUTHORS
new file mode 100644
index 0000000..2c28cf9
--- /dev/null
+++ b/vendor/github.com/gorilla/feeds/AUTHORS
@@ -0,0 +1,29 @@
+# This is the official list of gorilla/feeds authors for copyright purposes.
+# Please keep the list sorted.
+
+Dmitry Chestnykh
+Eddie Scholtz
+Gabriel Simmer
+Google LLC (https://opensource.google.com/)
+honky
+James Gregory
+Jason Hall
+Jason Moiron
+Kamil Kisiel
+Kevin Stock
+Markus Zimmermann
+Matt Silverlock
+Matthew Dawson
+Milan Aleksic
+Milan Aleksić
+nlimpid
+Paul Petring
+Sean Enck
+Sue Spence
+Supermighty
+Toru Fukui
+Vabd
+Volker
+ZhiFeng Hu
+weberc2
+
diff --git a/vendor/github.com/gorilla/feeds/LICENSE b/vendor/github.com/gorilla/feeds/LICENSE
new file mode 100644
index 0000000..e24412d
--- /dev/null
+++ b/vendor/github.com/gorilla/feeds/LICENSE
@@ -0,0 +1,22 @@
+Copyright (c) 2013-2018 The Gorilla Feeds Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+
+ Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/gorilla/feeds/README.md b/vendor/github.com/gorilla/feeds/README.md
new file mode 100644
index 0000000..4d733cf
--- /dev/null
+++ b/vendor/github.com/gorilla/feeds/README.md
@@ -0,0 +1,185 @@
+## gorilla/feeds
+[![GoDoc](https://godoc.org/github.com/gorilla/feeds?status.svg)](https://godoc.org/github.com/gorilla/feeds)
+[![Build Status](https://travis-ci.org/gorilla/feeds.svg?branch=master)](https://travis-ci.org/gorilla/feeds)
+
+feeds is a web feed generator library for producing RSS, Atom, and JSON Feed output from Go
+applications.
+
+### Goals
+
+ * Provide a simple interface to create both Atom & RSS 2.0 feeds
+ * Full support for [Atom][atom], [RSS 2.0][rss], and [JSON Feed Version 1][jsonfeed] spec elements
+ * Ability to modify particulars for each spec
+
+[atom]: https://tools.ietf.org/html/rfc4287
+[rss]: http://www.rssboard.org/rss-specification
+[jsonfeed]: https://jsonfeed.org/version/1
+
+### Usage
+
+```go
+package main
+
+import (
+ "fmt"
+ "log"
+ "time"
+ "github.com/gorilla/feeds"
+)
+
+func main() {
+ now := time.Now()
+ feed := &feeds.Feed{
+ Title: "jmoiron.net blog",
+ Link: &feeds.Link{Href: "http://jmoiron.net/blog"},
+ Description: "discussion about tech, footie, photos",
+ Author: &feeds.Author{Name: "Jason Moiron", Email: "jmoiron@jmoiron.net"},
+ Created: now,
+ }
+
+ feed.Items = []*feeds.Item{
+ &feeds.Item{
+ Title: "Limiting Concurrency in Go",
+ Link: &feeds.Link{Href: "http://jmoiron.net/blog/limiting-concurrency-in-go/"},
+ Description: "A discussion on controlled parallelism in golang",
+ Author: &feeds.Author{Name: "Jason Moiron", Email: "jmoiron@jmoiron.net"},
+ Created: now,
+ },
+ &feeds.Item{
+ Title: "Logic-less Template Redux",
+ Link: &feeds.Link{Href: "http://jmoiron.net/blog/logicless-template-redux/"},
+ Description: "More thoughts on logicless templates",
+ Created: now,
+ },
+ &feeds.Item{
+ Title: "Idiomatic Code Reuse in Go",
+ Link: &feeds.Link{Href: "http://jmoiron.net/blog/idiomatic-code-reuse-in-go/"},
+			Description: "How to use interfaces <em>effectively</em>",
+ Created: now,
+ },
+ }
+
+ atom, err := feed.ToAtom()
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ rss, err := feed.ToRss()
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ json, err := feed.ToJSON()
+ if err != nil {
+ log.Fatal(err)
+ }
+
+ fmt.Println(atom, "\n", rss, "\n", json)
+}
+```
+
+Outputs:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<feed xmlns="http://www.w3.org/2005/Atom">
+  <title>jmoiron.net blog</title>
+  <link href="http://jmoiron.net/blog"></link>
+  <id>http://jmoiron.net/blog</id>
+  <updated>2013-01-16T03:26:01-05:00</updated>
+  <subtitle>discussion about tech, footie, photos</subtitle>
+  <entry>
+    <title>Limiting Concurrency in Go</title>
+    <link href="http://jmoiron.net/blog/limiting-concurrency-in-go/"></link>
+    <updated>2013-01-16T03:26:01-05:00</updated>
+    <id>tag:jmoiron.net,2013-01-16:/blog/limiting-concurrency-in-go/</id>
+    <summary type="html">A discussion on controlled parallelism in golang</summary>
+    <author>
+      <name>Jason Moiron</name>
+      <email>jmoiron@jmoiron.net</email>
+    </author>
+  </entry>
+  <entry>
+    <title>Logic-less Template Redux</title>
+    <link href="http://jmoiron.net/blog/logicless-template-redux/"></link>
+    <updated>2013-01-16T03:26:01-05:00</updated>
+    <id>tag:jmoiron.net,2013-01-16:/blog/logicless-template-redux/</id>
+    <summary type="html">More thoughts on logicless templates</summary>
+  </entry>
+  <entry>
+    <title>Idiomatic Code Reuse in Go</title>
+    <link href="http://jmoiron.net/blog/idiomatic-code-reuse-in-go/"></link>
+    <updated>2013-01-16T03:26:01-05:00</updated>
+    <id>tag:jmoiron.net,2013-01-16:/blog/idiomatic-code-reuse-in-go/</id>
+    <summary type="html">How to use interfaces &lt;em&gt;effectively&lt;/em&gt;</summary>
+  </entry>
+</feed>
+
+<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
+  <channel>
+    <title>jmoiron.net blog</title>
+    <link>http://jmoiron.net/blog</link>
+    <description>discussion about tech, footie, photos</description>
+    <managingEditor>jmoiron@jmoiron.net (Jason Moiron)</managingEditor>
+    <pubDate>2013-01-16T03:22:24-05:00</pubDate>
+    <item>
+      <title>Limiting Concurrency in Go</title>
+      <link>http://jmoiron.net/blog/limiting-concurrency-in-go/</link>
+      <description>A discussion on controlled parallelism in golang</description>
+      <pubDate>2013-01-16T03:22:24-05:00</pubDate>
+    </item>
+    <item>
+      <title>Logic-less Template Redux</title>
+      <link>http://jmoiron.net/blog/logicless-template-redux/</link>
+      <description>More thoughts on logicless templates</description>
+      <pubDate>2013-01-16T03:22:24-05:00</pubDate>
+    </item>
+    <item>
+      <title>Idiomatic Code Reuse in Go</title>
+      <link>http://jmoiron.net/blog/idiomatic-code-reuse-in-go/</link>
+      <description>How to use interfaces &lt;em&gt;effectively&lt;/em&gt;</description>
+      <pubDate>2013-01-16T03:22:24-05:00</pubDate>
+    </item>
+  </channel>
+</rss>
+
+{
+ "version": "https://jsonfeed.org/version/1",
+ "title": "jmoiron.net blog",
+ "home_page_url": "http://jmoiron.net/blog",
+ "description": "discussion about tech, footie, photos",
+ "author": {
+ "name": "Jason Moiron"
+ },
+ "items": [
+ {
+ "id": "",
+ "url": "http://jmoiron.net/blog/limiting-concurrency-in-go/",
+ "title": "Limiting Concurrency in Go",
+ "summary": "A discussion on controlled parallelism in golang",
+ "date_published": "2013-01-16T03:22:24.530817846-05:00",
+ "author": {
+ "name": "Jason Moiron"
+ }
+ },
+ {
+ "id": "",
+ "url": "http://jmoiron.net/blog/logicless-template-redux/",
+ "title": "Logic-less Template Redux",
+ "summary": "More thoughts on logicless templates",
+ "date_published": "2013-01-16T03:22:24.530817846-05:00"
+ },
+ {
+ "id": "",
+ "url": "http://jmoiron.net/blog/idiomatic-code-reuse-in-go/",
+ "title": "Idiomatic Code Reuse in Go",
+ "summary": "How to use interfaces \u003cem\u003eeffectively\u003c/em\u003e",
+ "date_published": "2013-01-16T03:22:24.530817846-05:00"
+ }
+ ]
+}
+```
+
diff --git a/vendor/github.com/gorilla/feeds/atom.go b/vendor/github.com/gorilla/feeds/atom.go
new file mode 100644
index 0000000..5f483fe
--- /dev/null
+++ b/vendor/github.com/gorilla/feeds/atom.go
@@ -0,0 +1,169 @@
+package feeds
+
+import (
+ "encoding/xml"
+ "fmt"
+ "net/url"
+ "time"
+)
+
+// Generates Atom feed as XML
+
+const ns = "http://www.w3.org/2005/Atom"
+
+type AtomPerson struct {
+ Name string `xml:"name,omitempty"`
+ Uri string `xml:"uri,omitempty"`
+ Email string `xml:"email,omitempty"`
+}
+
+type AtomSummary struct {
+ XMLName xml.Name `xml:"summary"`
+ Content string `xml:",chardata"`
+ Type string `xml:"type,attr"`
+}
+
+type AtomContent struct {
+ XMLName xml.Name `xml:"content"`
+ Content string `xml:",chardata"`
+ Type string `xml:"type,attr"`
+}
+
+type AtomAuthor struct {
+ XMLName xml.Name `xml:"author"`
+ AtomPerson
+}
+
+type AtomContributor struct {
+ XMLName xml.Name `xml:"contributor"`
+ AtomPerson
+}
+
+type AtomEntry struct {
+ XMLName xml.Name `xml:"entry"`
+ Xmlns string `xml:"xmlns,attr,omitempty"`
+ Title string `xml:"title"` // required
+ Updated string `xml:"updated"` // required
+ Id string `xml:"id"` // required
+ Category string `xml:"category,omitempty"`
+ Content *AtomContent
+ Rights string `xml:"rights,omitempty"`
+ Source string `xml:"source,omitempty"`
+ Published string `xml:"published,omitempty"`
+ Contributor *AtomContributor
+ Links []AtomLink // required if no child 'content' elements
+ Summary *AtomSummary // required if content has src or content is base64
+ Author *AtomAuthor // required if feed lacks an author
+}
+
+// Multiple links with different rel can coexist
+type AtomLink struct {
+ //Atom 1.0
+ XMLName xml.Name `xml:"link"`
+ Href string `xml:"href,attr"`
+ Rel string `xml:"rel,attr,omitempty"`
+ Type string `xml:"type,attr,omitempty"`
+ Length string `xml:"length,attr,omitempty"`
+}
+
+type AtomFeed struct {
+ XMLName xml.Name `xml:"feed"`
+ Xmlns string `xml:"xmlns,attr"`
+ Title string `xml:"title"` // required
+ Id string `xml:"id"` // required
+ Updated string `xml:"updated"` // required
+ Category string `xml:"category,omitempty"`
+ Icon string `xml:"icon,omitempty"`
+ Logo string `xml:"logo,omitempty"`
+ Rights string `xml:"rights,omitempty"` // copyright used
+ Subtitle string `xml:"subtitle,omitempty"`
+ Link *AtomLink
+ Author *AtomAuthor `xml:"author,omitempty"`
+ Contributor *AtomContributor
+ Entries []*AtomEntry
+}
+
+type Atom struct {
+ *Feed
+}
+
+func newAtomEntry(i *Item) *AtomEntry {
+ id := i.Id
+ // assume the description is html
+ s := &AtomSummary{Content: i.Description, Type: "html"}
+
+ if len(id) == 0 {
+ // if there's no id set, try to create one, either from data or just a uuid
+ if len(i.Link.Href) > 0 && (!i.Created.IsZero() || !i.Updated.IsZero()) {
+ dateStr := anyTimeFormat("2006-01-02", i.Updated, i.Created)
+ host, path := i.Link.Href, "/invalid.html"
+ if url, err := url.Parse(i.Link.Href); err == nil {
+ host, path = url.Host, url.Path
+ }
+ id = fmt.Sprintf("tag:%s,%s:%s", host, dateStr, path)
+ } else {
+ id = "urn:uuid:" + NewUUID().String()
+ }
+ }
+ var name, email string
+ if i.Author != nil {
+ name, email = i.Author.Name, i.Author.Email
+ }
+
+ link_rel := i.Link.Rel
+ if link_rel == "" {
+ link_rel = "alternate"
+ }
+ x := &AtomEntry{
+ Title: i.Title,
+ Links: []AtomLink{{Href: i.Link.Href, Rel: link_rel, Type: i.Link.Type}},
+ Id: id,
+ Updated: anyTimeFormat(time.RFC3339, i.Updated, i.Created),
+ Summary: s,
+ }
+
+ // if there's a content, assume it's html
+ if len(i.Content) > 0 {
+ x.Content = &AtomContent{Content: i.Content, Type: "html"}
+ }
+
+ if i.Enclosure != nil && link_rel != "enclosure" {
+ x.Links = append(x.Links, AtomLink{Href: i.Enclosure.Url, Rel: "enclosure", Type: i.Enclosure.Type, Length: i.Enclosure.Length})
+ }
+
+ if len(name) > 0 || len(email) > 0 {
+ x.Author = &AtomAuthor{AtomPerson: AtomPerson{Name: name, Email: email}}
+ }
+ return x
+}
+
+// create a new AtomFeed with a generic Feed struct's data
+func (a *Atom) AtomFeed() *AtomFeed {
+ updated := anyTimeFormat(time.RFC3339, a.Updated, a.Created)
+ feed := &AtomFeed{
+ Xmlns: ns,
+ Title: a.Title,
+ Link: &AtomLink{Href: a.Link.Href, Rel: a.Link.Rel},
+ Subtitle: a.Description,
+ Id: a.Link.Href,
+ Updated: updated,
+ Rights: a.Copyright,
+ }
+ if a.Author != nil {
+ feed.Author = &AtomAuthor{AtomPerson: AtomPerson{Name: a.Author.Name, Email: a.Author.Email}}
+ }
+ for _, e := range a.Items {
+ feed.Entries = append(feed.Entries, newAtomEntry(e))
+ }
+ return feed
+}
+
+// return an XML-Ready object for an Atom object
+func (a *Atom) FeedXml() interface{} {
+ return a.AtomFeed()
+}
+
+// return an XML-ready object for an AtomFeed object
+func (a *AtomFeed) FeedXml() interface{} {
+ return a
+}
diff --git a/vendor/github.com/gorilla/feeds/doc.go b/vendor/github.com/gorilla/feeds/doc.go
new file mode 100644
index 0000000..4e0759c
--- /dev/null
+++ b/vendor/github.com/gorilla/feeds/doc.go
@@ -0,0 +1,73 @@
+/*
+Syndication (feed) generator library for golang.
+
+Installing
+
+ go get github.com/gorilla/feeds
+
+Feeds provides a simple, generic Feed interface with a generic Item object as well as RSS, Atom and JSON Feed specific RssFeed, AtomFeed and JSONFeed objects which allow access to all of each spec's defined elements.
+
+Examples
+
+Create a Feed and some Items in that feed using the generic interfaces:
+
+ import (
+ "time"
+ . "github.com/gorilla/feeds"
+ )
+
+    now := time.Now()
+
+ feed := &Feed{
+ Title: "jmoiron.net blog",
+ Link: &Link{Href: "http://jmoiron.net/blog"},
+ Description: "discussion about tech, footie, photos",
+ Author: &Author{Name: "Jason Moiron", Email: "jmoiron@jmoiron.net"},
+ Created: now,
+ Copyright: "This work is copyright © Benjamin Button",
+ }
+
+ feed.Items = []*Item{
+ &Item{
+ Title: "Limiting Concurrency in Go",
+ Link: &Link{Href: "http://jmoiron.net/blog/limiting-concurrency-in-go/"},
+ Description: "A discussion on controlled parallelism in golang",
+ Author: &Author{Name: "Jason Moiron", Email: "jmoiron@jmoiron.net"},
+ Created: now,
+ },
+ &Item{
+ Title: "Logic-less Template Redux",
+ Link: &Link{Href: "http://jmoiron.net/blog/logicless-template-redux/"},
+ Description: "More thoughts on logicless templates",
+ Created: now,
+ },
+ &Item{
+ Title: "Idiomatic Code Reuse in Go",
+ Link: &Link{Href: "http://jmoiron.net/blog/idiomatic-code-reuse-in-go/"},
+            Description: "How to use interfaces <em>effectively</em>",
+ Created: now,
+ },
+ }
+
+From here, you can output Atom, RSS, or JSON Feed versions of this feed easily
+
+ atom, err := feed.ToAtom()
+ rss, err := feed.ToRss()
+ json, err := feed.ToJSON()
+
+You can also get access to the underlying objects that feeds uses to export its XML
+
+ atomFeed := (&Atom{Feed: feed}).AtomFeed()
+ rssFeed := (&Rss{Feed: feed}).RssFeed()
+ jsonFeed := (&JSON{Feed: feed}).JSONFeed()
+
+From here, you can modify or add each syndication's specific fields before outputting
+
+ atomFeed.Subtitle = "plays the blues"
+ atom, err := ToXML(atomFeed)
+ rssFeed.Generator = "gorilla/feeds v1.0 (github.com/gorilla/feeds)"
+ rss, err := ToXML(rssFeed)
+ jsonFeed.NextUrl = "https://www.example.com/feed.json?page=2"
+ json, err := jsonFeed.ToJSON()
+*/
+package feeds
diff --git a/vendor/github.com/gorilla/feeds/feed.go b/vendor/github.com/gorilla/feeds/feed.go
new file mode 100644
index 0000000..3474289
--- /dev/null
+++ b/vendor/github.com/gorilla/feeds/feed.go
@@ -0,0 +1,145 @@
+package feeds
+
+import (
+ "encoding/json"
+ "encoding/xml"
+ "io"
+ "sort"
+ "time"
+)
+
+type Link struct {
+ Href, Rel, Type, Length string
+}
+
+type Author struct {
+ Name, Email string
+}
+
+type Image struct {
+ Url, Title, Link string
+ Width, Height int
+}
+
+type Enclosure struct {
+ Url, Length, Type string
+}
+
+type Item struct {
+ Title string
+ Link *Link
+ Source *Link
+ Author *Author
+ Description string // used as description in rss, summary in atom
+ Id string // used as guid in rss, id in atom
+ Updated time.Time
+ Created time.Time
+ Enclosure *Enclosure
+ Content string
+}
+
+type Feed struct {
+ Title string
+ Link *Link
+ Description string
+ Author *Author
+ Updated time.Time
+ Created time.Time
+ Id string
+ Subtitle string
+ Items []*Item
+ Copyright string
+ Image *Image
+}
+
+// add a new Item to a Feed
+func (f *Feed) Add(item *Item) {
+ f.Items = append(f.Items, item)
+}
+
+// returns the first non-zero time formatted as a string or ""
+func anyTimeFormat(format string, times ...time.Time) string {
+ for _, t := range times {
+ if !t.IsZero() {
+ return t.Format(format)
+ }
+ }
+ return ""
+}
+
+// interface used by ToXML to get an object suitable for exporting XML.
+type XmlFeed interface {
+ FeedXml() interface{}
+}
+
+// turn a feed object (either a Feed, AtomFeed, or RssFeed) into xml
+// returns an error if xml marshaling fails
+func ToXML(feed XmlFeed) (string, error) {
+ x := feed.FeedXml()
+ data, err := xml.MarshalIndent(x, "", " ")
+ if err != nil {
+ return "", err
+ }
+ // strip empty line from default xml header
+ s := xml.Header[:len(xml.Header)-1] + string(data)
+ return s, nil
+}
+
+// Write a feed object (either a Feed, AtomFeed, or RssFeed) as XML into
+// the writer. Returns an error if XML marshaling fails.
+func WriteXML(feed XmlFeed, w io.Writer) error {
+ x := feed.FeedXml()
+ // write default xml header, without the newline
+ if _, err := w.Write([]byte(xml.Header[:len(xml.Header)-1])); err != nil {
+ return err
+ }
+ e := xml.NewEncoder(w)
+ e.Indent("", " ")
+ return e.Encode(x)
+}
+
+// creates an Atom representation of this feed
+func (f *Feed) ToAtom() (string, error) {
+ a := &Atom{f}
+ return ToXML(a)
+}
+
+// Writes an Atom representation of this feed to the writer.
+func (f *Feed) WriteAtom(w io.Writer) error {
+ return WriteXML(&Atom{f}, w)
+}
+
+// creates an Rss representation of this feed
+func (f *Feed) ToRss() (string, error) {
+ r := &Rss{f}
+ return ToXML(r)
+}
+
+// Writes an RSS representation of this feed to the writer.
+func (f *Feed) WriteRss(w io.Writer) error {
+ return WriteXML(&Rss{f}, w)
+}
+
+// ToJSON creates a JSON Feed representation of this feed
+func (f *Feed) ToJSON() (string, error) {
+ j := &JSON{f}
+ return j.ToJSON()
+}
+
+// WriteJSON writes a JSON representation of this feed to the writer.
+func (f *Feed) WriteJSON(w io.Writer) error {
+ j := &JSON{f}
+ feed := j.JSONFeed()
+
+ e := json.NewEncoder(w)
+ e.SetIndent("", " ")
+ return e.Encode(feed)
+}
+
+// Sort sorts the Items in the feed with the given less function.
+func (f *Feed) Sort(less func(a, b *Item) bool) {
+ lessFunc := func(i, j int) bool {
+ return less(f.Items[i], f.Items[j])
+ }
+ sort.SliceStable(f.Items, lessFunc)
+}
diff --git a/vendor/github.com/gorilla/feeds/json.go b/vendor/github.com/gorilla/feeds/json.go
new file mode 100644
index 0000000..75a82fd
--- /dev/null
+++ b/vendor/github.com/gorilla/feeds/json.go
@@ -0,0 +1,183 @@
+package feeds
+
+import (
+ "encoding/json"
+ "strings"
+ "time"
+)
+
+const jsonFeedVersion = "https://jsonfeed.org/version/1"
+
+// JSONAuthor represents the author of the feed or of an individual item
+// in the feed
+type JSONAuthor struct {
+ Name string `json:"name,omitempty"`
+ Url string `json:"url,omitempty"`
+ Avatar string `json:"avatar,omitempty"`
+}
+
+// JSONAttachment represents a related resource. Podcasts, for instance, would
+// include an attachment that’s an audio or video file.
+type JSONAttachment struct {
+ Url string `json:"url,omitempty"`
+ MIMEType string `json:"mime_type,omitempty"`
+ Title string `json:"title,omitempty"`
+ Size int32 `json:"size,omitempty"`
+ Duration time.Duration `json:"duration_in_seconds,omitempty"`
+}
+
+// MarshalJSON implements the json.Marshaler interface.
+// The Duration field is marshaled in seconds, all other fields are marshaled
+// based upon the definitions in struct tags.
+func (a *JSONAttachment) MarshalJSON() ([]byte, error) {
+ type EmbeddedJSONAttachment JSONAttachment
+ return json.Marshal(&struct {
+ Duration float64 `json:"duration_in_seconds,omitempty"`
+ *EmbeddedJSONAttachment
+ }{
+ EmbeddedJSONAttachment: (*EmbeddedJSONAttachment)(a),
+ Duration: a.Duration.Seconds(),
+ })
+}
+
+// UnmarshalJSON implements the json.Unmarshaler interface.
+// The Duration field is expected to be in seconds, all other field types
+// match the struct definition.
+func (a *JSONAttachment) UnmarshalJSON(data []byte) error {
+ type EmbeddedJSONAttachment JSONAttachment
+ var raw struct {
+ Duration float64 `json:"duration_in_seconds,omitempty"`
+ *EmbeddedJSONAttachment
+ }
+ raw.EmbeddedJSONAttachment = (*EmbeddedJSONAttachment)(a)
+
+ err := json.Unmarshal(data, &raw)
+ if err != nil {
+ return err
+ }
+
+ if raw.Duration > 0 {
+ nsec := int64(raw.Duration * float64(time.Second))
+ raw.EmbeddedJSONAttachment.Duration = time.Duration(nsec)
+ }
+
+ return nil
+}
+
+// JSONItem represents a single entry/post for the feed.
+type JSONItem struct {
+ Id string `json:"id"`
+ Url string `json:"url,omitempty"`
+ ExternalUrl string `json:"external_url,omitempty"`
+ Title string `json:"title,omitempty"`
+ ContentHTML string `json:"content_html,omitempty"`
+ ContentText string `json:"content_text,omitempty"`
+ Summary string `json:"summary,omitempty"`
+ Image string `json:"image,omitempty"`
+	BannerImage string           `json:"banner_image,omitempty"`
+ PublishedDate *time.Time `json:"date_published,omitempty"`
+ ModifiedDate *time.Time `json:"date_modified,omitempty"`
+ Author *JSONAuthor `json:"author,omitempty"`
+ Tags []string `json:"tags,omitempty"`
+ Attachments []JSONAttachment `json:"attachments,omitempty"`
+}
+
+// JSONHub describes an endpoint that can be used to subscribe to real-time
+// notifications from the publisher of this feed.
+type JSONHub struct {
+ Type string `json:"type"`
+ Url string `json:"url"`
+}
+
+// JSONFeed represents a syndication feed in the JSON Feed Version 1 format.
+// Matching the specification found here: https://jsonfeed.org/version/1.
+type JSONFeed struct {
+ Version string `json:"version"`
+ Title string `json:"title"`
+ HomePageUrl string `json:"home_page_url,omitempty"`
+ FeedUrl string `json:"feed_url,omitempty"`
+ Description string `json:"description,omitempty"`
+ UserComment string `json:"user_comment,omitempty"`
+ NextUrl string `json:"next_url,omitempty"`
+ Icon string `json:"icon,omitempty"`
+ Favicon string `json:"favicon,omitempty"`
+ Author *JSONAuthor `json:"author,omitempty"`
+ Expired *bool `json:"expired,omitempty"`
+	Hubs        []*JSONHub  `json:"hubs,omitempty"`
+ Items []*JSONItem `json:"items,omitempty"`
+}
+
+// JSON is used to convert a generic Feed to a JSONFeed.
+type JSON struct {
+ *Feed
+}
+
+// ToJSON encodes f into a JSON string. Returns an error if marshalling fails.
+func (f *JSON) ToJSON() (string, error) {
+ return f.JSONFeed().ToJSON()
+}
+
+// ToJSON encodes f into a JSON string. Returns an error if marshalling fails.
+func (f *JSONFeed) ToJSON() (string, error) {
+ data, err := json.MarshalIndent(f, "", " ")
+ if err != nil {
+ return "", err
+ }
+
+ return string(data), nil
+}
+
+// JSONFeed creates a new JSONFeed with a generic Feed struct's data.
+func (f *JSON) JSONFeed() *JSONFeed {
+ feed := &JSONFeed{
+ Version: jsonFeedVersion,
+ Title: f.Title,
+ Description: f.Description,
+ }
+
+ if f.Link != nil {
+ feed.HomePageUrl = f.Link.Href
+ }
+ if f.Author != nil {
+ feed.Author = &JSONAuthor{
+ Name: f.Author.Name,
+ }
+ }
+ for _, e := range f.Items {
+ feed.Items = append(feed.Items, newJSONItem(e))
+ }
+ return feed
+}
+
+func newJSONItem(i *Item) *JSONItem {
+ item := &JSONItem{
+ Id: i.Id,
+ Title: i.Title,
+ Summary: i.Description,
+
+ ContentHTML: i.Content,
+ }
+
+ if i.Link != nil {
+ item.Url = i.Link.Href
+ }
+ if i.Source != nil {
+ item.ExternalUrl = i.Source.Href
+ }
+ if i.Author != nil {
+ item.Author = &JSONAuthor{
+ Name: i.Author.Name,
+ }
+ }
+ if !i.Created.IsZero() {
+ item.PublishedDate = &i.Created
+ }
+ if !i.Updated.IsZero() {
+ item.ModifiedDate = &i.Updated
+ }
+ if i.Enclosure != nil && strings.HasPrefix(i.Enclosure.Type, "image/") {
+ item.Image = i.Enclosure.Url
+ }
+
+ return item
+}
diff --git a/vendor/github.com/gorilla/feeds/rss.go b/vendor/github.com/gorilla/feeds/rss.go
new file mode 100644
index 0000000..39fe84b
--- /dev/null
+++ b/vendor/github.com/gorilla/feeds/rss.go
@@ -0,0 +1,168 @@
+package feeds
+
+// rss support
+// validation done according to spec here:
+// http://cyber.law.harvard.edu/rss/rss.html
+
+import (
+ "encoding/xml"
+ "fmt"
+ "time"
+)
+
+// private wrapper around the RssFeed which gives us the <rss>..</rss> xml
+type rssFeedXml struct {
+ XMLName xml.Name `xml:"rss"`
+ Version string `xml:"version,attr"`
+ ContentNamespace string `xml:"xmlns:content,attr"`
+ Channel *RssFeed
+}
+
+type RssContent struct {
+ XMLName xml.Name `xml:"content:encoded"`
+ Content string `xml:",cdata"`
+}
+
+type RssImage struct {
+ XMLName xml.Name `xml:"image"`
+ Url string `xml:"url"`
+ Title string `xml:"title"`
+ Link string `xml:"link"`
+ Width int `xml:"width,omitempty"`
+ Height int `xml:"height,omitempty"`
+}
+
+type RssTextInput struct {
+ XMLName xml.Name `xml:"textInput"`
+ Title string `xml:"title"`
+ Description string `xml:"description"`
+ Name string `xml:"name"`
+ Link string `xml:"link"`
+}
+
+type RssFeed struct {
+ XMLName xml.Name `xml:"channel"`
+ Title string `xml:"title"` // required
+ Link string `xml:"link"` // required
+ Description string `xml:"description"` // required
+ Language string `xml:"language,omitempty"`
+ Copyright string `xml:"copyright,omitempty"`
+ ManagingEditor string `xml:"managingEditor,omitempty"` // Author used
+ WebMaster string `xml:"webMaster,omitempty"`
+ PubDate string `xml:"pubDate,omitempty"` // created or updated
+ LastBuildDate string `xml:"lastBuildDate,omitempty"` // updated used
+ Category string `xml:"category,omitempty"`
+ Generator string `xml:"generator,omitempty"`
+ Docs string `xml:"docs,omitempty"`
+ Cloud string `xml:"cloud,omitempty"`
+ Ttl int `xml:"ttl,omitempty"`
+ Rating string `xml:"rating,omitempty"`
+ SkipHours string `xml:"skipHours,omitempty"`
+ SkipDays string `xml:"skipDays,omitempty"`
+ Image *RssImage
+ TextInput *RssTextInput
+ Items []*RssItem
+}
+
+type RssItem struct {
+ XMLName xml.Name `xml:"item"`
+ Title string `xml:"title"` // required
+ Link string `xml:"link"` // required
+ Description string `xml:"description"` // required
+ Content *RssContent
+ Author string `xml:"author,omitempty"`
+ Category string `xml:"category,omitempty"`
+ Comments string `xml:"comments,omitempty"`
+ Enclosure *RssEnclosure
+ Guid string `xml:"guid,omitempty"` // Id used
+ PubDate string `xml:"pubDate,omitempty"` // created or updated
+ Source string `xml:"source,omitempty"`
+}
+
+type RssEnclosure struct {
+ //RSS 2.0
+ XMLName xml.Name `xml:"enclosure"`
+ Url string `xml:"url,attr"`
+ Length string `xml:"length,attr"`
+ Type string `xml:"type,attr"`
+}
+
+type Rss struct {
+ *Feed
+}
+
+// create a new RssItem with a generic Item struct's data
+func newRssItem(i *Item) *RssItem {
+ item := &RssItem{
+ Title: i.Title,
+ Link: i.Link.Href,
+ Description: i.Description,
+ Guid: i.Id,
+ PubDate: anyTimeFormat(time.RFC1123Z, i.Created, i.Updated),
+ }
+ if len(i.Content) > 0 {
+ item.Content = &RssContent{Content: i.Content}
+ }
+ if i.Source != nil {
+ item.Source = i.Source.Href
+ }
+
+ // Only include an enclosure when its type and length are set
+ if i.Enclosure != nil && i.Enclosure.Type != "" && i.Enclosure.Length != "" {
+ item.Enclosure = &RssEnclosure{Url: i.Enclosure.Url, Type: i.Enclosure.Type, Length: i.Enclosure.Length}
+ }
+
+ if i.Author != nil {
+ item.Author = i.Author.Name
+ }
+ return item
+}
+
+// create a new RssFeed with a generic Feed struct's data
+func (r *Rss) RssFeed() *RssFeed {
+ pub := anyTimeFormat(time.RFC1123Z, r.Created, r.Updated)
+ build := anyTimeFormat(time.RFC1123Z, r.Updated)
+ author := ""
+ if r.Author != nil {
+ author = r.Author.Email
+ if len(r.Author.Name) > 0 {
+ author = fmt.Sprintf("%s (%s)", r.Author.Email, r.Author.Name)
+ }
+ }
+
+ var image *RssImage
+ if r.Image != nil {
+ image = &RssImage{Url: r.Image.Url, Title: r.Image.Title, Link: r.Image.Link, Width: r.Image.Width, Height: r.Image.Height}
+ }
+
+ channel := &RssFeed{
+ Title: r.Title,
+ Link: r.Link.Href,
+ Description: r.Description,
+ ManagingEditor: author,
+ PubDate: pub,
+ LastBuildDate: build,
+ Copyright: r.Copyright,
+ Image: image,
+ }
+ for _, i := range r.Items {
+ channel.Items = append(channel.Items, newRssItem(i))
+ }
+ return channel
+}
+
+// return an XML-Ready object for an Rss object
+func (r *Rss) FeedXml() interface{} {
+ // only generate version 2.0 feeds for now
+ return r.RssFeed().FeedXml()
+
+}
+
+// return an XML-ready object for an RssFeed object
+func (r *RssFeed) FeedXml() interface{} {
+ return &rssFeedXml{
+ Version: "2.0",
+ Channel: r,
+ ContentNamespace: "http://purl.org/rss/1.0/modules/content/",
+ }
+}
diff --git a/vendor/github.com/gorilla/feeds/to-implement.md b/vendor/github.com/gorilla/feeds/to-implement.md
new file mode 100644
index 0000000..45fd1e7
--- /dev/null
+++ b/vendor/github.com/gorilla/feeds/to-implement.md
@@ -0,0 +1,20 @@
+[Full iTunes list](https://help.apple.com/itc/podcasts_connect/#/itcb54353390)
+
+[Example of ideal iTunes RSS feed](https://help.apple.com/itc/podcasts_connect/#/itcbaf351599)
+
+```
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+```
\ No newline at end of file
diff --git a/vendor/github.com/gorilla/feeds/uuid.go b/vendor/github.com/gorilla/feeds/uuid.go
new file mode 100644
index 0000000..51bbafe
--- /dev/null
+++ b/vendor/github.com/gorilla/feeds/uuid.go
@@ -0,0 +1,27 @@
+package feeds
+
+// relevant bits from https://github.com/abneptis/GoUUID/blob/master/uuid.go
+
+import (
+ "crypto/rand"
+ "fmt"
+)
+
+type UUID [16]byte
+
+// create a new uuid v4
+func NewUUID() *UUID {
+ u := &UUID{}
+ _, err := rand.Read(u[:16])
+ if err != nil {
+ panic(err)
+ }
+
+ u[8] = (u[8] | 0x80) & 0xBF // variant bits: 10xxxxxx (RFC 4122)
+ u[6] = (u[6] | 0x40) & 0x4F // version bits: 0100xxxx (version 4)
+ return u
+}
+
+func (u *UUID) String() string {
+ return fmt.Sprintf("%x-%x-%x-%x-%x", u[:4], u[4:6], u[6:8], u[8:10], u[10:])
+}
diff --git a/vendor/github.com/mmcdole/gofeed/LICENSE b/vendor/github.com/mmcdole/gofeed/LICENSE
new file mode 100644
index 0000000..054bf56
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2016 mmcdole
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/vendor/github.com/mmcdole/gofeed/README.md b/vendor/github.com/mmcdole/gofeed/README.md
new file mode 100644
index 0000000..7fb1ce9
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/README.md
@@ -0,0 +1,248 @@
+# gofeed
+
+[Build Status](https://travis-ci.org/mmcdole/gofeed) [Coverage Status](https://coveralls.io/github/mmcdole/gofeed?branch=master) [Go Report Card](https://goreportcard.com/report/github.com/mmcdole/gofeed) [GoDoc](http://godoc.org/github.com/mmcdole/gofeed) [License](http://doge.mit-license.org)
+
+The `gofeed` library is a robust feed parser that supports parsing both [RSS](https://en.wikipedia.org/wiki/RSS) and [Atom](https://en.wikipedia.org/wiki/Atom_(standard)) feeds. The universal `gofeed.Parser` will parse and convert all feed types into a hybrid `gofeed.Feed` model. You also have the option of parsing them into their respective `atom.Feed` and `rss.Feed` models using the feed specific `atom.Parser` or `rss.Parser`.
+
+##### Supported feed types:
+* RSS 0.90
+* Netscape RSS 0.91
+* Userland RSS 0.91
+* RSS 0.92
+* RSS 0.93
+* RSS 0.94
+* RSS 1.0
+* RSS 2.0
+* Atom 0.3
+* Atom 1.0
+
+It also provides support for parsing several popular predefined extension modules, including [Dublin Core](http://dublincore.org/documents/dces/) and [Apple’s iTunes](https://help.apple.com/itc/podcasts_connect/#/itcb54353390), as well as arbitrary extensions. See the [Extensions](#extensions) section for more details.
+
+## Table of Contents
+- [Overview](#overview)
+- [Basic Usage](#basic-usage)
+- [Advanced Usage](#advanced-usage)
+- [Extensions](#extensions)
+- [Invalid Feeds](#invalid-feeds)
+- [Default Mappings](#default-mappings)
+- [Dependencies](#dependencies)
+- [License](#license)
+- [Credits](#credits)
+
+## Overview
+
+#### Universal Feed Parser
+
+The universal `gofeed.Parser` works in 3 stages: detection, parsing, and translation. It first detects the type of the feed it is parsing. Then it uses a feed-specific parser to parse the feed into its true representation, which will be either an `rss.Feed` or an `atom.Feed`. These models cover every field possible for their respective feed types. Finally, they are *translated* into a `gofeed.Feed` model that is a hybrid of both feed types. Performing the universal feed parsing in these 3 stages allows for more flexibility and keeps the code base more maintainable by separating RSS and Atom parsing into separate packages.
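As a rough illustration of the detection stage, root-element sniffing decides which feed-specific parser to dispatch to. (gofeed's real detector uses an XML pull parser; `detectFeedType` below is an illustrative sketch, not the library's API.)

```go
package main

import (
	"fmt"
	"strings"
)

// detectFeedType inspects the document's root element to pick a parser:
// rdf:RDF and rss roots go to the RSS parser, feed roots to the Atom parser.
func detectFeedType(feed string) string {
	trimmed := strings.TrimSpace(feed)
	switch {
	case strings.Contains(trimmed, "<rdf:RDF"):
		return "rss" // RSS 1.0 uses an rdf:RDF root element
	case strings.Contains(trimmed, "<rss"):
		return "rss"
	case strings.Contains(trimmed, "<feed"):
		return "atom"
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(detectFeedType(`<rss version="2.0"></rss>`))                    // rss
	fmt.Println(detectFeedType(`<feed xmlns="http://www.w3.org/2005/Atom"></feed>`)) // atom
}
```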
+
+
+
+The translation step is done by anything that adheres to the `gofeed.Translator` interface. The `DefaultRSSTranslator` and `DefaultAtomTranslator` are used behind the scenes when you use the `gofeed.Parser` with its default settings. You can see how they translate fields from `atom.Feed` or `rss.Feed` to the universal `gofeed.Feed` struct in the [Default Mappings](#default-mappings) section. However, should you disagree with the way certain fields are translated, you can easily supply your own `gofeed.Translator` and override this behavior. See the [Advanced Usage](#advanced-usage) section for an example of how to do this.
+
+#### Feed Specific Parsers
+
+The `gofeed` library provides two feed-specific parsers: `atom.Parser` and `rss.Parser`. If the hybrid `gofeed.Feed` model that the universal `gofeed.Parser` produces does not contain a field from the `atom.Feed` or `rss.Feed` model that you require, it might be beneficial to use the feed-specific parsers. When using the `atom.Parser` or `rss.Parser` directly, you can access all of the fields found in the `atom.Feed` and `rss.Feed` models. It is also marginally faster because you are able to skip the translation step.
+
+However, for the *vast* majority of users, the universal `gofeed.Parser` is the best way to parse feeds. It lets users of the `gofeed` library ignore the differences between RSS and Atom feeds.
+
+## Basic Usage
+
+#### Universal Feed Parser
+
+The most common usage scenario will be to use `gofeed.Parser` to parse an arbitrary RSS or Atom feed into the hybrid `gofeed.Feed` model. This hybrid model allows you to treat RSS and Atom feeds the same.
+
+##### Parse a feed from a URL:
+
+```go
+fp := gofeed.NewParser()
+feed, _ := fp.ParseURL("http://feeds.twit.tv/twit.xml")
+fmt.Println(feed.Title)
+```
+
+##### Parse a feed from a string:
+
+```go
+feedData := `<rss version="2.0">
+<channel>
+<title>Sample Feed</title>
+</channel>
+</rss>`
+fp := gofeed.NewParser()
+feed, _ := fp.ParseString(feedData)
+fmt.Println(feed.Title)
+```
+
+##### Parse a feed from an io.Reader:
+
+```go
+file, _ := os.Open("/path/to/a/file.xml")
+defer file.Close()
+fp := gofeed.NewParser()
+feed, _ := fp.Parse(file)
+fmt.Println(feed.Title)
+```
+
+#### Feed Specific Parsers
+
+You can easily use the `rss.Parser` and `atom.Parser` directly if you have a usage scenario that requires it:
+
+##### Parse an RSS feed into an `rss.Feed`
+
+```go
+feedData := `<rss version="2.0">
+<channel>
+<webMaster>example@site.com (Example Name)</webMaster>
+</channel>
+</rss>`
+fp := rss.Parser{}
+rssFeed, _ := fp.Parse(strings.NewReader(feedData))
+fmt.Println(rssFeed.WebMaster)
+```
+
+##### Parse an Atom feed into an `atom.Feed`
+
+```go
+feedData := `<feed xmlns="http://www.w3.org/2005/Atom">
+<subtitle>Example Atom</subtitle>
+</feed>`
+fp := atom.Parser{}
+atomFeed, _ := fp.Parse(strings.NewReader(feedData))
+fmt.Println(atomFeed.Subtitle)
+```
+
+## Advanced Usage
+
+##### Parse a feed while using a custom translator
+
+The mappings and precedence order that are outlined in the [Default Mappings](#default-mappings) section are provided by the following two structs: `DefaultRSSTranslator` and `DefaultAtomTranslator`. If you have fields that you think should have a different precedence, or if you want to make a translator that is aware of an unsupported extension, you can do this by specifying your own RSS or Atom translator when using the `gofeed.Parser`.
+
+Here is a simple example of creating a custom `Translator` that makes the `/rss/channel/itunes:author` field have a higher precedence than the `/rss/channel/managingEditor` field in RSS feeds. We will wrap the existing `DefaultRSSTranslator` since we only want to change the behavior for a single field.
+
+First we must define a custom translator:
+
+```go
+
+import (
+ "fmt"
+
+ "github.com/mmcdole/gofeed"
+ "github.com/mmcdole/gofeed/rss"
+)
+
+type MyCustomTranslator struct {
+ defaultTranslator *gofeed.DefaultRSSTranslator
+}
+
+func NewMyCustomTranslator() *MyCustomTranslator {
+ t := &MyCustomTranslator{}
+
+ // We create a DefaultRSSTranslator internally so we can wrap its Translate
+ // call since we only want to modify the precedence for a single field.
+ t.defaultTranslator = &gofeed.DefaultRSSTranslator{}
+ return t
+}
+
+func (ct *MyCustomTranslator) Translate(feed interface{}) (*gofeed.Feed, error) {
+ rss, found := feed.(*rss.Feed)
+ if !found {
+ return nil, fmt.Errorf("Feed did not match expected type of *rss.Feed")
+ }
+
+ f, err := ct.defaultTranslator.Translate(rss)
+ if err != nil {
+ return nil, err
+ }
+
+ if rss.ITunesExt != nil && rss.ITunesExt.Author != "" {
+ f.Author = rss.ITunesExt.Author
+ } else {
+ f.Author = rss.ManagingEditor
+ }
+ return f, nil
+}
+```
+
+Next you must configure your `gofeed.Parser` to utilize the new `gofeed.Translator`:
+
+```go
+feedData := `<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
+<channel>
+<managingEditor>Ender Wiggin</managingEditor>
+<itunes:author>Valentine Wiggin</itunes:author>
+</channel>
+</rss>`
+
+fp := gofeed.NewParser()
+fp.RSSTranslator = NewMyCustomTranslator()
+feed, _ := fp.ParseString(feedData)
+fmt.Println(feed.Author) // Valentine Wiggin
+```
+
+## Extensions
+
+Every element which does not belong to the feed's default namespace is considered an extension by `gofeed`. These are parsed and stored in a tree-like structure located at `Feed.Extensions` and `Item.Extensions`. These fields should allow you to access and read any custom extension elements.
+
+In addition to the generic handling of extensions, `gofeed` also has built-in support for parsing certain popular extensions into their own structs for convenience. It currently supports the [Dublin Core](http://dublincore.org/documents/dces/) and [Apple iTunes](https://help.apple.com/itc/podcasts_connect/#/itcb54353390) extensions, which you can access at `Feed.ITunesExt`, `Feed.DublinCoreExt`, `Item.ITunesExt`, and `Item.DublinCoreExt`.
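The extension tree maps namespace prefix → element name → occurrences of that element. A self-contained sketch mirroring that shape (the `Extensions` and `Extension` types below are simplified stand-ins modeled on gofeed's `ext` package, and the duration value is illustrative):

```go
package main

import "fmt"

// Extension is one parsed extension element: its name, text value,
// attributes, and any nested child elements keyed by name.
type Extension struct {
	Name     string
	Value    string
	Attrs    map[string]string
	Children map[string][]Extension
}

// Extensions nests prefix -> element name -> repeated occurrences.
type Extensions map[string]map[string][]Extension

func main() {
	// Roughly what an item's extensions might hold after parsing
	// <itunes:duration>05:00</itunes:duration>.
	exts := Extensions{
		"itunes": {
			"duration": {{Name: "duration", Value: "05:00"}},
		},
	}
	if durations, ok := exts["itunes"]["duration"]; ok {
		fmt.Println(durations[0].Value) // 05:00
	}
}
```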
+
+## Invalid Feeds
+
+A best-effort attempt is made at parsing broken and invalid XML feeds. Currently, `gofeed` can successfully parse feeds with the following issues:
+- Unescaped/Naked Markup in feed elements
+- Undeclared namespace prefixes
+- Missing closing tags on certain elements
+- Illegal tags within feed elements without namespace prefixes
+- Missing "required" elements as specified by the respective feed specs
+- Incorrect date formats
+
+## Default Mappings
+
+The `DefaultRSSTranslator` and the `DefaultAtomTranslator` map the following `rss.Feed` and `atom.Feed` fields to their respective `gofeed.Feed` fields. They are listed in order of precedence (highest to lowest):
+
+
+`gofeed.Feed` | RSS | Atom
+--- | --- | ---
+Title | /rss/channel/title /rdf:RDF/channel/title /rss/channel/dc:title /rdf:RDF/channel/dc:title | /feed/title
+Description | /rss/channel/description /rdf:RDF/channel/description /rss/channel/itunes:subtitle | /feed/subtitle /feed/tagline
+Link | /rss/channel/link /rdf:RDF/channel/link | /feed/link[@rel="alternate"]/@href /feed/link[not(@rel)]/@href
+FeedLink | /rss/channel/atom:link[@rel="self"]/@href /rdf:RDF/channel/atom:link[@rel="self"]/@href | /feed/link[@rel="self"]/@href
+Updated | /rss/channel/lastBuildDate /rss/channel/dc:date /rdf:RDF/channel/dc:date | /feed/updated /feed/modified
+Published | /rss/channel/pubDate |
+Author | /rss/channel/managingEditor /rss/channel/webMaster /rss/channel/dc:author /rdf:RDF/channel/dc:author /rss/channel/dc:creator /rdf:RDF/channel/dc:creator /rss/channel/itunes:author | /feed/author
+Language | /rss/channel/language /rss/channel/dc:language /rdf:RDF/channel/dc:language | /feed/@xml:lang
+Image | /rss/channel/image /rdf:RDF/image /rss/channel/itunes:image | /feed/logo
+Copyright | /rss/channel/copyright /rss/channel/dc:rights /rdf:RDF/channel/dc:rights | /feed/rights /feed/copyright
+Generator | /rss/channel/generator | /feed/generator
+Categories | /rss/channel/category /rss/channel/itunes:category /rss/channel/itunes:keywords /rss/channel/dc:subject /rdf:RDF/channel/dc:subject | /feed/category
+
+
+`gofeed.Item` | RSS | Atom
+--- | --- | ---
+Title | /rss/channel/item/title /rdf:RDF/item/title /rdf:RDF/item/dc:title /rss/channel/item/dc:title | /feed/entry/title
+Description | /rss/channel/item/description /rdf:RDF/item/description /rss/channel/item/dc:description /rdf:RDF/item/dc:description | /feed/entry/summary
+Content | /rss/channel/item/content:encoded | /feed/entry/content
+Link | /rss/channel/item/link /rdf:RDF/item/link | /feed/entry/link[@rel="alternate"]/@href /feed/entry/link[not(@rel)]/@href
+Updated | /rss/channel/item/dc:date /rdf:RDF/rdf:item/dc:date | /feed/entry/modified /feed/entry/updated
+Published | /rss/channel/item/pubDate /rss/channel/item/dc:date | /feed/entry/published /feed/entry/issued
+Author | /rss/channel/item/author /rss/channel/item/dc:author /rdf:RDF/item/dc:author /rss/channel/item/dc:creator /rdf:RDF/item/dc:creator /rss/channel/item/itunes:author | /feed/entry/author
+GUID | /rss/channel/item/guid | /feed/entry/id
+Image | /rss/channel/item/itunes:image /rss/channel/item/media:image |
+Categories | /rss/channel/item/category /rss/channel/item/dc:subject /rss/channel/item/itunes:keywords /rdf:RDF/channel/item/dc:subject | /feed/entry/category
+Enclosures | /rss/channel/item/enclosure | /feed/entry/link[@rel="enclosure"]
+
+## Dependencies
+
+* [goxpp](https://github.com/mmcdole/goxpp) - XML Pull Parser
+* [goquery](https://github.com/PuerkitoBio/goquery) - Go jQuery-like interface
+* [testify](https://github.com/stretchr/testify) - Unit test enhancements
+
+## License
+
+This project is licensed under the [MIT License](https://raw.githubusercontent.com/mmcdole/gofeed/master/LICENSE).
+
+## Credits
+
+* [cristoper](https://github.com/cristoper) for his work on implementing xml:base relative URI handling.
+* [Mark Pilgrim](https://en.wikipedia.org/wiki/Mark_Pilgrim) and [Kurt McKee](http://kurtmckee.org) for their work on the excellent [Universal Feed Parser](https://github.com/kurtmckee/feedparser) Python library. This library was the inspiration for the `gofeed` library.
+* [Dan MacTough](http://blog.mact.me) for his work on [node-feedparser](https://github.com/danmactough/node-feedparser). It provided inspiration for the set of fields that should be covered in the hybrid `gofeed.Feed` model.
+* [Matt Jibson](https://mattjibson.com/) for his date parsing function in the [goread](https://github.com/mjibson/goread) project.
+* [Jim Teeuwen](https://github.com/jteeuwen) for his method of representing arbitrary feed extensions in the [go-pkg-rss](https://github.com/jteeuwen/go-pkg-rss) library.
diff --git a/vendor/github.com/mmcdole/gofeed/atom/feed.go b/vendor/github.com/mmcdole/gofeed/atom/feed.go
new file mode 100644
index 0000000..34f6afc
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/atom/feed.go
@@ -0,0 +1,114 @@
+package atom
+
+import (
+ "encoding/json"
+ "time"
+
+ "github.com/mmcdole/gofeed/extensions"
+)
+
+// Feed is an Atom Feed
+type Feed struct {
+ Title string `json:"title,omitempty"`
+ ID string `json:"id,omitempty"`
+ Updated string `json:"updated,omitempty"`
+ UpdatedParsed *time.Time `json:"updatedParsed,omitempty"`
+ Subtitle string `json:"subtitle,omitempty"`
+ Links []*Link `json:"links,omitempty"`
+ Language string `json:"language,omitempty"`
+ Generator *Generator `json:"generator,omitempty"`
+ Icon string `json:"icon,omitempty"`
+ Logo string `json:"logo,omitempty"`
+ Rights string `json:"rights,omitempty"`
+ Contributors []*Person `json:"contributors,omitempty"`
+ Authors []*Person `json:"authors,omitempty"`
+ Categories []*Category `json:"categories,omitempty"`
+ Entries []*Entry `json:"entries"`
+ Extensions ext.Extensions `json:"extensions,omitempty"`
+ Version string `json:"version"`
+}
+
+func (f Feed) String() string {
+ json, _ := json.MarshalIndent(f, "", " ")
+ return string(json)
+}
+
+// Entry is an Atom Entry
+type Entry struct {
+ Title string `json:"title,omitempty"`
+ ID string `json:"id,omitempty"`
+ Updated string `json:"updated,omitempty"`
+ UpdatedParsed *time.Time `json:"updatedParsed,omitempty"`
+ Summary string `json:"summary,omitempty"`
+ Authors []*Person `json:"authors,omitempty"`
+ Contributors []*Person `json:"contributors,omitempty"`
+ Categories []*Category `json:"categories,omitempty"`
+ Links []*Link `json:"links,omitempty"`
+ Rights string `json:"rights,omitempty"`
+ Published string `json:"published,omitempty"`
+ PublishedParsed *time.Time `json:"publishedParsed,omitempty"`
+ Source *Source `json:"source,omitempty"`
+ Content *Content `json:"content,omitempty"`
+ Extensions ext.Extensions `json:"extensions,omitempty"`
+}
+
+// Category is category metadata for Feeds and Entries
+type Category struct {
+ Term string `json:"term,omitempty"`
+ Scheme string `json:"scheme,omitempty"`
+ Label string `json:"label,omitempty"`
+}
+
+// Person represents a person in an Atom feed
+// for things like Authors, Contributors, etc
+type Person struct {
+ Name string `json:"name,omitempty"`
+ Email string `json:"email,omitempty"`
+ URI string `json:"uri,omitempty"`
+}
+
+// Link is an Atom link that defines a reference
+// from an entry or feed to a Web resource
+type Link struct {
+ Href string `json:"href,omitempty"`
+ Hreflang string `json:"hreflang,omitempty"`
+ Rel string `json:"rel,omitempty"`
+ Type string `json:"type,omitempty"`
+ Title string `json:"title,omitempty"`
+ Length string `json:"length,omitempty"`
+}
+
+// Content either contains or links to the content of
+// the entry
+type Content struct {
+ Src string `json:"src,omitempty"`
+ Type string `json:"type,omitempty"`
+ Value string `json:"value,omitempty"`
+}
+
+// Generator identifies the agent used to generate a
+// feed, for debugging and other purposes.
+type Generator struct {
+ Value string `json:"value,omitempty"`
+ URI string `json:"uri,omitempty"`
+ Version string `json:"version,omitempty"`
+}
+
+// Source contains the feed information for another
+// feed if a given entry came from that feed.
+type Source struct {
+ Title string `json:"title,omitempty"`
+ ID string `json:"id,omitempty"`
+ Updated string `json:"updated,omitempty"`
+ UpdatedParsed *time.Time `json:"updatedParsed,omitempty"`
+ Subtitle string `json:"subtitle,omitempty"`
+ Links []*Link `json:"links,omitempty"`
+ Generator *Generator `json:"generator,omitempty"`
+ Icon string `json:"icon,omitempty"`
+ Logo string `json:"logo,omitempty"`
+ Rights string `json:"rights,omitempty"`
+ Contributors []*Person `json:"contributors,omitempty"`
+ Authors []*Person `json:"authors,omitempty"`
+ Categories []*Category `json:"categories,omitempty"`
+ Extensions ext.Extensions `json:"extensions,omitempty"`
+}
diff --git a/vendor/github.com/mmcdole/gofeed/atom/parser.go b/vendor/github.com/mmcdole/gofeed/atom/parser.go
new file mode 100644
index 0000000..2309cb8
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/atom/parser.go
@@ -0,0 +1,763 @@
+package atom
+
+import (
+ "encoding/base64"
+ "io"
+ "strings"
+
+ "github.com/PuerkitoBio/goquery"
+ "github.com/mmcdole/gofeed/extensions"
+ "github.com/mmcdole/gofeed/internal/shared"
+ "github.com/mmcdole/goxpp"
+)
+
+var (
+ // Atom elements which contain URIs
+ // https://tools.ietf.org/html/rfc4287
+ uriElements = map[string]bool{
+ "icon": true,
+ "id": true,
+ "logo": true,
+ "uri": true,
+ "url": true, // atom 0.3
+ }
+
+ // Atom attributes which contain URIs
+ // https://tools.ietf.org/html/rfc4287
+ atomURIAttrs = map[string]bool{
+ "href": true,
+ "scheme": true,
+ "src": true,
+ "uri": true,
+ }
+)
+
+// Parser is an Atom Parser
+type Parser struct {
+ base *shared.XMLBase
+}
+
+// Parse parses an xml feed into an atom.Feed
+func (ap *Parser) Parse(feed io.Reader) (*Feed, error) {
+ p := xpp.NewXMLPullParser(feed, false, shared.NewReaderLabel)
+ ap.base = &shared.XMLBase{URIAttrs: atomURIAttrs}
+
+ _, err := ap.base.FindRoot(p)
+ if err != nil {
+ return nil, err
+ }
+
+ return ap.parseRoot(p)
+}
+
+func (ap *Parser) parseRoot(p *xpp.XMLPullParser) (*Feed, error) {
+ if err := p.Expect(xpp.StartTag, "feed"); err != nil {
+ return nil, err
+ }
+
+ atom := &Feed{}
+ atom.Entries = []*Entry{}
+ atom.Version = ap.parseVersion(p)
+ atom.Language = ap.parseLanguage(p)
+
+ contributors := []*Person{}
+ authors := []*Person{}
+ categories := []*Category{}
+ links := []*Link{}
+ extensions := ext.Extensions{}
+
+ for {
+ tok, err := ap.base.NextTag(p)
+ if err != nil {
+ return nil, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+
+ name := strings.ToLower(p.Name)
+
+ if shared.IsExtension(p) {
+ e, err := shared.ParseExtension(extensions, p)
+ if err != nil {
+ return nil, err
+ }
+ extensions = e
+ } else if name == "title" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ atom.Title = result
+ } else if name == "id" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ atom.ID = result
+ } else if name == "updated" ||
+ name == "modified" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ atom.Updated = result
+ date, err := shared.ParseDate(result)
+ if err == nil {
+ utcDate := date.UTC()
+ atom.UpdatedParsed = &utcDate
+ }
+ } else if name == "subtitle" ||
+ name == "tagline" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ atom.Subtitle = result
+ } else if name == "link" {
+ result, err := ap.parseLink(p)
+ if err != nil {
+ return nil, err
+ }
+ links = append(links, result)
+ } else if name == "generator" {
+ result, err := ap.parseGenerator(p)
+ if err != nil {
+ return nil, err
+ }
+ atom.Generator = result
+ } else if name == "icon" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ atom.Icon = result
+ } else if name == "logo" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ atom.Logo = result
+ } else if name == "rights" ||
+ name == "copyright" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ atom.Rights = result
+ } else if name == "contributor" {
+ result, err := ap.parsePerson("contributor", p)
+ if err != nil {
+ return nil, err
+ }
+ contributors = append(contributors, result)
+ } else if name == "author" {
+ result, err := ap.parsePerson("author", p)
+ if err != nil {
+ return nil, err
+ }
+ authors = append(authors, result)
+ } else if name == "category" {
+ result, err := ap.parseCategory(p)
+ if err != nil {
+ return nil, err
+ }
+ categories = append(categories, result)
+ } else if name == "entry" {
+ result, err := ap.parseEntry(p)
+ if err != nil {
+ return nil, err
+ }
+ atom.Entries = append(atom.Entries, result)
+ } else {
+ err := p.Skip()
+ if err != nil {
+ return nil, err
+ }
+ }
+ }
+ }
+
+ if len(categories) > 0 {
+ atom.Categories = categories
+ }
+
+ if len(authors) > 0 {
+ atom.Authors = authors
+ }
+
+ if len(contributors) > 0 {
+ atom.Contributors = contributors
+ }
+
+ if len(links) > 0 {
+ atom.Links = links
+ }
+
+ if len(extensions) > 0 {
+ atom.Extensions = extensions
+ }
+
+ if err := p.Expect(xpp.EndTag, "feed"); err != nil {
+ return nil, err
+ }
+
+ return atom, nil
+}
+
+func (ap *Parser) parseEntry(p *xpp.XMLPullParser) (*Entry, error) {
+ if err := p.Expect(xpp.StartTag, "entry"); err != nil {
+ return nil, err
+ }
+ entry := &Entry{}
+
+ contributors := []*Person{}
+ authors := []*Person{}
+ categories := []*Category{}
+ links := []*Link{}
+ extensions := ext.Extensions{}
+
+ for {
+ tok, err := ap.base.NextTag(p)
+ if err != nil {
+ return nil, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+
+ name := strings.ToLower(p.Name)
+
+ if shared.IsExtension(p) {
+ e, err := shared.ParseExtension(extensions, p)
+ if err != nil {
+ return nil, err
+ }
+ extensions = e
+ } else if name == "title" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ entry.Title = result
+ } else if name == "id" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ entry.ID = result
+ } else if name == "rights" ||
+ name == "copyright" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ entry.Rights = result
+ } else if name == "summary" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ entry.Summary = result
+ } else if name == "source" {
+ result, err := ap.parseSource(p)
+ if err != nil {
+ return nil, err
+ }
+ entry.Source = result
+ } else if name == "updated" ||
+ name == "modified" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ entry.Updated = result
+ date, err := shared.ParseDate(result)
+ if err == nil {
+ utcDate := date.UTC()
+ entry.UpdatedParsed = &utcDate
+ }
+ } else if name == "contributor" {
+ result, err := ap.parsePerson("contributor", p)
+ if err != nil {
+ return nil, err
+ }
+ contributors = append(contributors, result)
+ } else if name == "author" {
+ result, err := ap.parsePerson("author", p)
+ if err != nil {
+ return nil, err
+ }
+ authors = append(authors, result)
+ } else if name == "category" {
+ result, err := ap.parseCategory(p)
+ if err != nil {
+ return nil, err
+ }
+ categories = append(categories, result)
+ } else if name == "link" {
+ result, err := ap.parseLink(p)
+ if err != nil {
+ return nil, err
+ }
+ links = append(links, result)
+ } else if name == "published" ||
+ name == "issued" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ entry.Published = result
+ date, err := shared.ParseDate(result)
+ if err == nil {
+ utcDate := date.UTC()
+ entry.PublishedParsed = &utcDate
+ }
+ } else if name == "content" {
+ result, err := ap.parseContent(p)
+ if err != nil {
+ return nil, err
+ }
+ entry.Content = result
+ } else {
+ err := p.Skip()
+ if err != nil {
+ return nil, err
+ }
+ }
+ }
+ }
+
+ if len(categories) > 0 {
+ entry.Categories = categories
+ }
+
+ if len(authors) > 0 {
+ entry.Authors = authors
+ }
+
+ if len(links) > 0 {
+ entry.Links = links
+ }
+
+ if len(contributors) > 0 {
+ entry.Contributors = contributors
+ }
+
+ if len(extensions) > 0 {
+ entry.Extensions = extensions
+ }
+
+ if err := p.Expect(xpp.EndTag, "entry"); err != nil {
+ return nil, err
+ }
+
+ return entry, nil
+}
+
+func (ap *Parser) parseSource(p *xpp.XMLPullParser) (*Source, error) {
+
+ if err := p.Expect(xpp.StartTag, "source"); err != nil {
+ return nil, err
+ }
+
+ source := &Source{}
+
+ contributors := []*Person{}
+ authors := []*Person{}
+ categories := []*Category{}
+ links := []*Link{}
+ extensions := ext.Extensions{}
+
+ for {
+ tok, err := ap.base.NextTag(p)
+ if err != nil {
+ return nil, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+
+ name := strings.ToLower(p.Name)
+
+ if shared.IsExtension(p) {
+ e, err := shared.ParseExtension(extensions, p)
+ if err != nil {
+ return nil, err
+ }
+ extensions = e
+ } else if name == "title" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ source.Title = result
+ } else if name == "id" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ source.ID = result
+ } else if name == "updated" ||
+ name == "modified" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ source.Updated = result
+ date, err := shared.ParseDate(result)
+ if err == nil {
+ utcDate := date.UTC()
+ source.UpdatedParsed = &utcDate
+ }
+ } else if name == "subtitle" ||
+ name == "tagline" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ source.Subtitle = result
+ } else if name == "link" {
+ result, err := ap.parseLink(p)
+ if err != nil {
+ return nil, err
+ }
+ links = append(links, result)
+ } else if name == "generator" {
+ result, err := ap.parseGenerator(p)
+ if err != nil {
+ return nil, err
+ }
+ source.Generator = result
+ } else if name == "icon" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ source.Icon = result
+ } else if name == "logo" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ source.Logo = result
+ } else if name == "rights" ||
+ name == "copyright" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ source.Rights = result
+ } else if name == "contributor" {
+ result, err := ap.parsePerson("contributor", p)
+ if err != nil {
+ return nil, err
+ }
+ contributors = append(contributors, result)
+ } else if name == "author" {
+ result, err := ap.parsePerson("author", p)
+ if err != nil {
+ return nil, err
+ }
+ authors = append(authors, result)
+ } else if name == "category" {
+ result, err := ap.parseCategory(p)
+ if err != nil {
+ return nil, err
+ }
+ categories = append(categories, result)
+ } else {
+ err := p.Skip()
+ if err != nil {
+ return nil, err
+ }
+ }
+ }
+ }
+
+ if len(categories) > 0 {
+ source.Categories = categories
+ }
+
+ if len(authors) > 0 {
+ source.Authors = authors
+ }
+
+ if len(contributors) > 0 {
+ source.Contributors = contributors
+ }
+
+ if len(links) > 0 {
+ source.Links = links
+ }
+
+ if len(extensions) > 0 {
+ source.Extensions = extensions
+ }
+
+ if err := p.Expect(xpp.EndTag, "source"); err != nil {
+ return nil, err
+ }
+
+ return source, nil
+}
+
+func (ap *Parser) parseContent(p *xpp.XMLPullParser) (*Content, error) {
+ c := &Content{}
+ c.Type = p.Attribute("type")
+ c.Src = p.Attribute("src")
+
+ text, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ c.Value = text
+
+ return c, nil
+}
+
+func (ap *Parser) parsePerson(name string, p *xpp.XMLPullParser) (*Person, error) {
+
+ if err := p.Expect(xpp.StartTag, name); err != nil {
+ return nil, err
+ }
+
+ person := &Person{}
+
+ for {
+ tok, err := ap.base.NextTag(p)
+ if err != nil {
+ return nil, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+
+ name := strings.ToLower(p.Name)
+
+ if name == "name" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ person.Name = result
+ } else if name == "email" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ person.Email = result
+ } else if name == "uri" ||
+ name == "url" ||
+ name == "homepage" {
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+ person.URI = result
+ } else {
+ err := p.Skip()
+ if err != nil {
+ return nil, err
+ }
+ }
+ }
+ }
+
+ if err := p.Expect(xpp.EndTag, name); err != nil {
+ return nil, err
+ }
+
+ return person, nil
+}
+
+func (ap *Parser) parseLink(p *xpp.XMLPullParser) (*Link, error) {
+ if err := p.Expect(xpp.StartTag, "link"); err != nil {
+ return nil, err
+ }
+
+ l := &Link{}
+ l.Href = p.Attribute("href")
+ l.Hreflang = p.Attribute("hreflang")
+ l.Type = p.Attribute("type")
+ l.Length = p.Attribute("length")
+ l.Title = p.Attribute("title")
+ l.Rel = p.Attribute("rel")
+ if l.Rel == "" {
+ l.Rel = "alternate"
+ }
+
+ if err := p.Skip(); err != nil {
+ return nil, err
+ }
+
+ if err := p.Expect(xpp.EndTag, "link"); err != nil {
+ return nil, err
+ }
+ return l, nil
+}
+
+func (ap *Parser) parseCategory(p *xpp.XMLPullParser) (*Category, error) {
+ if err := p.Expect(xpp.StartTag, "category"); err != nil {
+ return nil, err
+ }
+
+ c := &Category{}
+ c.Term = p.Attribute("term")
+ c.Scheme = p.Attribute("scheme")
+ c.Label = p.Attribute("label")
+
+ if err := p.Skip(); err != nil {
+ return nil, err
+ }
+
+ if err := p.Expect(xpp.EndTag, "category"); err != nil {
+ return nil, err
+ }
+ return c, nil
+}
+
+func (ap *Parser) parseGenerator(p *xpp.XMLPullParser) (*Generator, error) {
+
+ if err := p.Expect(xpp.StartTag, "generator"); err != nil {
+ return nil, err
+ }
+
+ g := &Generator{}
+
+ uri := p.Attribute("uri") // Atom 1.0
+ url := p.Attribute("url") // Atom 0.3
+
+ if uri != "" {
+ g.URI = uri
+ } else if url != "" {
+ g.URI = url
+ }
+
+ g.Version = p.Attribute("version")
+
+ result, err := ap.parseAtomText(p)
+ if err != nil {
+ return nil, err
+ }
+
+ g.Value = result
+
+ if err := p.Expect(xpp.EndTag, "generator"); err != nil {
+ return nil, err
+ }
+
+ return g, nil
+}
+
+func (ap *Parser) parseAtomText(p *xpp.XMLPullParser) (string, error) {
+
+ var text struct {
+ Type string `xml:"type,attr"`
+ Mode string `xml:"mode,attr"`
+ InnerXML string `xml:",innerxml"`
+ }
+
+ err := p.DecodeElement(&text)
+ if err != nil {
+ return "", err
+ }
+
+ result := text.InnerXML
+ result = strings.TrimSpace(result)
+
+ lowerType := strings.ToLower(text.Type)
+ lowerMode := strings.ToLower(text.Mode)
+
+ if strings.HasPrefix(result, "<![CDATA[") &&
+ strings.HasSuffix(result, "]]>") {
+ result = strings.TrimPrefix(result, "<![CDATA[")
+ result = strings.TrimSuffix(result, "]]>")
+ if lowerType == "html" || strings.Contains(lowerType, "xhtml") {
+ result, _ = ap.base.ResolveHTML(result)
+ }
+ } else {
+ // decode non-CDATA contents depending on type
+
+ if lowerType == "text" ||
+ strings.HasPrefix(lowerType, "text/") ||
+ (lowerType == "" && lowerMode == "") {
+ result, err = shared.DecodeEntities(result)
+ } else if strings.Contains(lowerType, "xhtml") {
+ result = ap.stripWrappingDiv(result)
+ result, _ = ap.base.ResolveHTML(result)
+ } else if lowerType == "html" {
+ result = ap.stripWrappingDiv(result)
+ result, err = shared.DecodeEntities(result)
+ if err == nil {
+ result, _ = ap.base.ResolveHTML(result)
+ }
+ } else {
+ decodedStr, err := base64.StdEncoding.DecodeString(result)
+ if err == nil {
+ result = string(decodedStr)
+ }
+ }
+ }
+
+ // resolve relative URIs in URI-containing elements according to xml:base
+ name := strings.ToLower(p.Name)
+ if uriElements[name] {
+ resolved, err := ap.base.ResolveURL(result)
+ if err == nil {
+ result = resolved
+ }
+ }
+
+ return result, err
+}
+
+func (ap *Parser) parseLanguage(p *xpp.XMLPullParser) string {
+ return p.Attribute("lang")
+}
+
+func (ap *Parser) parseVersion(p *xpp.XMLPullParser) string {
+ ver := p.Attribute("version")
+ if ver != "" {
+ return ver
+ }
+
+ ns := p.Attribute("xmlns")
+ if ns == "http://purl.org/atom/ns#" {
+ return "0.3"
+ }
+
+ if ns == "http://www.w3.org/2005/Atom" {
+ return "1.0"
+ }
+
+ return ""
+}
+
+func (ap *Parser) stripWrappingDiv(content string) (result string) {
+ result = content
+ r := strings.NewReader(result)
+ doc, err := goquery.NewDocumentFromReader(r)
+ if err == nil {
+ root := doc.Find("body").Children()
+ if root.Is("div") && root.Siblings().Size() == 0 {
+ html, err := root.Unwrap().Html()
+ if err == nil {
+ result = html
+ }
+ }
+ }
+ return
+}
diff --git a/vendor/github.com/mmcdole/gofeed/detector.go b/vendor/github.com/mmcdole/gofeed/detector.go
new file mode 100644
index 0000000..6f0eae4
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/detector.go
@@ -0,0 +1,48 @@
+package gofeed
+
+import (
+ "io"
+ "strings"
+
+ "github.com/mmcdole/gofeed/internal/shared"
+ "github.com/mmcdole/goxpp"
+)
+
+// FeedType represents one of the possible feed
+// types that we can detect.
+type FeedType int
+
+const (
+ // FeedTypeUnknown represents a feed that could not have its
+ // type determined.
+ FeedTypeUnknown FeedType = iota
+ // FeedTypeAtom represents an Atom feed
+ FeedTypeAtom
+ // FeedTypeRSS represents an RSS feed
+ FeedTypeRSS
+)
+
+// DetectFeedType attempts to determine the type of feed
+// by looking for specific xml elements unique to the
+// various feed types.
+func DetectFeedType(feed io.Reader) FeedType {
+ p := xpp.NewXMLPullParser(feed, false, shared.NewReaderLabel)
+
+ xmlBase := shared.XMLBase{}
+ _, err := xmlBase.FindRoot(p)
+ if err != nil {
+ return FeedTypeUnknown
+ }
+
+ name := strings.ToLower(p.Name)
+ switch name {
+ case "rdf":
+ return FeedTypeRSS
+ case "rss":
+ return FeedTypeRSS
+ case "feed":
+ return FeedTypeAtom
+ default:
+ return FeedTypeUnknown
+ }
+}
diff --git a/vendor/github.com/mmcdole/gofeed/extensions/dublincore.go b/vendor/github.com/mmcdole/gofeed/extensions/dublincore.go
new file mode 100644
index 0000000..c22132d
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/extensions/dublincore.go
@@ -0,0 +1,45 @@
+package ext
+
+// DublinCoreExtension represents a feed extension
+// for the Dublin Core specification.
+type DublinCoreExtension struct {
+ Title []string `json:"title,omitempty"`
+ Creator []string `json:"creator,omitempty"`
+ Author []string `json:"author,omitempty"`
+ Subject []string `json:"subject,omitempty"`
+ Description []string `json:"description,omitempty"`
+ Publisher []string `json:"publisher,omitempty"`
+ Contributor []string `json:"contributor,omitempty"`
+ Date []string `json:"date,omitempty"`
+ Type []string `json:"type,omitempty"`
+ Format []string `json:"format,omitempty"`
+ Identifier []string `json:"identifier,omitempty"`
+ Source []string `json:"source,omitempty"`
+ Language []string `json:"language,omitempty"`
+ Relation []string `json:"relation,omitempty"`
+ Coverage []string `json:"coverage,omitempty"`
+ Rights []string `json:"rights,omitempty"`
+}
+
+// NewDublinCoreExtension creates a new DublinCoreExtension
+// given the generic extension map for the "dc" prefix.
+func NewDublinCoreExtension(extensions map[string][]Extension) *DublinCoreExtension {
+ dc := &DublinCoreExtension{}
+ dc.Title = parseTextArrayExtension("title", extensions)
+ dc.Creator = parseTextArrayExtension("creator", extensions)
+ dc.Author = parseTextArrayExtension("author", extensions)
+ dc.Subject = parseTextArrayExtension("subject", extensions)
+ dc.Description = parseTextArrayExtension("description", extensions)
+ dc.Publisher = parseTextArrayExtension("publisher", extensions)
+ dc.Contributor = parseTextArrayExtension("contributor", extensions)
+ dc.Date = parseTextArrayExtension("date", extensions)
+ dc.Type = parseTextArrayExtension("type", extensions)
+ dc.Format = parseTextArrayExtension("format", extensions)
+ dc.Identifier = parseTextArrayExtension("identifier", extensions)
+ dc.Source = parseTextArrayExtension("source", extensions)
+ dc.Language = parseTextArrayExtension("language", extensions)
+ dc.Relation = parseTextArrayExtension("relation", extensions)
+ dc.Coverage = parseTextArrayExtension("coverage", extensions)
+ dc.Rights = parseTextArrayExtension("rights", extensions)
+ return dc
+}
diff --git a/vendor/github.com/mmcdole/gofeed/extensions/extensions.go b/vendor/github.com/mmcdole/gofeed/extensions/extensions.go
new file mode 100644
index 0000000..6c50d4a
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/extensions/extensions.go
@@ -0,0 +1,46 @@
+package ext
+
+// Extensions is the generic extension map for Feeds and Items.
+// The first map is for the element namespace prefix (e.g., itunes).
+// The second map is for the element name (e.g., author).
+type Extensions map[string]map[string][]Extension
+
+// Extension represents a single XML element that was in a
+// non-default namespace in a Feed or Item/Entry.
+type Extension struct {
+ Name string `json:"name"`
+ Value string `json:"value"`
+ Attrs map[string]string `json:"attrs"`
+ Children map[string][]Extension `json:"children"`
+}
+
+func parseTextExtension(name string, extensions map[string][]Extension) (value string) {
+ if extensions == nil {
+ return
+ }
+
+ matches, ok := extensions[name]
+ if !ok || len(matches) == 0 {
+ return
+ }
+
+ match := matches[0]
+ return match.Value
+}
+
+func parseTextArrayExtension(name string, extensions map[string][]Extension) (values []string) {
+ if extensions == nil {
+ return
+ }
+
+ matches, ok := extensions[name]
+ if !ok || len(matches) == 0 {
+ return
+ }
+
+ values = []string{}
+ for _, m := range matches {
+ values = append(values, m.Value)
+ }
+ return
+}
diff --git a/vendor/github.com/mmcdole/gofeed/extensions/itunes.go b/vendor/github.com/mmcdole/gofeed/extensions/itunes.go
new file mode 100644
index 0000000..c3fa1c3
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/extensions/itunes.go
@@ -0,0 +1,142 @@
+package ext
+
+// ITunesFeedExtension is a set of extension
+// fields for RSS feeds.
+type ITunesFeedExtension struct {
+ Author string `json:"author,omitempty"`
+ Block string `json:"block,omitempty"`
+ Categories []*ITunesCategory `json:"categories,omitempty"`
+ Explicit string `json:"explicit,omitempty"`
+ Keywords string `json:"keywords,omitempty"`
+ Owner *ITunesOwner `json:"owner,omitempty"`
+ Subtitle string `json:"subtitle,omitempty"`
+ Summary string `json:"summary,omitempty"`
+ Image string `json:"image,omitempty"`
+ Complete string `json:"complete,omitempty"`
+ NewFeedURL string `json:"newFeedUrl,omitempty"`
+}
+
+// ITunesItemExtension is a set of extension
+// fields for RSS items.
+type ITunesItemExtension struct {
+ Author string `json:"author,omitempty"`
+ Block string `json:"block,omitempty"`
+ Duration string `json:"duration,omitempty"`
+ Explicit string `json:"explicit,omitempty"`
+ Keywords string `json:"keywords,omitempty"`
+ Subtitle string `json:"subtitle,omitempty"`
+ Summary string `json:"summary,omitempty"`
+ Image string `json:"image,omitempty"`
+ IsClosedCaptioned string `json:"isClosedCaptioned,omitempty"`
+ Order string `json:"order,omitempty"`
+}
+
+// ITunesCategory is a category element for itunes feeds.
+type ITunesCategory struct {
+ Text string `json:"text,omitempty"`
+ Subcategory *ITunesCategory `json:"subcategory,omitempty"`
+}
+
+// ITunesOwner is the owner of a particular itunes feed.
+type ITunesOwner struct {
+ Email string `json:"email,omitempty"`
+ Name string `json:"name,omitempty"`
+}
+
+// NewITunesFeedExtension creates an ITunesFeedExtension given an
+// extension map for the "itunes" key.
+func NewITunesFeedExtension(extensions map[string][]Extension) *ITunesFeedExtension {
+ feed := &ITunesFeedExtension{}
+ feed.Author = parseTextExtension("author", extensions)
+ feed.Block = parseTextExtension("block", extensions)
+ feed.Explicit = parseTextExtension("explicit", extensions)
+ feed.Keywords = parseTextExtension("keywords", extensions)
+ feed.Subtitle = parseTextExtension("subtitle", extensions)
+ feed.Summary = parseTextExtension("summary", extensions)
+ feed.Image = parseImage(extensions)
+ feed.Complete = parseTextExtension("complete", extensions)
+ feed.NewFeedURL = parseTextExtension("new-feed-url", extensions)
+ feed.Categories = parseCategories(extensions)
+ feed.Owner = parseOwner(extensions)
+ return feed
+}
+
+// NewITunesItemExtension creates an ITunesItemExtension given an
+// extension map for the "itunes" key.
+func NewITunesItemExtension(extensions map[string][]Extension) *ITunesItemExtension {
+ entry := &ITunesItemExtension{}
+ entry.Author = parseTextExtension("author", extensions)
+ entry.Block = parseTextExtension("block", extensions)
+ entry.Duration = parseTextExtension("duration", extensions)
+ entry.Explicit = parseTextExtension("explicit", extensions)
+ entry.Subtitle = parseTextExtension("subtitle", extensions)
+ entry.Summary = parseTextExtension("summary", extensions)
+ entry.Keywords = parseTextExtension("keywords", extensions)
+ entry.Image = parseImage(extensions)
+ entry.IsClosedCaptioned = parseTextExtension("isClosedCaptioned", extensions)
+ entry.Order = parseTextExtension("order", extensions)
+ return entry
+}
+
+func parseImage(extensions map[string][]Extension) (image string) {
+ if extensions == nil {
+ return
+ }
+
+ matches, ok := extensions["image"]
+ if !ok || len(matches) == 0 {
+ return
+ }
+
+ image = matches[0].Attrs["href"]
+ return
+}
+
+func parseOwner(extensions map[string][]Extension) (owner *ITunesOwner) {
+ if extensions == nil {
+ return
+ }
+
+ matches, ok := extensions["owner"]
+ if !ok || len(matches) == 0 {
+ return
+ }
+
+ owner = &ITunesOwner{}
+ if name, ok := matches[0].Children["name"]; ok {
+ owner.Name = name[0].Value
+ }
+ if email, ok := matches[0].Children["email"]; ok {
+ owner.Email = email[0].Value
+ }
+ return
+}
+
+func parseCategories(extensions map[string][]Extension) (categories []*ITunesCategory) {
+ if extensions == nil {
+ return
+ }
+
+ matches, ok := extensions["category"]
+ if !ok || len(matches) == 0 {
+ return
+ }
+
+ categories = []*ITunesCategory{}
+ for _, cat := range matches {
+ c := &ITunesCategory{}
+ if text, ok := cat.Attrs["text"]; ok {
+ c.Text = text
+ }
+
+ if subs, ok := cat.Children["category"]; ok {
+ s := &ITunesCategory{}
+ if text, ok := subs[0].Attrs["text"]; ok {
+ s.Text = text
+ }
+ c.Subcategory = s
+ }
+ categories = append(categories, c)
+ }
+ return
+}
diff --git a/vendor/github.com/mmcdole/gofeed/feed.go b/vendor/github.com/mmcdole/gofeed/feed.go
new file mode 100644
index 0000000..9ee1eaa
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/feed.go
@@ -0,0 +1,84 @@
+package gofeed
+
+import (
+ "encoding/json"
+ "time"
+
+ "github.com/mmcdole/gofeed/extensions"
+)
+
+// Feed is the universal Feed type that atom.Feed
+// and rss.Feed gets translated to. It represents
+// a web feed.
+type Feed struct {
+ Title string `json:"title,omitempty"`
+ Description string `json:"description,omitempty"`
+ Link string `json:"link,omitempty"`
+ FeedLink string `json:"feedLink,omitempty"`
+ Updated string `json:"updated,omitempty"`
+ UpdatedParsed *time.Time `json:"updatedParsed,omitempty"`
+ Published string `json:"published,omitempty"`
+ PublishedParsed *time.Time `json:"publishedParsed,omitempty"`
+ Author *Person `json:"author,omitempty"`
+ Language string `json:"language,omitempty"`
+ Image *Image `json:"image,omitempty"`
+ Copyright string `json:"copyright,omitempty"`
+ Generator string `json:"generator,omitempty"`
+ Categories []string `json:"categories,omitempty"`
+ DublinCoreExt *ext.DublinCoreExtension `json:"dcExt,omitempty"`
+ ITunesExt *ext.ITunesFeedExtension `json:"itunesExt,omitempty"`
+ Extensions ext.Extensions `json:"extensions,omitempty"`
+ Custom map[string]string `json:"custom,omitempty"`
+ Items []*Item `json:"items"`
+ FeedType string `json:"feedType"`
+ FeedVersion string `json:"feedVersion"`
+}
+
+func (f Feed) String() string {
+ json, _ := json.MarshalIndent(f, "", " ")
+ return string(json)
+}
+
+// Item is the universal Item type that atom.Entry
+// and rss.Item gets translated to. It represents
+// a single entry in a given feed.
+type Item struct {
+ Title string `json:"title,omitempty"`
+ Description string `json:"description,omitempty"`
+ Content string `json:"content,omitempty"`
+ Link string `json:"link,omitempty"`
+ Updated string `json:"updated,omitempty"`
+ UpdatedParsed *time.Time `json:"updatedParsed,omitempty"`
+ Published string `json:"published,omitempty"`
+ PublishedParsed *time.Time `json:"publishedParsed,omitempty"`
+ Author *Person `json:"author,omitempty"`
+ GUID string `json:"guid,omitempty"`
+ Image *Image `json:"image,omitempty"`
+ Categories []string `json:"categories,omitempty"`
+ Enclosures []*Enclosure `json:"enclosures,omitempty"`
+ DublinCoreExt *ext.DublinCoreExtension `json:"dcExt,omitempty"`
+ ITunesExt *ext.ITunesItemExtension `json:"itunesExt,omitempty"`
+ Extensions ext.Extensions `json:"extensions,omitempty"`
+ Custom map[string]string `json:"custom,omitempty"`
+}
+
+// Person is an individual specified in a feed
+// (e.g. an author)
+type Person struct {
+ Name string `json:"name,omitempty"`
+ Email string `json:"email,omitempty"`
+}
+
+// Image is an image that is the artwork for a given
+// feed or item.
+type Image struct {
+ URL string `json:"url,omitempty"`
+ Title string `json:"title,omitempty"`
+}
+
+// Enclosure is a file associated with a given Item.
+type Enclosure struct {
+ URL string `json:"url,omitempty"`
+ Length string `json:"length,omitempty"`
+ Type string `json:"type,omitempty"`
+}
diff --git a/vendor/github.com/mmcdole/gofeed/internal/shared/charsetconv.go b/vendor/github.com/mmcdole/gofeed/internal/shared/charsetconv.go
new file mode 100644
index 0000000..a6fcbc6
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/internal/shared/charsetconv.go
@@ -0,0 +1,19 @@
+package shared
+
+import (
+ "io"
+
+ "golang.org/x/net/html/charset"
+)
+
+func NewReaderLabel(label string, input io.Reader) (io.Reader, error) {
+ conv, err := charset.NewReaderLabel(label, input)
+
+ if err != nil {
+ return nil, err
+ }
+
+ // Wrap the charset decoder reader with an XML sanitizer
+ //clean := NewXMLSanitizerReader(conv)
+ return conv, nil
+}
diff --git a/vendor/github.com/mmcdole/gofeed/internal/shared/dateparser.go b/vendor/github.com/mmcdole/gofeed/internal/shared/dateparser.go
new file mode 100644
index 0000000..e0c3d5c
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/internal/shared/dateparser.go
@@ -0,0 +1,196 @@
+package shared
+
+import (
+ "fmt"
+ "strings"
+ "time"
+)
+
+// DateFormats taken from github.com/mjibson/goread
+var dateFormats = []string{
+ time.RFC822, // RSS
+ time.RFC822Z, // RSS
+ time.RFC3339, // Atom
+ time.UnixDate,
+ time.RubyDate,
+ time.RFC850,
+ time.RFC1123Z,
+ time.RFC1123,
+ time.ANSIC,
+ "Mon, January 2 2006 15:04:05 -0700",
+ "Mon, January 02, 2006, 15:04:05 MST",
+ "Mon, January 02, 2006 15:04:05 MST",
+ "Mon, Jan 2, 2006 15:04 MST",
+ "Mon, Jan 2 2006 15:04 MST",
+ "Mon, Jan 2, 2006 15:04:05 MST",
+ "Mon, Jan 2 2006 15:04:05 -700",
+ "Mon, Jan 2 2006 15:04:05 -0700",
+ "Mon Jan 2 15:04 2006",
+ "Mon Jan 2 15:04:05 2006 MST",
+ "Mon Jan 02, 2006 3:04 pm",
+ "Mon, Jan 02,2006 15:04:05 MST",
+ "Mon Jan 02 2006 15:04:05 -0700",
+ "Monday, January 2, 2006 15:04:05 MST",
+ "Monday, January 2, 2006 03:04 PM",
+ "Monday, January 2, 2006",
+ "Monday, January 02, 2006",
+ "Monday, 2 January 2006 15:04:05 MST",
+ "Monday, 2 January 2006 15:04:05 -0700",
+ "Monday, 2 Jan 2006 15:04:05 MST",
+ "Monday, 2 Jan 2006 15:04:05 -0700",
+ "Monday, 02 January 2006 15:04:05 MST",
+ "Monday, 02 January 2006 15:04:05 -0700",
+ "Monday, 02 January 2006 15:04:05",
+ "Mon, 2 January 2006 15:04 MST",
+ "Mon, 2 January 2006, 15:04 -0700",
+ "Mon, 2 January 2006, 15:04:05 MST",
+ "Mon, 2 January 2006 15:04:05 MST",
+ "Mon, 2 January 2006 15:04:05 -0700",
+ "Mon, 2 January 2006",
+ "Mon, 2 Jan 2006 3:04:05 PM -0700",
+ "Mon, 2 Jan 2006 15:4:5 MST",
+ "Mon, 2 Jan 2006 15:4:5 -0700 GMT",
+ "Mon, 2, Jan 2006 15:4",
+ "Mon, 2 Jan 2006 15:04 MST",
+ "Mon, 2 Jan 2006, 15:04 -0700",
+ "Mon, 2 Jan 2006 15:04 -0700",
+ "Mon, 2 Jan 2006 15:04:05 UT",
+ "Mon, 2 Jan 2006 15:04:05MST",
+ "Mon, 2 Jan 2006 15:04:05 MST",
+ "Mon 2 Jan 2006 15:04:05 MST",
+ "mon,2 Jan 2006 15:04:05 MST",
+ "Mon, 2 Jan 2006 15:04:05 -0700 MST",
+ "Mon, 2 Jan 2006 15:04:05-0700",
+ "Mon, 2 Jan 2006 15:04:05 -0700",
+ "Mon, 2 Jan 2006 15:04:05",
+ "Mon, 2 Jan 2006 15:04",
+ "Mon,2 Jan 2006",
+ "Mon, 2 Jan 2006",
+ "Mon, 2 Jan 15:04:05 MST",
+ "Mon, 2 Jan 06 15:04:05 MST",
+ "Mon, 2 Jan 06 15:04:05 -0700",
+ "Mon, 2006-01-02 15:04",
+ "Mon,02 January 2006 14:04:05 MST",
+ "Mon, 02 January 2006",
+ "Mon, 02 Jan 2006 3:04:05 PM MST",
+ "Mon, 02 Jan 2006 15 -0700",
+ "Mon,02 Jan 2006 15:04 MST",
+ "Mon, 02 Jan 2006 15:04 MST",
+ "Mon, 02 Jan 2006 15:04 -0700",
+ "Mon, 02 Jan 2006 15:04:05 Z",
+ "Mon, 02 Jan 2006 15:04:05 UT",
+ "Mon, 02 Jan 2006 15:04:05 MST-07:00",
+ "Mon, 02 Jan 2006 15:04:05 MST -0700",
+ "Mon, 02 Jan 2006, 15:04:05 MST",
+ "Mon, 02 Jan 2006 15:04:05MST",
+ "Mon, 02 Jan 2006 15:04:05 MST",
+ "Mon , 02 Jan 2006 15:04:05 MST",
+ "Mon, 02 Jan 2006 15:04:05 GMT-0700",
+ "Mon,02 Jan 2006 15:04:05 -0700",
+ "Mon, 02 Jan 2006 15:04:05 -0700",
+ "Mon, 02 Jan 2006 15:04:05 -07:00",
+ "Mon, 02 Jan 2006 15:04:05 --0700",
+ "Mon 02 Jan 2006 15:04:05 -0700",
+ "Mon, 02 Jan 2006 15:04:05 -07",
+ "Mon, 02 Jan 2006 15:04:05 00",
+ "Mon, 02 Jan 2006 15:04:05",
+ "Mon, 02 Jan 2006",
+ "Mon, 02 Jan 06 15:04:05 MST",
+ "January 2, 2006 3:04 PM",
+ "January 2, 2006, 3:04 p.m.",
+ "January 2, 2006 15:04:05 MST",
+ "January 2, 2006 15:04:05",
+ "January 2, 2006 03:04 PM",
+ "January 2, 2006",
+ "January 02, 2006 15:04:05 MST",
+ "January 02, 2006 15:04",
+ "January 02, 2006 03:04 PM",
+ "January 02, 2006",
+ "Jan 2, 2006 3:04:05 PM MST",
+ "Jan 2, 2006 3:04:05 PM",
+ "Jan 2, 2006 15:04:05 MST",
+ "Jan 2, 2006",
+ "Jan 02 2006 03:04:05PM",
+ "Jan 02, 2006",
+ "6/1/2 15:04",
+ "6-1-2 15:04",
+ "2 January 2006 15:04:05 MST",
+ "2 January 2006 15:04:05 -0700",
+ "2 January 2006",
+ "2 Jan 2006 15:04:05 Z",
+ "2 Jan 2006 15:04:05 MST",
+ "2 Jan 2006 15:04:05 -0700",
+ "2 Jan 2006",
+ "2.1.2006 15:04:05",
+ "2/1/2006",
+ "2-1-2006",
+ "2006 January 02",
+ "2006-1-2T15:04:05Z",
+ "2006-1-2 15:04:05",
+ "2006-1-2",
+ "2006-1-02T15:04:05Z",
+ "2006-01-02T15:04Z",
+ "2006-01-02T15:04-07:00",
+ "2006-01-02T15:04:05Z",
+ "2006-01-02T15:04:05-07:00:00",
+ "2006-01-02T15:04:05:-0700",
+ "2006-01-02T15:04:05-0700",
+ "2006-01-02T15:04:05-07:00",
+ "2006-01-02T15:04:05 -0700",
+ "2006-01-02T15:04:05:00",
+ "2006-01-02T15:04:05",
+ "2006-01-02 at 15:04:05",
+ "2006-01-02 15:04:05Z",
+ "2006-01-02 15:04:05 MST",
+ "2006-01-02 15:04:05-0700",
+ "2006-01-02 15:04:05-07:00",
+ "2006-01-02 15:04:05 -0700",
+ "2006-01-02 15:04",
+ "2006-01-02 00:00:00.0 15:04:05.0 -0700",
+ "2006/01/02",
+ "2006-01-02",
+ "15:04 02.01.2006 -0700",
+ "1/2/2006 3:04:05 PM MST",
+ "1/2/2006 3:04:05 PM",
+ "1/2/2006 15:04:05 MST",
+ "1/2/2006",
+ "06/1/2 15:04",
+ "06-1-2 15:04",
+ "02 Monday, Jan 2006 15:04",
+ "02 Jan 2006 15:04 MST",
+ "02 Jan 2006 15:04:05 UT",
+ "02 Jan 2006 15:04:05 MST",
+ "02 Jan 2006 15:04:05 -0700",
+ "02 Jan 2006 15:04:05",
+ "02 Jan 2006",
+ "02/01/2006 15:04 MST",
+ "02-01-2006 15:04:05 MST",
+ "02.01.2006 15:04:05",
+ "02/01/2006 15:04:05",
+ "02.01.2006 15:04",
+ "02/01/2006 - 15:04",
+ "02.01.2006 -0700",
+ "02/01/2006",
+ "02-01-2006",
+ "01/02/2006 3:04 PM",
+ "01/02/2006 15:04:05 MST",
+ "01/02/2006 - 15:04",
+ "01/02/2006",
+ "01-02-2006",
+}
+
+// ParseDate parses a given date string using a large
+// list of commonly found feed date formats.
+func ParseDate(ds string) (t time.Time, err error) {
+ d := strings.TrimSpace(ds)
+ if d == "" {
+ return t, fmt.Errorf("Date string is empty")
+ }
+ for _, f := range dateFormats {
+ if t, err = time.Parse(f, d); err == nil {
+ return
+ }
+ }
+ err = fmt.Errorf("Failed to parse date: %s", ds)
+ return
+}
diff --git a/vendor/github.com/mmcdole/gofeed/internal/shared/extparser.go b/vendor/github.com/mmcdole/gofeed/internal/shared/extparser.go
new file mode 100644
index 0000000..79c8d5a
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/internal/shared/extparser.go
@@ -0,0 +1,176 @@
+package shared
+
+import (
+ "strings"
+
+ "github.com/mmcdole/gofeed/extensions"
+ "github.com/mmcdole/goxpp"
+)
+
+// IsExtension returns whether or not the current
+// XML element is an extension element (if it has a
+// non empty prefix)
+func IsExtension(p *xpp.XMLPullParser) bool {
+ space := strings.TrimSpace(p.Space)
+ if prefix, ok := p.Spaces[space]; ok {
+ return !(prefix == "" || prefix == "rss" || prefix == "rdf" || prefix == "content")
+ }
+
+ return p.Space != ""
+}
+
+// ParseExtension parses the current element of the
+// XMLPullParser as an extension element and updates
+// the extension map
+func ParseExtension(fe ext.Extensions, p *xpp.XMLPullParser) (ext.Extensions, error) {
+ prefix := prefixForNamespace(p.Space, p)
+
+ result, err := parseExtensionElement(p)
+ if err != nil {
+ return nil, err
+ }
+
+ // Ensure the extension prefix map exists
+ if _, ok := fe[prefix]; !ok {
+ fe[prefix] = map[string][]ext.Extension{}
+ }
+ // Ensure the extension element slice exists
+ if _, ok := fe[prefix][p.Name]; !ok {
+ fe[prefix][p.Name] = []ext.Extension{}
+ }
+
+ fe[prefix][p.Name] = append(fe[prefix][p.Name], result)
+ return fe, nil
+}
+
+func parseExtensionElement(p *xpp.XMLPullParser) (e ext.Extension, err error) {
+ if err = p.Expect(xpp.StartTag, "*"); err != nil {
+ return e, err
+ }
+
+ e.Name = p.Name
+ e.Children = map[string][]ext.Extension{}
+ e.Attrs = map[string]string{}
+
+ for _, attr := range p.Attrs {
+ // TODO: Is it alright that we are stripping
+ // namespace information from attributes?
+ e.Attrs[attr.Name.Local] = attr.Value
+ }
+
+ for {
+ tok, err := p.Next()
+ if err != nil {
+ return e, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+ child, err := parseExtensionElement(p)
+ if err != nil {
+ return e, err
+ }
+
+ if _, ok := e.Children[child.Name]; !ok {
+ e.Children[child.Name] = []ext.Extension{}
+ }
+
+ e.Children[child.Name] = append(e.Children[child.Name], child)
+ } else if tok == xpp.Text {
+ e.Value += p.Text
+ }
+ }
+
+ e.Value = strings.TrimSpace(e.Value)
+
+ if err = p.Expect(xpp.EndTag, e.Name); err != nil {
+ return e, err
+ }
+
+ return e, nil
+}
+
+func prefixForNamespace(space string, p *xpp.XMLPullParser) string {
+ // First we check if the global namespace map
+ // contains an entry for this namespace/prefix.
+ // This way we can use the canonical prefix for this
+ // ns instead of the one defined in the feed.
+ if prefix, ok := canonicalNamespaces[space]; ok {
+ return prefix
+ }
+
+ // Next we check if the feed itself defined
+ // this namespace and return it if we have a result.
+ if prefix, ok := p.Spaces[space]; ok {
+ return prefix
+ }
+
+ // Lastly, any namespace which is not defined in
+ // the feed will be the prefix itself when using Go's
+ // xml.Decoder.Token() method.
+ return space
+}
+
+// Namespaces taken from github.com/kurtmckee/feedparser
+// These are used for determining canonical name space prefixes
+// for many of the popular RSS/Atom extensions.
+//
+// These canonical prefixes override any prefixes used in the feed itself.
+var canonicalNamespaces = map[string]string{
+ "http://webns.net/mvcb/": "admin",
+ "http://purl.org/rss/1.0/modules/aggregation/": "ag",
+ "http://purl.org/rss/1.0/modules/annotate/": "annotate",
+ "http://media.tangent.org/rss/1.0/": "audio",
+ "http://backend.userland.com/blogChannelModule": "blogChannel",
+ "http://creativecommons.org/ns#license": "cc",
+ "http://web.resource.org/cc/": "cc",
+ "http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html": "creativeCommons",
+ "http://backend.userland.com/creativeCommonsRssModule": "creativeCommons",
+ "http://purl.org/rss/1.0/modules/company": "co",
+ "http://purl.org/rss/1.0/modules/content/": "content",
+ "http://my.theinfo.org/changed/1.0/rss/": "cp",
+ "http://purl.org/dc/elements/1.1/": "dc",
+ "http://purl.org/dc/terms/": "dcterms",
+ "http://purl.org/rss/1.0/modules/email/": "email",
+ "http://purl.org/rss/1.0/modules/event/": "ev",
+ "http://rssnamespace.org/feedburner/ext/1.0": "feedburner",
+ "http://freshmeat.net/rss/fm/": "fm",
+ "http://xmlns.com/foaf/0.1/": "foaf",
+ "http://www.w3.org/2003/01/geo/wgs84_pos#": "geo",
+ "http://www.georss.org/georss": "georss",
+ "http://www.opengis.net/gml": "gml",
+ "http://postneo.com/icbm/": "icbm",
+ "http://purl.org/rss/1.0/modules/image/": "image",
+ "http://www.itunes.com/DTDs/PodCast-1.0.dtd": "itunes",
+ "http://example.com/DTDs/PodCast-1.0.dtd": "itunes",
+ "http://purl.org/rss/1.0/modules/link/": "l",
+ "http://search.yahoo.com/mrss": "media",
+ "http://search.yahoo.com/mrss/": "media",
+ "http://madskills.com/public/xml/rss/module/pingback/": "pingback",
+ "http://prismstandard.org/namespaces/1.2/basic/": "prism",
+ "http://www.w3.org/1999/02/22-rdf-syntax-ns#": "rdf",
+ "http://www.w3.org/2000/01/rdf-schema#": "rdfs",
+ "http://purl.org/rss/1.0/modules/reference/": "ref",
+ "http://purl.org/rss/1.0/modules/richequiv/": "reqv",
+ "http://purl.org/rss/1.0/modules/search/": "search",
+ "http://purl.org/rss/1.0/modules/slash/": "slash",
+ "http://schemas.xmlsoap.org/soap/envelope/": "soap",
+ "http://purl.org/rss/1.0/modules/servicestatus/": "ss",
+ "http://hacks.benhammersley.com/rss/streaming/": "str",
+ "http://purl.org/rss/1.0/modules/subscription/": "sub",
+ "http://purl.org/rss/1.0/modules/syndication/": "sy",
+ "http://schemas.pocketsoap.com/rss/myDescModule/": "szf",
+ "http://purl.org/rss/1.0/modules/taxonomy/": "taxo",
+ "http://purl.org/rss/1.0/modules/threading/": "thr",
+ "http://purl.org/rss/1.0/modules/textinput/": "ti",
+ "http://madskills.com/public/xml/rss/module/trackback/": "trackback",
+ "http://wellformedweb.org/commentAPI/": "wfw",
+ "http://purl.org/rss/1.0/modules/wiki/": "wiki",
+ "http://www.w3.org/1999/xhtml": "xhtml",
+ "http://www.w3.org/1999/xlink": "xlink",
+ "http://www.w3.org/XML/1998/namespace": "xml",
+ "http://podlove.org/simple-chapters": "psc",
+}
diff --git a/vendor/github.com/mmcdole/gofeed/internal/shared/parseutils.go b/vendor/github.com/mmcdole/gofeed/internal/shared/parseutils.go
new file mode 100644
index 0000000..8c523bf
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/internal/shared/parseutils.go
@@ -0,0 +1,153 @@
+package shared
+
+import (
+ "bytes"
+ "errors"
+ "fmt"
+ "regexp"
+ "strconv"
+ "strings"
+
+ "github.com/mmcdole/goxpp"
+)
+
+var (
+ emailNameRgx = regexp.MustCompile(`^([^@]+@[^\s]+)\s+\(([^@]+)\)$`)
+ nameEmailRgx = regexp.MustCompile(`^([^@]+)\s+\(([^@]+@[^)]+)\)$`)
+ nameOnlyRgx = regexp.MustCompile(`^([^@()]+)$`)
+ emailOnlyRgx = regexp.MustCompile(`^([^@()]+@[^@()]+)$`)
+
+ TruncatedEntity = errors.New("truncated entity")
+ InvalidNumericReference = errors.New("invalid numeric reference")
+)
+
+// ParseText is a helper function for parsing the text
+// from the current element of the XMLPullParser.
+// This function can handle parsing naked XML text from
+// an element.
+func ParseText(p *xpp.XMLPullParser) (string, error) {
+ var text struct {
+ Type string `xml:"type,attr"`
+ InnerXML string `xml:",innerxml"`
+ }
+
+ err := p.DecodeElement(&text)
+ if err != nil {
+ return "", err
+ }
+
+ result := text.InnerXML
+ result = strings.TrimSpace(result)
+
+ if strings.HasPrefix(result, "<![CDATA[") &&
+ strings.HasSuffix(result, "]]>") {
+ result = strings.TrimPrefix(result, "<![CDATA[")
+ result = strings.TrimSuffix(result, "]]>")
+ return result, nil
+ }
+
+ return DecodeEntities(result)
+}
+
+// DecodeEntities decodes escaped XML entities
+// in a string and returns the unescaped string
+func DecodeEntities(str string) (string, error) {
+ data := []byte(str)
+ buf := bytes.NewBuffer([]byte{})
+
+ for len(data) > 0 {
+ // Find the next entity
+ idx := bytes.IndexByte(data, '&')
+ if idx == -1 {
+ buf.Write(data)
+ break
+ }
+
+ // Write and skip everything before it
+ buf.Write(data[:idx])
+ data = data[idx+1:]
+
+ if len(data) == 0 {
+ return "", TruncatedEntity
+ }
+
+ // Find the end of the entity
+ end := bytes.IndexByte(data, ';')
+ if end == -1 {
+ return "", TruncatedEntity
+ }
+
+ if data[0] == '#' {
+ // Numerical character reference
+ var str string
+ base := 10
+
+ if len(data) > 1 && data[1] == 'x' {
+ str = string(data[2:end])
+ base = 16
+ } else {
+ str = string(data[1:end])
+ }
+
+ i, err := strconv.ParseUint(str, base, 32)
+ if err != nil {
+ return "", InvalidNumericReference
+ }
+
+ buf.WriteRune(rune(i))
+ } else {
+ // Predefined entity
+ name := string(data[:end])
+
+ var c byte
+ switch name {
+ case "lt":
+ c = '<'
+ case "gt":
+ c = '>'
+ case "quot":
+ c = '"'
+ case "apos":
+ c = '\''
+ case "amp":
+ c = '&'
+ default:
+ return "", fmt.Errorf("unknown predefined "+
+ "entity &%s;", name)
+ }
+
+ buf.WriteByte(c)
+ }
+
+ // Skip the entity
+ data = data[end+1:]
+ }
+
+ return buf.String(), nil
+}
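The numeric-reference branch of DecodeEntities can be exercised in isolation. A minimal sketch, assuming a hypothetical helper `decodeNumericRef` that is not part of gofeed:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeNumericRef decodes a single numeric character reference such as
// "&#65;" or "&#x263A;" into the rune it names, mirroring the decimal/hex
// branch of DecodeEntities above.
func decodeNumericRef(ref string) (string, error) {
	body := strings.TrimSuffix(strings.TrimPrefix(ref, "&#"), ";")
	base := 10
	if strings.HasPrefix(body, "x") {
		body = strings.TrimPrefix(body, "x")
		base = 16
	}
	i, err := strconv.ParseUint(body, base, 32)
	if err != nil {
		return "", err
	}
	return string(rune(i)), nil
}

func main() {
	for _, ref := range []string{"&#65;", "&#x263A;"} {
		s, err := decodeNumericRef(ref)
		fmt.Println(ref, "->", s, err)
	}
}
```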
+
+// ParseNameAddress parses name/email strings commonly
+// found in RSS feeds of the format "Example Name (example@site.com)"
+// and other variations of this format.
+func ParseNameAddress(nameAddressText string) (name string, address string) {
+ if nameAddressText == "" {
+ return
+ }
+
+ if emailNameRgx.MatchString(nameAddressText) {
+ result := emailNameRgx.FindStringSubmatch(nameAddressText)
+ address = result[1]
+ name = result[2]
+ } else if nameEmailRgx.MatchString(nameAddressText) {
+ result := nameEmailRgx.FindStringSubmatch(nameAddressText)
+ name = result[1]
+ address = result[2]
+ } else if nameOnlyRgx.MatchString(nameAddressText) {
+ result := nameOnlyRgx.FindStringSubmatch(nameAddressText)
+ name = result[1]
+ } else if emailOnlyRgx.MatchString(nameAddressText) {
+ result := emailOnlyRgx.FindStringSubmatch(nameAddressText)
+ address = result[1]
+ }
+ return
+}
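The two parenthesized shapes ParseNameAddress handles can be sketched standalone with the same regular expressions; `split` is an illustrative name, not gofeed API:

```go
package main

import (
	"fmt"
	"regexp"
)

// The same two parenthesized patterns used by ParseNameAddress above,
// exercised on the common author-string shapes found in RSS feeds.
var (
	emailName = regexp.MustCompile(`^([^@]+@[^\s]+)\s+\(([^@]+)\)$`)
	nameEmail = regexp.MustCompile(`^([^@]+)\s+\(([^@]+@[^)]+)\)$`)
)

func split(s string) (name, address string) {
	if m := emailName.FindStringSubmatch(s); m != nil {
		return m[2], m[1] // "email (name)" order
	}
	if m := nameEmail.FindStringSubmatch(s); m != nil {
		return m[1], m[2] // "name (email)" order
	}
	return s, ""
}

func main() {
	fmt.Println(split("example@site.com (Example Name)"))
	fmt.Println(split("Example Name (example@site.com)"))
}
```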
diff --git a/vendor/github.com/mmcdole/gofeed/internal/shared/xmlbase.go b/vendor/github.com/mmcdole/gofeed/internal/shared/xmlbase.go
new file mode 100644
index 0000000..bfab57e
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/internal/shared/xmlbase.go
@@ -0,0 +1,258 @@
+package shared
+
+import (
+ "bytes"
+ "fmt"
+ "net/url"
+ "strings"
+
+ "github.com/mmcdole/goxpp"
+ "golang.org/x/net/html"
+)
+
+var (
+ // HTML attributes which contain URIs
+ // https://pythonhosted.org/feedparser/resolving-relative-links.html
+ // To catch every possible URI attribute is non-trivial:
+ // https://stackoverflow.com/questions/2725156/complete-list-of-html-tag-attributes-which-have-a-url-value
+ htmlURIAttrs = map[string]bool{
+ "action": true,
+ "background": true,
+ "cite": true,
+ "codebase": true,
+ "data": true,
+ "href": true,
+ "poster": true,
+ "profile": true,
+ "scheme": true,
+ "src": true,
+ "uri": true,
+ "usemap": true,
+ }
+)
+
+type urlStack []*url.URL
+
+func (s *urlStack) push(u *url.URL) {
+ *s = append([]*url.URL{u}, *s...)
+}
+
+func (s *urlStack) pop() *url.URL {
+ if s == nil || len(*s) == 0 {
+ return nil
+ }
+ var top *url.URL
+ top, *s = (*s)[0], (*s)[1:]
+ return top
+}
+
+func (s *urlStack) top() *url.URL {
+ if s == nil || len(*s) == 0 {
+ return nil
+ }
+ return (*s)[0]
+}
+
+type XMLBase struct {
+ stack urlStack
+ URIAttrs map[string]bool
+}
+
+// FindRoot iterates through the tokens of an xml document until
+// it encounters its first StartTag event. It returns an error
+// if it reaches EndDocument before finding a tag.
+func (b *XMLBase) FindRoot(p *xpp.XMLPullParser) (event xpp.XMLEventType, err error) {
+ for {
+ event, err = b.NextTag(p)
+ if err != nil {
+ return event, err
+ }
+ if event == xpp.StartTag {
+ break
+ }
+
+ if event == xpp.EndDocument {
+ return event, fmt.Errorf("Failed to find root node before document end.")
+ }
+ }
+ return
+}
+
+// XMLBase.NextTag iterates through the tokens until it reaches a StartTag or
+// EndTag. It maintains the urlStack upon encountering StartTag and EndTags, so
+// that the top of the stack (accessible through the CurrentBase() and
+// CurrentBaseURL() methods) is the absolute base URI by which relative URIs
+// should be resolved.
+//
+// NextTag is similar to goxpp's NextTag method except it won't throw an error
+// if the next immediate token isn't a Start/EndTag. Instead, it will continue
+// to consume tokens until it hits a Start/EndTag or EndDocument.
+func (b *XMLBase) NextTag(p *xpp.XMLPullParser) (event xpp.XMLEventType, err error) {
+ for {
+
+ if p.Event == xpp.EndTag {
+ // Pop xml:base after each end tag
+ b.pop()
+ }
+
+ event, err = p.Next()
+ if err != nil {
+ return event, err
+ }
+
+ if event == xpp.EndTag {
+ break
+ }
+
+ if event == xpp.StartTag {
+ base := parseBase(p)
+ err = b.push(base)
+ if err != nil {
+ return
+ }
+
+ err = b.resolveAttrs(p)
+ if err != nil {
+ return
+ }
+
+ break
+ }
+
+ if event == xpp.EndDocument {
+ return event, fmt.Errorf("Failed to find NextTag before reaching the end of the document.")
+ }
+
+ }
+ return
+}
+
+func parseBase(p *xpp.XMLPullParser) string {
+ xmlURI := "http://www.w3.org/XML/1998/namespace"
+ for _, attr := range p.Attrs {
+ if attr.Name.Local == "base" && attr.Name.Space == xmlURI {
+ return attr.Value
+ }
+ }
+ return ""
+}
+
+func (b *XMLBase) push(base string) error {
+ newURL, err := url.Parse(base)
+ if err != nil {
+ return err
+ }
+
+ topURL := b.CurrentBaseURL()
+ if topURL != nil {
+ newURL = topURL.ResolveReference(newURL)
+ }
+ b.stack.push(newURL)
+ return nil
+}
+
+// returns the popped base URL
+func (b *XMLBase) pop() string {
+ url := b.stack.pop()
+ if url != nil {
+ return url.String()
+ }
+ return ""
+}
+
+func (b *XMLBase) CurrentBaseURL() *url.URL {
+ return b.stack.top()
+}
+
+func (b *XMLBase) CurrentBase() string {
+ if url := b.CurrentBaseURL(); url != nil {
+ return url.String()
+ }
+ return ""
+}
+
+// resolve the given string as a URL relative to current base
+func (b *XMLBase) ResolveURL(u string) (string, error) {
+ if b.CurrentBase() == "" {
+ return u, nil
+ }
+
+ relURL, err := url.Parse(u)
+ if err != nil {
+ return u, err
+ }
+ curr := b.CurrentBaseURL()
+ if curr.Path != "" && u != "" && curr.Path[len(curr.Path)-1] != '/' {
+ // There's no reason someone would use a path in xml:base if they
+ // didn't mean for it to be a directory
+ curr.Path = curr.Path + "/"
+ }
+ absURL := b.CurrentBaseURL().ResolveReference(relURL)
+ return absURL.String(), nil
+}
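The trailing-slash adjustment above matters because net/url's RFC 3986 resolution treats the last path segment of a base as a file and replaces it. A standalone illustration (`resolve` is our helper name):

```go
package main

import (
	"fmt"
	"net/url"
)

// resolve resolves ref against base using net/url's RFC 3986 rules.
func resolve(base, ref string) string {
	b, err := url.Parse(base)
	if err != nil {
		return ref
	}
	r, err := url.Parse(ref)
	if err != nil {
		return ref
	}
	return b.ResolveReference(r).String()
}

func main() {
	// Without a trailing slash, "feeds" is treated as a file and replaced.
	fmt.Println(resolve("http://example.com/feeds", "img/logo.png")) // http://example.com/img/logo.png
	// With a trailing slash, "feeds/" is a directory and is kept.
	fmt.Println(resolve("http://example.com/feeds/", "img/logo.png")) // http://example.com/feeds/img/logo.png
}
```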
+
+// resolve relative URI attributes according to xml:base
+func (b *XMLBase) resolveAttrs(p *xpp.XMLPullParser) error {
+ for i, attr := range p.Attrs {
+ lowerName := strings.ToLower(attr.Name.Local)
+ if b.URIAttrs[lowerName] {
+ absURL, err := b.ResolveURL(attr.Value)
+ if err != nil {
+ return err
+ }
+ p.Attrs[i].Value = absURL
+ }
+ }
+ return nil
+}
+
+// ResolveHTML transforms the given HTML by resolving any relative URIs found
+// in its attributes. If an error occurs during parsing or serialization, the
+// original string is returned along with the error.
+func (b *XMLBase) ResolveHTML(relHTML string) (string, error) {
+ if b.CurrentBase() == "" {
+ return relHTML, nil
+ }
+
+ htmlReader := strings.NewReader(relHTML)
+
+ doc, err := html.Parse(htmlReader)
+ if err != nil {
+ return relHTML, err
+ }
+
+ var visit func(*html.Node)
+
+ // recursively traverse HTML resolving any relative URIs in attributes
+ visit = func(n *html.Node) {
+ if n.Type == html.ElementNode {
+ for i, a := range n.Attr {
+ if htmlURIAttrs[a.Key] {
+ absVal, err := b.ResolveURL(a.Val)
+ if err == nil {
+ n.Attr[i].Val = absVal
+ }
+ break
+ }
+ }
+ }
+ for c := n.FirstChild; c != nil; c = c.NextSibling {
+ visit(c)
+ }
+ }
+
+ visit(doc)
+ var w bytes.Buffer
+ err = html.Render(&w, doc)
+ if err != nil {
+ return relHTML, err
+ }
+
+ // html.Render() always writes a complete html5 document, so strip the html
+ // and body tags
+ absHTML := w.String()
+ absHTML = strings.TrimPrefix(absHTML, "<html><head></head><body>")
+ absHTML = strings.TrimSuffix(absHTML, "</body></html>")
+
+ return absHTML, err
+}
diff --git a/vendor/github.com/mmcdole/gofeed/internal/shared/xmlsanitizer.go b/vendor/github.com/mmcdole/gofeed/internal/shared/xmlsanitizer.go
new file mode 100644
index 0000000..14c9ead
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/internal/shared/xmlsanitizer.go
@@ -0,0 +1,23 @@
+package shared
+
+import (
+ "io"
+
+ "golang.org/x/text/transform"
+)
+
+// NewXMLSanitizerReader creates an io.Reader that
+// wraps another io.Reader and removes illegal xml
+// characters from the io stream.
+func NewXMLSanitizerReader(xml io.Reader) io.Reader {
+ isIllegal := func(r rune) bool {
+ return !(r == 0x09 ||
+ r == 0x0A ||
+ r == 0x0D ||
+ r >= 0x20 && r <= 0xD7FF ||
+ r >= 0xE000 && r <= 0xFFFD ||
+ r >= 0x10000 && r <= 0x10FFFF)
+ }
+ t := transform.Chain(transform.RemoveFunc(isIllegal))
+ return transform.NewReader(xml, t)
+}
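The sanitizer's predicate follows the XML 1.0 Char production: #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF] (note the BMP range ends at 0xD7FF, just below the surrogate block). A stdlib-only sketch of the same filter for in-memory strings, using strings.Map instead of x/text's transform chain:

```go
package main

import (
	"fmt"
	"strings"
)

// legalXML reports whether r is allowed by the XML 1.0 Char production.
func legalXML(r rune) bool {
	return r == 0x09 || r == 0x0A || r == 0x0D ||
		(r >= 0x20 && r <= 0xD7FF) ||
		(r >= 0xE000 && r <= 0xFFFD) ||
		(r >= 0x10000 && r <= 0x10FFFF)
}

// sanitize drops every illegal rune, like the reader above but for a string.
func sanitize(s string) string {
	return strings.Map(func(r rune) rune {
		if legalXML(r) {
			return r
		}
		return -1 // -1 tells strings.Map to drop the rune
	}, s)
}

func main() {
	fmt.Printf("%q\n", sanitize("ok\x00\x0Btext")) // "oktext"
}
```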
diff --git a/vendor/github.com/mmcdole/gofeed/parser.go b/vendor/github.com/mmcdole/gofeed/parser.go
new file mode 100644
index 0000000..1c5a243
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/parser.go
@@ -0,0 +1,153 @@
+package gofeed
+
+import (
+ "bytes"
+ "errors"
+ "fmt"
+ "io"
+ "net/http"
+ "strings"
+
+ "github.com/mmcdole/gofeed/atom"
+ "github.com/mmcdole/gofeed/rss"
+)
+
+// ErrFeedTypeNotDetected is returned when the detection system cannot figure
+// out the feed format
+var ErrFeedTypeNotDetected = errors.New("Failed to detect feed type")
+
+// HTTPError represents an HTTP error returned by a server.
+type HTTPError struct {
+ StatusCode int
+ Status string
+}
+
+func (err HTTPError) Error() string {
+ return fmt.Sprintf("http error: %s", err.Status)
+}
+
+// Parser is a universal feed parser that detects
+// a given feed type, parses it, and translates it
+// to the universal feed type.
+type Parser struct {
+ AtomTranslator Translator
+ RSSTranslator Translator
+ Client *http.Client
+ rp *rss.Parser
+ ap *atom.Parser
+}
+
+// NewParser creates a universal feed parser.
+func NewParser() *Parser {
+ fp := Parser{
+ rp: &rss.Parser{},
+ ap: &atom.Parser{},
+ }
+ return &fp
+}
+
+// Parse parses an RSS or Atom feed into
+// the universal gofeed.Feed. It takes an
+// io.Reader which should return the xml content.
+func (f *Parser) Parse(feed io.Reader) (*Feed, error) {
+ // Wrap the feed io.Reader in a io.TeeReader
+ // so we can capture all the bytes read by the
+ // DetectFeedType function and construct a new
+ // reader with those bytes intact for when we
+ // attempt to parse the feeds.
+ var buf bytes.Buffer
+ tee := io.TeeReader(feed, &buf)
+ feedType := DetectFeedType(tee)
+
+ // Glue the read bytes from the detect function
+ // back into a new reader
+ r := io.MultiReader(&buf, feed)
+
+ switch feedType {
+ case FeedTypeAtom:
+ return f.parseAtomFeed(r)
+ case FeedTypeRSS:
+ return f.parseRSSFeed(r)
+ }
+
+ return nil, ErrFeedTypeNotDetected
+}
+
+// ParseURL fetches the contents of a given url and
+// attempts to parse the response into the universal feed type.
+func (f *Parser) ParseURL(feedURL string) (feed *Feed, err error) {
+ client := f.httpClient()
+
+ req, err := http.NewRequest("GET", feedURL, nil)
+ if err != nil {
+ return nil, err
+ }
+ req.Header.Set("User-Agent", "Gofeed/1.0")
+ resp, err := client.Do(req)
+
+ if err != nil {
+ return nil, err
+ }
+
+ if resp != nil {
+ defer func() {
+ ce := resp.Body.Close()
+ if ce != nil {
+ err = ce
+ }
+ }()
+ }
+
+ if resp.StatusCode < 200 || resp.StatusCode >= 300 {
+ return nil, HTTPError{
+ StatusCode: resp.StatusCode,
+ Status: resp.Status,
+ }
+ }
+
+ return f.Parse(resp.Body)
+}
+
+// ParseString parses a feed XML string into the
+// universal feed type.
+func (f *Parser) ParseString(feed string) (*Feed, error) {
+ return f.Parse(strings.NewReader(feed))
+}
+
+func (f *Parser) parseAtomFeed(feed io.Reader) (*Feed, error) {
+ af, err := f.ap.Parse(feed)
+ if err != nil {
+ return nil, err
+ }
+ return f.atomTrans().Translate(af)
+}
+
+func (f *Parser) parseRSSFeed(feed io.Reader) (*Feed, error) {
+ rf, err := f.rp.Parse(feed)
+ if err != nil {
+ return nil, err
+ }
+
+ return f.rssTrans().Translate(rf)
+}
+
+func (f *Parser) atomTrans() Translator {
+ if f.AtomTranslator != nil {
+ return f.AtomTranslator
+ }
+ f.AtomTranslator = &DefaultAtomTranslator{}
+ return f.AtomTranslator
+}
+
+func (f *Parser) rssTrans() Translator {
+ if f.RSSTranslator != nil {
+ return f.RSSTranslator
+ }
+ f.RSSTranslator = &DefaultRSSTranslator{}
+ return f.RSSTranslator
+}
+
+func (f *Parser) httpClient() *http.Client {
+ if f.Client != nil {
+ return f.Client
+ }
+ f.Client = &http.Client{}
+ return f.Client
+}
diff --git a/vendor/github.com/mmcdole/gofeed/rss/feed.go b/vendor/github.com/mmcdole/gofeed/rss/feed.go
new file mode 100644
index 0000000..5366a4e
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/rss/feed.go
@@ -0,0 +1,120 @@
+package rss
+
+import (
+ "encoding/json"
+ "time"
+
+ "github.com/mmcdole/gofeed/extensions"
+)
+
+// Feed is an RSS Feed
+type Feed struct {
+ Title string `json:"title,omitempty"`
+ Link string `json:"link,omitempty"`
+ Description string `json:"description,omitempty"`
+ Language string `json:"language,omitempty"`
+ Copyright string `json:"copyright,omitempty"`
+ ManagingEditor string `json:"managingEditor,omitempty"`
+ WebMaster string `json:"webMaster,omitempty"`
+ PubDate string `json:"pubDate,omitempty"`
+ PubDateParsed *time.Time `json:"pubDateParsed,omitempty"`
+ LastBuildDate string `json:"lastBuildDate,omitempty"`
+ LastBuildDateParsed *time.Time `json:"lastBuildDateParsed,omitempty"`
+ Categories []*Category `json:"categories,omitempty"`
+ Generator string `json:"generator,omitempty"`
+ Docs string `json:"docs,omitempty"`
+ TTL string `json:"ttl,omitempty"`
+ Image *Image `json:"image,omitempty"`
+ Rating string `json:"rating,omitempty"`
+ SkipHours []string `json:"skipHours,omitempty"`
+ SkipDays []string `json:"skipDays,omitempty"`
+ Cloud *Cloud `json:"cloud,omitempty"`
+ TextInput *TextInput `json:"textInput,omitempty"`
+ DublinCoreExt *ext.DublinCoreExtension `json:"dcExt,omitempty"`
+ ITunesExt *ext.ITunesFeedExtension `json:"itunesExt,omitempty"`
+ Extensions ext.Extensions `json:"extensions,omitempty"`
+ Items []*Item `json:"items"`
+ Version string `json:"version"`
+}
+
+func (f Feed) String() string {
+ json, _ := json.MarshalIndent(f, "", " ")
+ return string(json)
+}
+
+// Item is an RSS Item
+type Item struct {
+ Title string `json:"title,omitempty"`
+ Link string `json:"link,omitempty"`
+ Description string `json:"description,omitempty"`
+ Content string `json:"content,omitempty"`
+ Author string `json:"author,omitempty"`
+ Categories []*Category `json:"categories,omitempty"`
+ Comments string `json:"comments,omitempty"`
+ Enclosure *Enclosure `json:"enclosure,omitempty"`
+ GUID *GUID `json:"guid,omitempty"`
+ PubDate string `json:"pubDate,omitempty"`
+ PubDateParsed *time.Time `json:"pubDateParsed,omitempty"`
+ Source *Source `json:"source,omitempty"`
+ DublinCoreExt *ext.DublinCoreExtension `json:"dcExt,omitempty"`
+ ITunesExt *ext.ITunesItemExtension `json:"itunesExt,omitempty"`
+ Extensions ext.Extensions `json:"extensions,omitempty"`
+}
+
+// Image is an image that represents the feed
+type Image struct {
+ URL string `json:"url,omitempty"`
+ Link string `json:"link,omitempty"`
+ Title string `json:"title,omitempty"`
+ Width string `json:"width,omitempty"`
+ Height string `json:"height,omitempty"`
+ Description string `json:"description,omitempty"`
+}
+
+// Enclosure is a media object that is attached to
+// the item
+type Enclosure struct {
+ URL string `json:"url,omitempty"`
+ Length string `json:"length,omitempty"`
+ Type string `json:"type,omitempty"`
+}
+
+// GUID is a unique identifier for an item
+type GUID struct {
+ Value string `json:"value,omitempty"`
+ IsPermalink string `json:"isPermalink,omitempty"`
+}
+
+// Source contains feed information for another
+// feed if a given item came from that feed
+type Source struct {
+ Title string `json:"title,omitempty"`
+ URL string `json:"url,omitempty"`
+}
+
+// Category is category metadata for Feeds and Entries
+type Category struct {
+ Domain string `json:"domain,omitempty"`
+ Value string `json:"value,omitempty"`
+}
+
+// TextInput specifies a text input box that
+// can be displayed with the channel
+type TextInput struct {
+ Title string `json:"title,omitempty"`
+ Description string `json:"description,omitempty"`
+ Name string `json:"name,omitempty"`
+ Link string `json:"link,omitempty"`
+}
+
+// Cloud allows processes to register with a
+// cloud to be notified of updates to the channel,
+// implementing a lightweight publish-subscribe protocol
+// for RSS feeds
+type Cloud struct {
+ Domain string `json:"domain,omitempty"`
+ Port string `json:"port,omitempty"`
+ Path string `json:"path,omitempty"`
+ RegisterProcedure string `json:"registerProcedure,omitempty"`
+ Protocol string `json:"protocol,omitempty"`
+}
diff --git a/vendor/github.com/mmcdole/gofeed/rss/parser.go b/vendor/github.com/mmcdole/gofeed/rss/parser.go
new file mode 100644
index 0000000..9fe9029
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/rss/parser.go
@@ -0,0 +1,770 @@
+package rss
+
+import (
+ "fmt"
+ "io"
+ "strings"
+
+ "github.com/mmcdole/gofeed/extensions"
+ "github.com/mmcdole/gofeed/internal/shared"
+ "github.com/mmcdole/goxpp"
+)
+
+// Parser is a RSS Parser
+type Parser struct {
+ base *shared.XMLBase
+}
+
+// Parse parses an xml feed into an rss.Feed
+func (rp *Parser) Parse(feed io.Reader) (*Feed, error) {
+ p := xpp.NewXMLPullParser(feed, false, shared.NewReaderLabel)
+ rp.base = &shared.XMLBase{}
+
+ _, err := rp.base.FindRoot(p)
+ if err != nil {
+ return nil, err
+ }
+
+ return rp.parseRoot(p)
+}
+
+func (rp *Parser) parseRoot(p *xpp.XMLPullParser) (*Feed, error) {
+ rssErr := p.Expect(xpp.StartTag, "rss")
+ rdfErr := p.Expect(xpp.StartTag, "rdf")
+ if rssErr != nil && rdfErr != nil {
+ return nil, fmt.Errorf("%s or %s", rssErr.Error(), rdfErr.Error())
+ }
+
+ // Items found in feed root
+ var channel *Feed
+ var textinput *TextInput
+ var image *Image
+ items := []*Item{}
+
+ ver := rp.parseVersion(p)
+
+ for {
+ tok, err := rp.base.NextTag(p)
+ if err != nil {
+ return nil, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+
+ // Skip any extensions found in the feed root.
+ if shared.IsExtension(p) {
+ p.Skip()
+ continue
+ }
+
+ name := strings.ToLower(p.Name)
+
+ if name == "channel" {
+ channel, err = rp.parseChannel(p)
+ if err != nil {
+ return nil, err
+ }
+ } else if name == "item" {
+ item, err := rp.parseItem(p)
+ if err != nil {
+ return nil, err
+ }
+ items = append(items, item)
+ } else if name == "textinput" {
+ textinput, err = rp.parseTextInput(p)
+ if err != nil {
+ return nil, err
+ }
+ } else if name == "image" {
+ image, err = rp.parseImage(p)
+ if err != nil {
+ return nil, err
+ }
+ } else {
+ p.Skip()
+ }
+ }
+ }
+
+ rssErr = p.Expect(xpp.EndTag, "rss")
+ rdfErr = p.Expect(xpp.EndTag, "rdf")
+ if rssErr != nil && rdfErr != nil {
+ return nil, fmt.Errorf("%s or %s", rssErr.Error(), rdfErr.Error())
+ }
+
+ if channel == nil {
+ channel = &Feed{}
+ channel.Items = []*Item{}
+ }
+
+ if len(items) > 0 {
+ channel.Items = append(channel.Items, items...)
+ }
+
+ if textinput != nil {
+ channel.TextInput = textinput
+ }
+
+ if image != nil {
+ channel.Image = image
+ }
+
+ channel.Version = ver
+ return channel, nil
+}
+
+func (rp *Parser) parseChannel(p *xpp.XMLPullParser) (rss *Feed, err error) {
+
+ if err = p.Expect(xpp.StartTag, "channel"); err != nil {
+ return nil, err
+ }
+
+ rss = &Feed{}
+ rss.Items = []*Item{}
+
+ extensions := ext.Extensions{}
+ categories := []*Category{}
+
+ for {
+ tok, err := rp.base.NextTag(p)
+ if err != nil {
+ return nil, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+
+ name := strings.ToLower(p.Name)
+
+ if shared.IsExtension(p) {
+ ext, err := shared.ParseExtension(extensions, p)
+ if err != nil {
+ return nil, err
+ }
+ extensions = ext
+ } else if name == "title" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Title = result
+ } else if name == "description" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Description = result
+ } else if name == "link" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Link = result
+ } else if name == "language" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Language = result
+ } else if name == "copyright" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Copyright = result
+ } else if name == "managingeditor" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.ManagingEditor = result
+ } else if name == "webmaster" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.WebMaster = result
+ } else if name == "pubdate" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.PubDate = result
+ date, err := shared.ParseDate(result)
+ if err == nil {
+ utcDate := date.UTC()
+ rss.PubDateParsed = &utcDate
+ }
+ } else if name == "lastbuilddate" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.LastBuildDate = result
+ date, err := shared.ParseDate(result)
+ if err == nil {
+ utcDate := date.UTC()
+ rss.LastBuildDateParsed = &utcDate
+ }
+ } else if name == "generator" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Generator = result
+ } else if name == "docs" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Docs = result
+ } else if name == "ttl" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.TTL = result
+ } else if name == "rating" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Rating = result
+ } else if name == "skiphours" {
+ result, err := rp.parseSkipHours(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.SkipHours = result
+ } else if name == "skipdays" {
+ result, err := rp.parseSkipDays(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.SkipDays = result
+ } else if name == "item" {
+ result, err := rp.parseItem(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Items = append(rss.Items, result)
+ } else if name == "cloud" {
+ result, err := rp.parseCloud(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Cloud = result
+ } else if name == "category" {
+ result, err := rp.parseCategory(p)
+ if err != nil {
+ return nil, err
+ }
+ categories = append(categories, result)
+ } else if name == "image" {
+ result, err := rp.parseImage(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.Image = result
+ } else if name == "textinput" {
+ result, err := rp.parseTextInput(p)
+ if err != nil {
+ return nil, err
+ }
+ rss.TextInput = result
+ } else {
+ // Skip element as it isn't an extension and not
+ // part of the spec
+ p.Skip()
+ }
+ }
+ }
+
+ if err = p.Expect(xpp.EndTag, "channel"); err != nil {
+ return nil, err
+ }
+
+ if len(categories) > 0 {
+ rss.Categories = categories
+ }
+
+ if len(extensions) > 0 {
+ rss.Extensions = extensions
+
+ if itunes, ok := rss.Extensions["itunes"]; ok {
+ rss.ITunesExt = ext.NewITunesFeedExtension(itunes)
+ }
+
+ if dc, ok := rss.Extensions["dc"]; ok {
+ rss.DublinCoreExt = ext.NewDublinCoreExtension(dc)
+ }
+ }
+
+ return rss, nil
+}
+
+func (rp *Parser) parseItem(p *xpp.XMLPullParser) (item *Item, err error) {
+
+ if err = p.Expect(xpp.StartTag, "item"); err != nil {
+ return nil, err
+ }
+
+ item = &Item{}
+ extensions := ext.Extensions{}
+ categories := []*Category{}
+
+ for {
+ tok, err := rp.base.NextTag(p)
+ if err != nil {
+ return nil, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+
+ name := strings.ToLower(p.Name)
+
+ if shared.IsExtension(p) {
+ ext, err := shared.ParseExtension(extensions, p)
+ if err != nil {
+ return nil, err
+ }
+ item.Extensions = ext
+ } else if name == "title" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ item.Title = result
+ } else if name == "description" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ item.Description = result
+ } else if name == "encoded" {
+ space := strings.TrimSpace(p.Space)
+ if prefix, ok := p.Spaces[space]; ok && prefix == "content" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ item.Content = result
+ }
+ } else if name == "link" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ item.Link = result
+ } else if name == "author" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ item.Author = result
+ } else if name == "comments" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ item.Comments = result
+ } else if name == "pubdate" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ item.PubDate = result
+ date, err := shared.ParseDate(result)
+ if err == nil {
+ utcDate := date.UTC()
+ item.PubDateParsed = &utcDate
+ }
+ } else if name == "source" {
+ result, err := rp.parseSource(p)
+ if err != nil {
+ return nil, err
+ }
+ item.Source = result
+ } else if name == "enclosure" {
+ result, err := rp.parseEnclosure(p)
+ if err != nil {
+ return nil, err
+ }
+ item.Enclosure = result
+ } else if name == "guid" {
+ result, err := rp.parseGUID(p)
+ if err != nil {
+ return nil, err
+ }
+ item.GUID = result
+ } else if name == "category" {
+ result, err := rp.parseCategory(p)
+ if err != nil {
+ return nil, err
+ }
+ categories = append(categories, result)
+ } else {
+ // Skip any elements not part of the item spec
+ p.Skip()
+ }
+ }
+ }
+
+ if len(categories) > 0 {
+ item.Categories = categories
+ }
+
+ if len(extensions) > 0 {
+ item.Extensions = extensions
+
+ if itunes, ok := item.Extensions["itunes"]; ok {
+ item.ITunesExt = ext.NewITunesItemExtension(itunes)
+ }
+
+ if dc, ok := item.Extensions["dc"]; ok {
+ item.DublinCoreExt = ext.NewDublinCoreExtension(dc)
+ }
+ }
+
+ if err = p.Expect(xpp.EndTag, "item"); err != nil {
+ return nil, err
+ }
+
+ return item, nil
+}
+
+func (rp *Parser) parseSource(p *xpp.XMLPullParser) (source *Source, err error) {
+ if err = p.Expect(xpp.StartTag, "source"); err != nil {
+ return nil, err
+ }
+
+ source = &Source{}
+ source.URL = p.Attribute("url")
+
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return source, err
+ }
+ source.Title = result
+
+ if err = p.Expect(xpp.EndTag, "source"); err != nil {
+ return nil, err
+ }
+ return source, nil
+}
+
+func (rp *Parser) parseEnclosure(p *xpp.XMLPullParser) (enclosure *Enclosure, err error) {
+ if err = p.Expect(xpp.StartTag, "enclosure"); err != nil {
+ return nil, err
+ }
+
+ enclosure = &Enclosure{}
+ enclosure.URL = p.Attribute("url")
+ enclosure.Length = p.Attribute("length")
+ enclosure.Type = p.Attribute("type")
+
+ // Ignore any enclosure text
+ _, err = p.NextText()
+ if err != nil {
+ return enclosure, err
+ }
+
+ if err = p.Expect(xpp.EndTag, "enclosure"); err != nil {
+ return nil, err
+ }
+
+ return enclosure, nil
+}
+
+func (rp *Parser) parseImage(p *xpp.XMLPullParser) (image *Image, err error) {
+ if err = p.Expect(xpp.StartTag, "image"); err != nil {
+ return nil, err
+ }
+
+ image = &Image{}
+
+ for {
+ tok, err := rp.base.NextTag(p)
+ if err != nil {
+ return image, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+ name := strings.ToLower(p.Name)
+
+ if name == "url" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ image.URL = result
+ } else if name == "title" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ image.Title = result
+ } else if name == "link" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ image.Link = result
+ } else if name == "width" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ image.Width = result
+ } else if name == "height" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ image.Height = result
+ } else if name == "description" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ image.Description = result
+ } else {
+ p.Skip()
+ }
+ }
+ }
+
+ if err = p.Expect(xpp.EndTag, "image"); err != nil {
+ return nil, err
+ }
+
+ return image, nil
+}
+
+func (rp *Parser) parseGUID(p *xpp.XMLPullParser) (guid *GUID, err error) {
+ if err = p.Expect(xpp.StartTag, "guid"); err != nil {
+ return nil, err
+ }
+
+ guid = &GUID{}
+ // The RSS 2.0 spec spells the attribute "isPermaLink" (capital L).
+ guid.IsPermalink = p.Attribute("isPermaLink")
+
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return
+ }
+ guid.Value = result
+
+ if err = p.Expect(xpp.EndTag, "guid"); err != nil {
+ return nil, err
+ }
+
+ return guid, nil
+}
+
+func (rp *Parser) parseCategory(p *xpp.XMLPullParser) (cat *Category, err error) {
+
+ if err = p.Expect(xpp.StartTag, "category"); err != nil {
+ return nil, err
+ }
+
+ cat = &Category{}
+ cat.Domain = p.Attribute("domain")
+
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+
+ cat.Value = result
+
+ if err = p.Expect(xpp.EndTag, "category"); err != nil {
+ return nil, err
+ }
+ return cat, nil
+}
+
+func (rp *Parser) parseTextInput(p *xpp.XMLPullParser) (*TextInput, error) {
+ if err := p.Expect(xpp.StartTag, "textinput"); err != nil {
+ return nil, err
+ }
+
+ ti := &TextInput{}
+
+ for {
+ tok, err := rp.base.NextTag(p)
+ if err != nil {
+ return nil, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+ name := strings.ToLower(p.Name)
+
+ if name == "title" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ ti.Title = result
+ } else if name == "description" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ ti.Description = result
+ } else if name == "name" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ ti.Name = result
+ } else if name == "link" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ ti.Link = result
+ } else {
+ p.Skip()
+ }
+ }
+ }
+
+ if err := p.Expect(xpp.EndTag, "textinput"); err != nil {
+ return nil, err
+ }
+
+ return ti, nil
+}
+
+func (rp *Parser) parseSkipHours(p *xpp.XMLPullParser) ([]string, error) {
+ if err := p.Expect(xpp.StartTag, "skiphours"); err != nil {
+ return nil, err
+ }
+
+ hours := []string{}
+
+ for {
+ tok, err := rp.base.NextTag(p)
+ if err != nil {
+ return nil, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+ name := strings.ToLower(p.Name)
+ if name == "hour" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ hours = append(hours, result)
+ } else {
+ p.Skip()
+ }
+ }
+ }
+
+ if err := p.Expect(xpp.EndTag, "skiphours"); err != nil {
+ return nil, err
+ }
+
+ return hours, nil
+}
+
+func (rp *Parser) parseSkipDays(p *xpp.XMLPullParser) ([]string, error) {
+ if err := p.Expect(xpp.StartTag, "skipdays"); err != nil {
+ return nil, err
+ }
+
+ days := []string{}
+
+ for {
+ tok, err := rp.base.NextTag(p)
+ if err != nil {
+ return nil, err
+ }
+
+ if tok == xpp.EndTag {
+ break
+ }
+
+ if tok == xpp.StartTag {
+ name := strings.ToLower(p.Name)
+ if name == "day" {
+ result, err := shared.ParseText(p)
+ if err != nil {
+ return nil, err
+ }
+ days = append(days, result)
+ } else {
+ p.Skip()
+ }
+ }
+ }
+
+ if err := p.Expect(xpp.EndTag, "skipdays"); err != nil {
+ return nil, err
+ }
+
+ return days, nil
+}
+
+func (rp *Parser) parseCloud(p *xpp.XMLPullParser) (*Cloud, error) {
+ if err := p.Expect(xpp.StartTag, "cloud"); err != nil {
+ return nil, err
+ }
+
+ cloud := &Cloud{}
+ cloud.Domain = p.Attribute("domain")
+ cloud.Port = p.Attribute("port")
+ cloud.Path = p.Attribute("path")
+ cloud.RegisterProcedure = p.Attribute("registerProcedure")
+ cloud.Protocol = p.Attribute("protocol")
+
+ if _, err := rp.base.NextTag(p); err != nil {
+ return nil, err
+ }
+
+ if err := p.Expect(xpp.EndTag, "cloud"); err != nil {
+ return nil, err
+ }
+
+ return cloud, nil
+}
+
+func (rp *Parser) parseVersion(p *xpp.XMLPullParser) (ver string) {
+ name := strings.ToLower(p.Name)
+ if name == "rss" {
+ ver = p.Attribute("version")
+ } else if name == "rdf" {
+ ns := p.Attribute("xmlns")
+ if ns == "http://channel.netscape.com/rdf/simple/0.9/" ||
+ ns == "http://my.netscape.com/rdf/simple/0.9/" {
+ ver = "0.9"
+ } else if ns == "http://purl.org/rss/1.0/" {
+ ver = "1.0"
+ }
+ }
+ return
+}
diff --git a/vendor/github.com/mmcdole/gofeed/translator.go b/vendor/github.com/mmcdole/gofeed/translator.go
new file mode 100644
index 0000000..244ce75
--- /dev/null
+++ b/vendor/github.com/mmcdole/gofeed/translator.go
@@ -0,0 +1,686 @@
+package gofeed
+
+import (
+ "fmt"
+ "strings"
+ "time"
+
+ "github.com/mmcdole/gofeed/atom"
+ "github.com/mmcdole/gofeed/extensions"
+ "github.com/mmcdole/gofeed/internal/shared"
+ "github.com/mmcdole/gofeed/rss"
+)
+
+// Translator converts a particular feed (atom.Feed or rss.Feed)
+// into the generic Feed struct
+type Translator interface {
+ Translate(feed interface{}) (*Feed, error)
+}
+
+// DefaultRSSTranslator converts an rss.Feed struct
+// into the generic Feed struct.
+//
+// This default implementation defines a set of
+// mapping rules between rss.Feed -> Feed
+// for each of the fields in Feed.
+type DefaultRSSTranslator struct{}
+
+// Translate converts an RSS feed into the universal
+// feed type.
+func (t *DefaultRSSTranslator) Translate(feed interface{}) (*Feed, error) {
+ rss, found := feed.(*rss.Feed)
+ if !found {
+ return nil, fmt.Errorf("Feed did not match expected type of *rss.Feed")
+ }
+
+ result := &Feed{}
+ result.Title = t.translateFeedTitle(rss)
+ result.Description = t.translateFeedDescription(rss)
+ result.Link = t.translateFeedLink(rss)
+ result.FeedLink = t.translateFeedFeedLink(rss)
+ result.Updated = t.translateFeedUpdated(rss)
+ result.UpdatedParsed = t.translateFeedUpdatedParsed(rss)
+ result.Published = t.translateFeedPublished(rss)
+ result.PublishedParsed = t.translateFeedPublishedParsed(rss)
+ result.Author = t.translateFeedAuthor(rss)
+ result.Language = t.translateFeedLanguage(rss)
+ result.Image = t.translateFeedImage(rss)
+ result.Copyright = t.translateFeedCopyright(rss)
+ result.Generator = t.translateFeedGenerator(rss)
+ result.Categories = t.translateFeedCategories(rss)
+ result.Items = t.translateFeedItems(rss)
+ result.ITunesExt = rss.ITunesExt
+ result.DublinCoreExt = rss.DublinCoreExt
+ result.Extensions = rss.Extensions
+ result.FeedVersion = rss.Version
+ result.FeedType = "rss"
+ return result, nil
+}
+
+func (t *DefaultRSSTranslator) translateFeedItem(rssItem *rss.Item) (item *Item) {
+ item = &Item{}
+ item.Title = t.translateItemTitle(rssItem)
+ item.Description = t.translateItemDescription(rssItem)
+ item.Content = t.translateItemContent(rssItem)
+ item.Link = t.translateItemLink(rssItem)
+ item.Published = t.translateItemPublished(rssItem)
+ item.PublishedParsed = t.translateItemPublishedParsed(rssItem)
+ item.Author = t.translateItemAuthor(rssItem)
+ item.GUID = t.translateItemGUID(rssItem)
+ item.Image = t.translateItemImage(rssItem)
+ item.Categories = t.translateItemCategories(rssItem)
+ item.Enclosures = t.translateItemEnclosures(rssItem)
+ item.DublinCoreExt = rssItem.DublinCoreExt
+ item.ITunesExt = rssItem.ITunesExt
+ item.Extensions = rssItem.Extensions
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedTitle(rss *rss.Feed) (title string) {
+ if rss.Title != "" {
+ title = rss.Title
+ } else if rss.DublinCoreExt != nil && rss.DublinCoreExt.Title != nil {
+ title = t.firstEntry(rss.DublinCoreExt.Title)
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedDescription(rss *rss.Feed) (desc string) {
+ return rss.Description
+}
+
+func (t *DefaultRSSTranslator) translateFeedLink(rss *rss.Feed) (link string) {
+ if rss.Link != "" {
+ link = rss.Link
+ } else if rss.ITunesExt != nil && rss.ITunesExt.Subtitle != "" {
+ link = rss.ITunesExt.Subtitle
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedFeedLink(rss *rss.Feed) (link string) {
+ atomExtensions := t.extensionsForKeys([]string{"atom", "atom10", "atom03"}, rss.Extensions)
+ for _, ex := range atomExtensions {
+ if links, ok := ex["link"]; ok {
+ for _, l := range links {
+ if l.Attrs["Rel"] == "self" {
+ link = l.Value
+ }
+ }
+ }
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedUpdated(rss *rss.Feed) (updated string) {
+ if rss.LastBuildDate != "" {
+ updated = rss.LastBuildDate
+ } else if rss.DublinCoreExt != nil && rss.DublinCoreExt.Date != nil {
+ updated = t.firstEntry(rss.DublinCoreExt.Date)
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedUpdatedParsed(rss *rss.Feed) (updated *time.Time) {
+ if rss.LastBuildDateParsed != nil {
+ updated = rss.LastBuildDateParsed
+ } else if rss.DublinCoreExt != nil && rss.DublinCoreExt.Date != nil {
+ dateText := t.firstEntry(rss.DublinCoreExt.Date)
+ date, err := shared.ParseDate(dateText)
+ if err == nil {
+ updated = &date
+ }
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedPublished(rss *rss.Feed) (published string) {
+ return rss.PubDate
+}
+
+func (t *DefaultRSSTranslator) translateFeedPublishedParsed(rss *rss.Feed) (published *time.Time) {
+ return rss.PubDateParsed
+}
+
+func (t *DefaultRSSTranslator) translateFeedAuthor(rss *rss.Feed) (author *Person) {
+ if rss.ManagingEditor != "" {
+ name, address := shared.ParseNameAddress(rss.ManagingEditor)
+ author = &Person{}
+ author.Name = name
+ author.Email = address
+ } else if rss.WebMaster != "" {
+ name, address := shared.ParseNameAddress(rss.WebMaster)
+ author = &Person{}
+ author.Name = name
+ author.Email = address
+ } else if rss.DublinCoreExt != nil && rss.DublinCoreExt.Author != nil {
+ dcAuthor := t.firstEntry(rss.DublinCoreExt.Author)
+ name, address := shared.ParseNameAddress(dcAuthor)
+ author = &Person{}
+ author.Name = name
+ author.Email = address
+ } else if rss.DublinCoreExt != nil && rss.DublinCoreExt.Creator != nil {
+ dcCreator := t.firstEntry(rss.DublinCoreExt.Creator)
+ name, address := shared.ParseNameAddress(dcCreator)
+ author = &Person{}
+ author.Name = name
+ author.Email = address
+ } else if rss.ITunesExt != nil && rss.ITunesExt.Author != "" {
+ name, address := shared.ParseNameAddress(rss.ITunesExt.Author)
+ author = &Person{}
+ author.Name = name
+ author.Email = address
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedLanguage(rss *rss.Feed) (language string) {
+ if rss.Language != "" {
+ language = rss.Language
+ } else if rss.DublinCoreExt != nil && rss.DublinCoreExt.Language != nil {
+ language = t.firstEntry(rss.DublinCoreExt.Language)
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedImage(rss *rss.Feed) (image *Image) {
+ if rss.Image != nil {
+ image = &Image{}
+ image.Title = rss.Image.Title
+ image.URL = rss.Image.URL
+ } else if rss.ITunesExt != nil && rss.ITunesExt.Image != "" {
+ image = &Image{}
+ image.URL = rss.ITunesExt.Image
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedCopyright(rss *rss.Feed) (rights string) {
+ if rss.Copyright != "" {
+ rights = rss.Copyright
+ } else if rss.DublinCoreExt != nil && rss.DublinCoreExt.Rights != nil {
+ rights = t.firstEntry(rss.DublinCoreExt.Rights)
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedGenerator(rss *rss.Feed) (generator string) {
+ return rss.Generator
+}
+
+func (t *DefaultRSSTranslator) translateFeedCategories(rss *rss.Feed) (categories []string) {
+ cats := []string{}
+ if rss.Categories != nil {
+ for _, c := range rss.Categories {
+ cats = append(cats, c.Value)
+ }
+ }
+
+ if rss.ITunesExt != nil && rss.ITunesExt.Keywords != "" {
+ keywords := strings.Split(rss.ITunesExt.Keywords, ",")
+ for _, k := range keywords {
+ cats = append(cats, k)
+ }
+ }
+
+ if rss.ITunesExt != nil && rss.ITunesExt.Categories != nil {
+ for _, c := range rss.ITunesExt.Categories {
+ cats = append(cats, c.Text)
+ if c.Subcategory != nil {
+ cats = append(cats, c.Subcategory.Text)
+ }
+ }
+ }
+
+ if rss.DublinCoreExt != nil && rss.DublinCoreExt.Subject != nil {
+ for _, c := range rss.DublinCoreExt.Subject {
+ cats = append(cats, c)
+ }
+ }
+
+ if len(cats) > 0 {
+ categories = cats
+ }
+
+ return
+}
+
+func (t *DefaultRSSTranslator) translateFeedItems(rss *rss.Feed) (items []*Item) {
+ items = []*Item{}
+ for _, i := range rss.Items {
+ items = append(items, t.translateFeedItem(i))
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateItemTitle(rssItem *rss.Item) (title string) {
+ if rssItem.Title != "" {
+ title = rssItem.Title
+ } else if rssItem.DublinCoreExt != nil && rssItem.DublinCoreExt.Title != nil {
+ title = t.firstEntry(rssItem.DublinCoreExt.Title)
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateItemDescription(rssItem *rss.Item) (desc string) {
+ if rssItem.Description != "" {
+ desc = rssItem.Description
+ } else if rssItem.DublinCoreExt != nil && rssItem.DublinCoreExt.Description != nil {
+ desc = t.firstEntry(rssItem.DublinCoreExt.Description)
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateItemContent(rssItem *rss.Item) (content string) {
+ return rssItem.Content
+}
+
+func (t *DefaultRSSTranslator) translateItemLink(rssItem *rss.Item) (link string) {
+ return rssItem.Link
+}
+
+func (t *DefaultRSSTranslator) translateItemUpdated(rssItem *rss.Item) (updated string) {
+ if rssItem.DublinCoreExt != nil && rssItem.DublinCoreExt.Date != nil {
+ updated = t.firstEntry(rssItem.DublinCoreExt.Date)
+ }
+ return updated
+}
+
+func (t *DefaultRSSTranslator) translateItemUpdatedParsed(rssItem *rss.Item) (updated *time.Time) {
+ if rssItem.DublinCoreExt != nil && rssItem.DublinCoreExt.Date != nil {
+ updatedText := t.firstEntry(rssItem.DublinCoreExt.Date)
+ updatedDate, err := shared.ParseDate(updatedText)
+ if err == nil {
+ updated = &updatedDate
+ }
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateItemPublished(rssItem *rss.Item) (pubDate string) {
+ if rssItem.PubDate != "" {
+ return rssItem.PubDate
+ } else if rssItem.DublinCoreExt != nil && rssItem.DublinCoreExt.Date != nil {
+ return t.firstEntry(rssItem.DublinCoreExt.Date)
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateItemPublishedParsed(rssItem *rss.Item) (pubDate *time.Time) {
+ if rssItem.PubDateParsed != nil {
+ return rssItem.PubDateParsed
+ } else if rssItem.DublinCoreExt != nil && rssItem.DublinCoreExt.Date != nil {
+ pubDateText := t.firstEntry(rssItem.DublinCoreExt.Date)
+ pubDateParsed, err := shared.ParseDate(pubDateText)
+ if err == nil {
+ pubDate = &pubDateParsed
+ }
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateItemAuthor(rssItem *rss.Item) (author *Person) {
+ if rssItem.Author != "" {
+ name, address := shared.ParseNameAddress(rssItem.Author)
+ author = &Person{}
+ author.Name = name
+ author.Email = address
+ } else if rssItem.DublinCoreExt != nil && rssItem.DublinCoreExt.Author != nil {
+ dcAuthor := t.firstEntry(rssItem.DublinCoreExt.Author)
+ name, address := shared.ParseNameAddress(dcAuthor)
+ author = &Person{}
+ author.Name = name
+ author.Email = address
+ } else if rssItem.DublinCoreExt != nil && rssItem.DublinCoreExt.Creator != nil {
+ dcCreator := t.firstEntry(rssItem.DublinCoreExt.Creator)
+ name, address := shared.ParseNameAddress(dcCreator)
+ author = &Person{}
+ author.Name = name
+ author.Email = address
+ } else if rssItem.ITunesExt != nil && rssItem.ITunesExt.Author != "" {
+ name, address := shared.ParseNameAddress(rssItem.ITunesExt.Author)
+ author = &Person{}
+ author.Name = name
+ author.Email = address
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateItemGUID(rssItem *rss.Item) (guid string) {
+ if rssItem.GUID != nil {
+ guid = rssItem.GUID.Value
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateItemImage(rssItem *rss.Item) (image *Image) {
+ if rssItem.ITunesExt != nil && rssItem.ITunesExt.Image != "" {
+ image = &Image{}
+ image.URL = rssItem.ITunesExt.Image
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) translateItemCategories(rssItem *rss.Item) (categories []string) {
+ cats := []string{}
+ if rssItem.Categories != nil {
+ for _, c := range rssItem.Categories {
+ cats = append(cats, c.Value)
+ }
+ }
+
+ if rssItem.ITunesExt != nil && rssItem.ITunesExt.Keywords != "" {
+ keywords := strings.Split(rssItem.ITunesExt.Keywords, ",")
+ for _, k := range keywords {
+ cats = append(cats, k)
+ }
+ }
+
+ if rssItem.DublinCoreExt != nil && rssItem.DublinCoreExt.Subject != nil {
+ for _, c := range rssItem.DublinCoreExt.Subject {
+ cats = append(cats, c)
+ }
+ }
+
+ if len(cats) > 0 {
+ categories = cats
+ }
+
+ return
+}
+
+func (t *DefaultRSSTranslator) translateItemEnclosures(rssItem *rss.Item) (enclosures []*Enclosure) {
+ if rssItem.Enclosure != nil {
+ e := &Enclosure{}
+ e.URL = rssItem.Enclosure.URL
+ e.Type = rssItem.Enclosure.Type
+ e.Length = rssItem.Enclosure.Length
+ enclosures = []*Enclosure{e}
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) extensionsForKeys(keys []string, extensions ext.Extensions) (matches []map[string][]ext.Extension) {
+ matches = []map[string][]ext.Extension{}
+
+ if extensions == nil {
+ return
+ }
+
+ for _, key := range keys {
+ if match, ok := extensions[key]; ok {
+ matches = append(matches, match)
+ }
+ }
+ return
+}
+
+func (t *DefaultRSSTranslator) firstEntry(entries []string) (value string) {
+ if len(entries) == 0 {
+ return
+ }
+
+ return entries[0]
+}
+
+// DefaultAtomTranslator converts an atom.Feed struct
+// into the generic Feed struct.
+//
+// This default implementation defines a set of
+// mapping rules between atom.Feed -> Feed
+// for each of the fields in Feed.
+type DefaultAtomTranslator struct{}
+
+// Translate converts an Atom feed into the universal
+// feed type.
+func (t *DefaultAtomTranslator) Translate(feed interface{}) (*Feed, error) {
+ atom, found := feed.(*atom.Feed)
+ if !found {
+ return nil, fmt.Errorf("Feed did not match expected type of *atom.Feed")
+ }
+
+ result := &Feed{}
+ result.Title = t.translateFeedTitle(atom)
+ result.Description = t.translateFeedDescription(atom)
+ result.Link = t.translateFeedLink(atom)
+ result.FeedLink = t.translateFeedFeedLink(atom)
+ result.Updated = t.translateFeedUpdated(atom)
+ result.UpdatedParsed = t.translateFeedUpdatedParsed(atom)
+ result.Author = t.translateFeedAuthor(atom)
+ result.Language = t.translateFeedLanguage(atom)
+ result.Image = t.translateFeedImage(atom)
+ result.Copyright = t.translateFeedCopyright(atom)
+ result.Categories = t.translateFeedCategories(atom)
+ result.Generator = t.translateFeedGenerator(atom)
+ result.Items = t.translateFeedItems(atom)
+ result.Extensions = atom.Extensions
+ result.FeedVersion = atom.Version
+ result.FeedType = "atom"
+ return result, nil
+}
+
+func (t *DefaultAtomTranslator) translateFeedItem(entry *atom.Entry) (item *Item) {
+ item = &Item{}
+ item.Title = t.translateItemTitle(entry)
+ item.Description = t.translateItemDescription(entry)
+ item.Content = t.translateItemContent(entry)
+ item.Link = t.translateItemLink(entry)
+ item.Updated = t.translateItemUpdated(entry)
+ item.UpdatedParsed = t.translateItemUpdatedParsed(entry)
+ item.Published = t.translateItemPublished(entry)
+ item.PublishedParsed = t.translateItemPublishedParsed(entry)
+ item.Author = t.translateItemAuthor(entry)
+ item.GUID = t.translateItemGUID(entry)
+ item.Image = t.translateItemImage(entry)
+ item.Categories = t.translateItemCategories(entry)
+ item.Enclosures = t.translateItemEnclosures(entry)
+ item.Extensions = entry.Extensions
+ return
+}
+
+func (t *DefaultAtomTranslator) translateFeedTitle(atom *atom.Feed) (title string) {
+ return atom.Title
+}
+
+func (t *DefaultAtomTranslator) translateFeedDescription(atom *atom.Feed) (desc string) {
+ return atom.Subtitle
+}
+
+func (t *DefaultAtomTranslator) translateFeedLink(atom *atom.Feed) (link string) {
+ l := t.firstLinkWithType("alternate", atom.Links)
+ if l != nil {
+ link = l.Href
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateFeedFeedLink(atom *atom.Feed) (link string) {
+ feedLink := t.firstLinkWithType("self", atom.Links)
+ if feedLink != nil {
+ link = feedLink.Href
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateFeedUpdated(atom *atom.Feed) (updated string) {
+ return atom.Updated
+}
+
+func (t *DefaultAtomTranslator) translateFeedUpdatedParsed(atom *atom.Feed) (updated *time.Time) {
+ return atom.UpdatedParsed
+}
+
+func (t *DefaultAtomTranslator) translateFeedAuthor(atom *atom.Feed) (author *Person) {
+ a := t.firstPerson(atom.Authors)
+ if a != nil {
+ feedAuthor := Person{}
+ feedAuthor.Name = a.Name
+ feedAuthor.Email = a.Email
+ author = &feedAuthor
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateFeedLanguage(atom *atom.Feed) (language string) {
+ return atom.Language
+}
+
+func (t *DefaultAtomTranslator) translateFeedImage(atom *atom.Feed) (image *Image) {
+ if atom.Logo != "" {
+ feedImage := Image{}
+ feedImage.URL = atom.Logo
+ image = &feedImage
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateFeedCopyright(atom *atom.Feed) (rights string) {
+ return atom.Rights
+}
+
+func (t *DefaultAtomTranslator) translateFeedGenerator(atom *atom.Feed) (generator string) {
+ if atom.Generator != nil {
+ if atom.Generator.Value != "" {
+ generator += atom.Generator.Value
+ }
+ if atom.Generator.Version != "" {
+ generator += " v" + atom.Generator.Version
+ }
+ if atom.Generator.URI != "" {
+ generator += " " + atom.Generator.URI
+ }
+ generator = strings.TrimSpace(generator)
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateFeedCategories(atom *atom.Feed) (categories []string) {
+ if atom.Categories != nil {
+ categories = []string{}
+ for _, c := range atom.Categories {
+ categories = append(categories, c.Term)
+ }
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateFeedItems(atom *atom.Feed) (items []*Item) {
+ items = []*Item{}
+ for _, entry := range atom.Entries {
+ items = append(items, t.translateFeedItem(entry))
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateItemTitle(entry *atom.Entry) (title string) {
+ return entry.Title
+}
+
+func (t *DefaultAtomTranslator) translateItemDescription(entry *atom.Entry) (desc string) {
+ return entry.Summary
+}
+
+func (t *DefaultAtomTranslator) translateItemContent(entry *atom.Entry) (content string) {
+ if entry.Content != nil {
+ content = entry.Content.Value
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateItemLink(entry *atom.Entry) (link string) {
+ l := t.firstLinkWithType("alternate", entry.Links)
+ if l != nil {
+ link = l.Href
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateItemUpdated(entry *atom.Entry) (updated string) {
+ return entry.Updated
+}
+
+func (t *DefaultAtomTranslator) translateItemUpdatedParsed(entry *atom.Entry) (updated *time.Time) {
+ return entry.UpdatedParsed
+}
+
+func (t *DefaultAtomTranslator) translateItemPublished(entry *atom.Entry) (updated string) {
+ return entry.Published
+}
+
+func (t *DefaultAtomTranslator) translateItemPublishedParsed(entry *atom.Entry) (updated *time.Time) {
+ return entry.PublishedParsed
+}
+
+func (t *DefaultAtomTranslator) translateItemAuthor(entry *atom.Entry) (author *Person) {
+ a := t.firstPerson(entry.Authors)
+ if a != nil {
+ author = &Person{}
+ author.Name = a.Name
+ author.Email = a.Email
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateItemGUID(entry *atom.Entry) (guid string) {
+ return entry.ID
+}
+
+func (t *DefaultAtomTranslator) translateItemImage(entry *atom.Entry) (image *Image) {
+ return nil
+}
+
+func (t *DefaultAtomTranslator) translateItemCategories(entry *atom.Entry) (categories []string) {
+ if entry.Categories != nil {
+ categories = []string{}
+ for _, c := range entry.Categories {
+ categories = append(categories, c.Term)
+ }
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) translateItemEnclosures(entry *atom.Entry) (enclosures []*Enclosure) {
+ if entry.Links != nil {
+ enclosures = []*Enclosure{}
+ for _, e := range entry.Links {
+ if e.Rel == "enclosure" {
+ enclosure := &Enclosure{}
+ enclosure.URL = e.Href
+ enclosure.Length = e.Length
+ enclosure.Type = e.Type
+ enclosures = append(enclosures, enclosure)
+ }
+ }
+
+ if len(enclosures) == 0 {
+ enclosures = nil
+ }
+ }
+ return
+}
+
+func (t *DefaultAtomTranslator) firstLinkWithType(linkType string, links []*atom.Link) *atom.Link {
+ if links == nil {
+ return nil
+ }
+
+ for _, link := range links {
+ if link.Rel == linkType {
+ return link
+ }
+ }
+ return nil
+}
+
+func (t *DefaultAtomTranslator) firstPerson(persons []*atom.Person) (person *atom.Person) {
+ if len(persons) == 0 {
+ return
+ }
+
+ person = persons[0]
+ return
+}
diff --git a/vendor/github.com/mmcdole/goxpp/LICENSE b/vendor/github.com/mmcdole/goxpp/LICENSE
new file mode 100644
index 0000000..054bf56
--- /dev/null
+++ b/vendor/github.com/mmcdole/goxpp/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2016 mmcdole
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/vendor/github.com/mmcdole/goxpp/README.md b/vendor/github.com/mmcdole/goxpp/README.md
new file mode 100644
index 0000000..1f38b65
--- /dev/null
+++ b/vendor/github.com/mmcdole/goxpp/README.md
@@ -0,0 +1,8 @@
+# goxpp
+
+[Build Status](https://travis-ci.org/mmcdole/goxpp) [Coverage Status](https://coveralls.io/github/mmcdole/goxpp?branch=master) [License](http://doge.mit-license.org)
+
+The `goxpp` library is an XML pull parser loosely based on the [Java XMLPullParser](http://www.xmlpull.org/v1/download/unpacked/doc/quick_intro.html). It lets you parse arbitrary XML content with a pull-style API. You can think of `goxpp` as a lightweight wrapper around Go's XML `Decoder` that provides a set of functions which make parsing XML content easier than using the raw decoder directly.
+
+This project is licensed under the [MIT License](https://raw.githubusercontent.com/mmcdole/goxpp/master/LICENSE)
+
diff --git a/vendor/github.com/mmcdole/goxpp/xpp.go b/vendor/github.com/mmcdole/goxpp/xpp.go
new file mode 100644
index 0000000..a319468
--- /dev/null
+++ b/vendor/github.com/mmcdole/goxpp/xpp.go
@@ -0,0 +1,342 @@
+package xpp
+
+import (
+ "encoding/xml"
+ "errors"
+ "fmt"
+ "io"
+ "strings"
+)
+
+type XMLEventType int
+type CharsetReader func(charset string, input io.Reader) (io.Reader, error)
+
+const (
+ StartDocument XMLEventType = iota
+ EndDocument
+ StartTag
+ EndTag
+ Text
+ Comment
+ ProcessingInstruction
+ Directive
+ IgnorableWhitespace // TODO: ?
+ // TODO: CDSECT ?
+)
+
+type XMLPullParser struct {
+ // Document State
+ Spaces map[string]string
+ SpacesStack []map[string]string
+
+ // Token State
+ Depth int
+ Event XMLEventType
+ Attrs []xml.Attr
+ Name string
+ Space string
+ Text string
+
+ decoder *xml.Decoder
+ token interface{}
+}
+
+func NewXMLPullParser(r io.Reader, strict bool, cr CharsetReader) *XMLPullParser {
+ d := xml.NewDecoder(r)
+ d.Strict = strict
+ d.CharsetReader = cr
+ return &XMLPullParser{
+ decoder: d,
+ Event: StartDocument,
+ Depth: 0,
+ Spaces: map[string]string{},
+ }
+}
+
+func (p *XMLPullParser) NextTag() (event XMLEventType, err error) {
+ t, err := p.Next()
+ if err != nil {
+ return event, err
+ }
+
+ for t == Text && p.IsWhitespace() {
+ t, err = p.Next()
+ if err != nil {
+ return event, err
+ }
+ }
+
+ if t != StartTag && t != EndTag {
+ return event, fmt.Errorf("Expected StartTag or EndTag but got %s at offset: %d", p.EventName(t), p.decoder.InputOffset())
+ }
+
+ return t, nil
+}
+
+func (p *XMLPullParser) Next() (event XMLEventType, err error) {
+ for {
+ event, err = p.NextToken()
+ if err != nil {
+ return event, err
+ }
+
+ // Return immediately after encountering a StartTag
+ // EndTag, Text, EndDocument
+ if event == StartTag ||
+ event == EndTag ||
+ event == EndDocument ||
+ event == Text {
+ return event, nil
+ }
+
+ // Skip Comment/Directive and ProcessingInstruction
+ if event == Comment ||
+ event == Directive ||
+ event == ProcessingInstruction {
+ continue
+ }
+ }
+}
+
+func (p *XMLPullParser) NextToken() (event XMLEventType, err error) {
+ // Clear any state held for the previous token
+ p.resetTokenState()
+
+ token, err := p.decoder.Token()
+ if err != nil {
+ if err == io.EOF {
+ // XML decoder returns the EOF as an error
+ // but we want to return it as a valid
+ // EndDocument token instead
+ p.token = nil
+ p.Event = EndDocument
+ return p.Event, nil
+ }
+ return event, err
+ }
+
+ p.token = xml.CopyToken(token)
+ p.processToken(p.token)
+ p.Event = p.EventType(p.token)
+
+ return p.Event, nil
+}
+
+func (p *XMLPullParser) NextText() (string, error) {
+ if p.Event != StartTag {
+ return "", errors.New("Parser must be on StartTag to get NextText()")
+ }
+
+ t, err := p.Next()
+ if err != nil {
+ return "", err
+ }
+
+ if t != EndTag && t != Text {
+ return "", errors.New("Parser must be on EndTag or Text to read text")
+ }
+
+ var result string
+ for t == Text {
+ result = result + p.Text
+ t, err = p.Next()
+ if err != nil {
+ return "", err
+ }
+
+ if t != EndTag && t != Text {
+ errstr := fmt.Sprintf("Event Text must be immediately followed by EndTag or Text but got %s", p.EventName(t))
+ return "", errors.New(errstr)
+ }
+ }
+
+ return result, nil
+}
+
+func (p *XMLPullParser) Skip() error {
+ for {
+ tok, err := p.NextToken()
+ if err != nil {
+ return err
+ }
+ if tok == StartTag {
+ if err := p.Skip(); err != nil {
+ return err
+ }
+ } else if tok == EndTag {
+ return nil
+ }
+ }
+}
+
+func (p *XMLPullParser) Attribute(name string) string {
+ for _, attr := range p.Attrs {
+ if attr.Name.Local == name {
+ return attr.Value
+ }
+ }
+ return ""
+}
+
+func (p *XMLPullParser) Expect(event XMLEventType, name string) (err error) {
+ return p.ExpectAll(event, "*", name)
+}
+
+func (p *XMLPullParser) ExpectAll(event XMLEventType, space string, name string) (err error) {
+ if !(p.Event == event && (strings.ToLower(p.Space) == strings.ToLower(space) || space == "*") && (strings.ToLower(p.Name) == strings.ToLower(name) || name == "*")) {
+ err = fmt.Errorf("Expected Space:%s Name:%s Event:%s but got Space:%s Name:%s Event:%s at offset: %d", space, name, p.EventName(event), p.Space, p.Name, p.EventName(p.Event), p.decoder.InputOffset())
+ }
+ return
+}
+
+func (p *XMLPullParser) DecodeElement(v interface{}) error {
+ if p.Event != StartTag {
+ return errors.New("DecodeElement can only be called from a StartTag event")
+ }
+
+ //tok := &p.token
+
+ startToken := p.token.(xml.StartElement)
+
+ // Consumes all tokens until the matching end token.
+ err := p.decoder.DecodeElement(v, &startToken)
+ if err != nil {
+ return err
+ }
+
+ name := p.Name
+
+ // Need to set the "current" token name/event
+ // to the previous StartTag event's name
+ p.resetTokenState()
+ p.Event = EndTag
+ p.Depth--
+ p.Name = name
+ p.token = nil
+ return nil
+}
+
+func (p *XMLPullParser) IsWhitespace() bool {
+ return strings.TrimSpace(p.Text) == ""
+}
+
+func (p *XMLPullParser) EventName(e XMLEventType) (name string) {
+ switch e {
+ case StartTag:
+ name = "StartTag"
+ case EndTag:
+ name = "EndTag"
+ case StartDocument:
+ name = "StartDocument"
+ case EndDocument:
+ name = "EndDocument"
+ case ProcessingInstruction:
+ name = "ProcessingInstruction"
+ case Directive:
+ name = "Directive"
+ case Comment:
+ name = "Comment"
+ case Text:
+ name = "Text"
+ case IgnorableWhitespace:
+ name = "IgnorableWhitespace"
+ }
+ return
+}
+
+func (p *XMLPullParser) EventType(t xml.Token) (event XMLEventType) {
+ switch t.(type) {
+ case xml.StartElement:
+ event = StartTag
+ case xml.EndElement:
+ event = EndTag
+ case xml.CharData:
+ event = Text
+ case xml.Comment:
+ event = Comment
+ case xml.ProcInst:
+ event = ProcessingInstruction
+ case xml.Directive:
+ event = Directive
+ }
+ return
+}
+
+func (p *XMLPullParser) processToken(t xml.Token) {
+ switch tt := t.(type) {
+ case xml.StartElement:
+ p.processStartToken(tt)
+ case xml.EndElement:
+ p.processEndToken(tt)
+ case xml.CharData:
+ p.processCharDataToken(tt)
+ case xml.Comment:
+ p.processCommentToken(tt)
+ case xml.ProcInst:
+ p.processProcInstToken(tt)
+ case xml.Directive:
+ p.processDirectiveToken(tt)
+ }
+}
+
+func (p *XMLPullParser) processStartToken(t xml.StartElement) {
+ p.Depth++
+ p.Attrs = t.Attr
+ p.Name = t.Name.Local
+ p.Space = t.Name.Space
+ p.trackNamespaces(t)
+}
+
+func (p *XMLPullParser) processEndToken(t xml.EndElement) {
+ p.Depth--
+ p.SpacesStack = p.SpacesStack[:len(p.SpacesStack)-1]
+ if len(p.SpacesStack) == 0 {
+ p.Spaces = map[string]string{}
+ } else {
+ p.Spaces = p.SpacesStack[len(p.SpacesStack)-1]
+ }
+ p.Name = t.Name.Local
+}
+
+func (p *XMLPullParser) processCharDataToken(t xml.CharData) {
+ p.Text = string([]byte(t))
+}
+
+func (p *XMLPullParser) processCommentToken(t xml.Comment) {
+ p.Text = string([]byte(t))
+}
+
+func (p *XMLPullParser) processProcInstToken(t xml.ProcInst) {
+ p.Text = fmt.Sprintf("%s %s", t.Target, string(t.Inst))
+}
+
+func (p *XMLPullParser) processDirectiveToken(t xml.Directive) {
+ p.Text = string([]byte(t))
+}
+
+func (p *XMLPullParser) resetTokenState() {
+ p.Attrs = nil
+ p.Name = ""
+ p.Space = ""
+ p.Text = ""
+}
+
+func (p *XMLPullParser) trackNamespaces(t xml.StartElement) {
+ newSpace := map[string]string{}
+ for k, v := range p.Spaces {
+ newSpace[k] = v
+ }
+ for _, attr := range t.Attr {
+ if attr.Name.Space == "xmlns" {
+ space := strings.TrimSpace(attr.Value)
+ spacePrefix := strings.TrimSpace(strings.ToLower(attr.Name.Local))
+ newSpace[space] = spacePrefix
+ } else if attr.Name.Local == "xmlns" {
+ space := strings.TrimSpace(attr.Value)
+ newSpace[space] = ""
+ }
+ }
+ p.Spaces = newSpace
+ p.SpacesStack = append(p.SpacesStack, newSpace)
+}
diff --git a/vendor/vendor.json b/vendor/vendor.json
new file mode 100644
index 0000000..95b9b55
--- /dev/null
+++ b/vendor/vendor.json
@@ -0,0 +1,175 @@
+{
+ "comment": "",
+ "ignore": "test",
+ "package": [
+ {
+ "checksumSHA1": "U/wItGewd+iZXeuFJoUGChSlTn0=",
+ "path": "github.com/PuerkitoBio/goquery",
+ "revision": "8311f594d701949445e752a2c4794db4d4a7e204",
+ "revisionTime": "2018-10-03T00:21:05Z"
+ },
+ {
+ "checksumSHA1": "Q/2QpI7E35SsNYfaxLsWHFry9k4=",
+ "path": "github.com/andybalholm/cascadia",
+ "revision": "901648c87902174f774fac311d7f176f8647bdaa",
+ "revisionTime": "2018-02-20T18:43:36Z"
+ },
+ {
+ "checksumSHA1": "/MBntFRUTjxLNt1ciAqbuIzkcnc=",
+ "path": "github.com/golang-collections/go-datastructures/queue",
+ "revision": "59788d5eb2591d3497ffb8fafed2f16fe00e7775",
+ "revisionTime": "2015-02-11T16:07:25Z"
+ },
+ {
+ "checksumSHA1": "Kh/EnzmngEgl6DxIndgpflFLm00=",
+ "path": "github.com/google/uuid",
+ "revision": "0cd6bf5da1e1c83f8b45653022c74f71af0538a4",
+ "revisionTime": "2019-02-27T21:05:49Z"
+ },
+ {
+ "checksumSHA1": "B0dyaTc5x4eV+kJfAyr+kQogxcY=",
+ "path": "github.com/gorilla/feeds",
+ "revision": "762f7414cb4b615000daa195dfaf09b5aa503590",
+ "revisionTime": "2018-08-19T13:17:00Z"
+ },
+ {
+ "checksumSHA1": "HxNpMrVvflBjc7jse2UVd2u1Ga0=",
+ "path": "github.com/mmcdole/gofeed",
+ "revision": "aaa570041b9e1faf7433ab0e299d26ca2549023f",
+ "revisionTime": "2018-10-02T21:58:57Z"
+ },
+ {
+ "checksumSHA1": "Z5g81IePo5p+/qvAn5sAVCkXBgM=",
+ "path": "github.com/mmcdole/gofeed/atom",
+ "revision": "aaa570041b9e1faf7433ab0e299d26ca2549023f",
+ "revisionTime": "2018-10-02T21:58:57Z"
+ },
+ {
+ "checksumSHA1": "SsLhENxzlzx7h1+yJcbBgLT2lSM=",
+ "path": "github.com/mmcdole/gofeed/extensions",
+ "revision": "aaa570041b9e1faf7433ab0e299d26ca2549023f",
+ "revisionTime": "2018-10-02T21:58:57Z"
+ },
+ {
+ "checksumSHA1": "DYqdUxvChB42WDJ9TNLEbdhtvPg=",
+ "path": "github.com/mmcdole/gofeed/internal/shared",
+ "revision": "aaa570041b9e1faf7433ab0e299d26ca2549023f",
+ "revisionTime": "2018-10-02T21:58:57Z"
+ },
+ {
+ "checksumSHA1": "A4hb/GQtEk5ml8l9u2FYqSwhapE=",
+ "path": "github.com/mmcdole/gofeed/rss",
+ "revision": "aaa570041b9e1faf7433ab0e299d26ca2549023f",
+ "revisionTime": "2018-10-02T21:58:57Z"
+ },
+ {
+ "checksumSHA1": "vt+b8Q63wnZYbJuALvnxhlbAOa4=",
+ "path": "github.com/mmcdole/goxpp",
+ "revision": "e18b6e9b49411ec3116496e8e4e7af93015dfbfc",
+ "revisionTime": "2018-10-01T16:50:12Z"
+ },
+ {
+ "checksumSHA1": "barUU39reQ7LdgYLA323hQ/UGy4=",
+ "path": "golang.org/x/net/html/charset",
+ "revision": "ab34263943818b32f575efc978a3d24e80b04bd7",
+ "revisionTime": "2020-07-06T17:30:18Z"
+ },
+ {
+ "checksumSHA1": "tqqo7DEeFCclb58XbN44WwdpWww=",
+ "path": "golang.org/x/text/encoding",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "HgcUFTOQF5jOYtTIj5obR3GVN9A=",
+ "path": "golang.org/x/text/encoding/charmap",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "UYlVRSWAA5srH3iWvrJz++Zhpr0=",
+ "path": "golang.org/x/text/encoding/htmlindex",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "zeHyHebIZl1tGuwGllIhjfci+wI=",
+ "path": "golang.org/x/text/encoding/internal",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "46UIK1h/DTupMdRnLkijrEIwzv4=",
+ "path": "golang.org/x/text/encoding/internal/identifier",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "DhdZROnJq+cEcQ/sHY7GEq5wQ8U=",
+ "path": "golang.org/x/text/encoding/japanese",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "qHQ79q9peY8ZkCMC8kJAb52BAWg=",
+ "path": "golang.org/x/text/encoding/korean",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "55UdScb+EMOCPr7OW0hCwDsVxpg=",
+ "path": "golang.org/x/text/encoding/simplifiedchinese",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "9EZF1SHTpjVmaT9sARitvGKUXOY=",
+ "path": "golang.org/x/text/encoding/traditionalchinese",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "W00jrssFTBJPpLXYXTQCTqxpkww=",
+ "path": "golang.org/x/text/encoding/unicode",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "8ea1h1pimPfXc6cE5l3SQTe7SVo=",
+ "path": "golang.org/x/text/internal/language",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "GxBlFOqWoIsWCMswUHh6dUqM5no=",
+ "path": "golang.org/x/text/internal/language/compact",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "hyNCcTwMQnV6/MK8uUW9E5H0J0M=",
+ "path": "golang.org/x/text/internal/tag",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "Qk7dljcrEK1BJkAEZguxAbG9dSo=",
+ "path": "golang.org/x/text/internal/utf8internal",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "kgODOZdRLWKSppiHzrqOKdtrGHA=",
+ "path": "golang.org/x/text/language",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ },
+ {
+ "checksumSHA1": "IV4MN7KGBSocu/5NR3le3sxup4Y=",
+ "path": "golang.org/x/text/runes",
+ "revision": "23ae387dee1f90d29a23c0e87ee0b46038fbed0e",
+ "revisionTime": "2020-06-11T18:50:30Z"
+ }
+ ],
+ "rootPath": "local/rssmon3"
+}