In my last post, I started setting out some of the work we did to create GapVis, an online interface for reading and visualizing GAP texts. In this post, I’ll go a bit more into the technical details of the application, which uses the Backbone.js framework.
A lot of the process of building a web application like GapVis (at least the way I do it) is about iteratively arriving at a solid architecture. For example, I started off without storing application state in any single place, but I soon found myself in a tangled web of cross-referenced function calls in which too many parts of the application had to be aware of each other. Eventually, I arrived at the “global state” pattern, which lets the different pieces stay nicely independent of each other: everything listens for events on a single State model.
At the same time, I realized that in many cases coordinating the different components would be easier if “parent” components were responsible for their children, so I added some structure to make this simpler. As I went along, I noted the choices I was making, to help me follow a consistent pattern as I added new components:
Basic architecture:

- Models are responsible for getting book data from the API
- A singleton State model is responsible for UI state data
- Views are responsible for:
    - initialize:
        - instantiating/fetching their models if necessary
        - instantiating sub-views
        - listening for state changes
        - listening for model changes
    - render:
        - adjusting the layout of their container boxes
        - creating their content
    - events:
        - listening for UI events, updating state
    - UI methods:
        - updating the UI on state change
        - updating the UI on model change
- Routers are responsible for:
    - setting state depending on the route
    - setting the route depending on state

Process of opening a view:

- The URL router or a UI event sets state.topview to the view class
- State fires topview:change
- AppView receives the event, closes other views, and calls view.open()
- The view clears its previous content if necessary
- The view either renders immediately, or fetches data and then renders
This is actually taken straight from the code comment I used to keep track of it. While this kind of documentation usually exists to keep multiple programmers consistent, I found it helpful even working alone, if only as a way of forcing myself to solidify my architectural choices.
I hope that wasn’t too much technical detail (and if you want more, let me know – I’m happy to answer questions!). The last post will talk more about some of the interface and design choices we made, and how we think this kind of interface enhances and deepens the experience of reading GAP texts. And if you haven’t done so yet, please check out the GapVis application and let us know what you think!
Just on the technical stuff:
How would you separate out, say, a dynamic server layer (e.g. using REST and your favorite server-side language and framework) that allows you to proxy texts from Perseus via your server? Doing that entirely on the client side, you’d run into the browser’s cross-origin (same-origin policy) restrictions. Also, I can see applications for this where you’d go not just from texts to places but from places to texts. For example: find all Latin/Greek texts where place A is mentioned in relation to place B. Surely that’s gonna need some type of back end!
I’m also curious as to why you’re using an antiquated build system like Ant. If you don’t like Maven there’s always Gradle, or perhaps language-specific systems like Rake for Rails apps (I’m not sure about pure-JS solutions). I haven’t used Ant for some years now, and every time I have to touch an old system that uses it I’m reminded just how horrific it really is. (I’m a part-time Classical History PhD student who works as a programmer, but I’m not working in the ‘digital classics’ field at all – straight history (Livy).)
Glad you like it! A few responses:
* This is very much a cop-out, but the great thing about fully separating the interface from the data API is that I can pass on the blame for geoparsing issues to my colleagues :). We’re obviously still working out the kinks in correctly identifying and geolocating placenames in the results – you can read more here: https://googleancientplaces.wordpress.com/2011/10/11/how-we-tweaked-the-geoparsing/
* Right now the API is entirely static, but the intent is to move to a dynamic API in the near future – the static files were essentially just my stub data. For example, I’d love to offer a full-text search box that would allow you to jump to different pages by keyword, and you’re right, that requires a dynamic back-end. In the end, we’ll probably go for the best of both worlds by using a dynamic back-end with aggressive caching on the server.
* As for “why Ant?”, the short answer is, “that’s the tool I know best.” We don’t have a particularly complex build, and most of the heavy lifting is done by separate utilities like the YUI Compressor. Ant is perfectly serviceable for concatenating files, replacing tokens in script tags, running .jar utilities, etc., and it offers some easy set-up options for configuring different build targets. It’s also pretty flexible, with some coaxing – for example, there isn’t much Git support, but it wasn’t too hard to cobble together an Ant target that could switch branches, build the production code, and deploy to GitHub Pages. While I’m sure there are better tools available now, my basic rule is usually to switch tools only when there’s a compelling reason – otherwise, given the speed at which new tools appear, you end up having to learn an entirely new set of skills for each project. The Backbone learning curve seemed like enough in this case :).
Thanks for your reply. I can definitely understand choosing a build technology you already know, though. I have strong technical objections to Ant as a tool, which I won’t bother you about.
Looking at that post about the geo-tagging that you linked to (nice way to deflect the defects! ;-)), I wonder if some of the issues with people/places (I found it tagging ‘Syria’ as being in the Ionian sea off Greece!) couldn’t be somewhat ameliorated by using the Latin version of the text rather than the English? Although the ‘Syria’ example above was in Gibbon, not Tac. Hist.
Hi. Just to join in the “passing the buck” game… Some of the geoparser’s point locations fall, sadly, into the sea, and that’s often because the data is derived from the Barrington Atlas which doesn’t have lat/longs but quite big grid squares. Very often the best location we can get is the centre of the relevant square, which may be some distance from the correct place.
Using Latin versions of the text would be an interesting experiment, but I suspect our part-of-speech tagger would struggle. 🙂 Seriously though, you’re quite right that there are all sorts of things that could be done (and I expect POS taggers and language models for Latin and ancient Greek do exist), but there’s only so much time, alas. We’re certainly not claiming the Geoparser is 100% accurate, but it’s quicker than a human doing the marking up by hand.