My MSc consists of six taught modules, and I sat the exam for module #6, Optimization for Learning, Planning and Problem Solving, this morning. It seemed to go pretty well – nothing in there that I hadn’t prepared for – so with any luck there’ll be no resits, and that was my last exam. For this MSc, anyway.
I usually post up about each day as I’m doing a module, but I didn’t this last time. The module was pretty heavy on the coursework, involving a bigger-than-usual time investment, and balancing that with my day job and my dissertation project is tough going. To be honest, trying to split my focus over all these things and still retain some semblance of a home and social life was taxing, and it felt like it was maybe a bit too much. That’s a depressing feeling, but hey. With this last module down, there’s one less thing I need to split my time over.
The optimization module was actually very good, covering a pretty wide range of material in enough depth to be implementable. The lecturer, Dr Joshua Knowles, made all the course materials available at the site I linked to above, as well as details about further reading, self-test questions, background materials and the like, broken down by week. If you want to know what a CS module at Manchester is like, I don’t think you can do better than familiarising yourself with the background stuff on there and then trying to follow the course in sequence, completing the coursework as you go.
I might post up more about how I found the course sometime later. Right now, it’s time to get back on top of my project.
According to this blog, I started work on the background report around the tenth of October last year, and today it’s all done, ready to be submitted.
As you might expect, there was a lot of reading involved, and a great deal of writing, re-writing, editing, and all that other good stuff that comes with trying to put together a 25-page document with proper referencing. Since New Year, I’ve also started prepping for the final taught module I’ll be taking which starts in February.
There hasn’t been much time for blogging in amongst all that, so the posts are even more sparse than usual! The project submission deadline is September, so I hope that life will return to something like normal following that. Until then, I expect things will be a bit pressured!
One of the great things about doing an academic qualification through a major institution like the University of Manchester is the access you get to scientific literature.
A huge number of research papers are locked away behind paywalls. Sites like Google Scholar can show you what’s out there, but you’ll only be able to see abstracts for most of it. To get at the good stuff, you’ll be paying tens of pounds sterling per paper. That doesn’t sound like much, but a reasonably rigorous literature search means accessing lots of them. I’ve probably read a few dozen papers now that are related to my project, and many that turned out not to be – which would have been annoying if I’d paid for them individually. I expect there must be ways to pay for bulk access, but there are also many different sources you might need that access with.
It seems a shame this information needs to be locked away, but of course it’s additional revenue for some organisation – hopefully the money goes back into supporting research and researchers.
The breadth and depth of research going on out there on every conceivable topic is astonishing. Getting access to all that stuff is a definite plus.
I’ve been wrapping up the first set of coursework assignments for the project today with a quick check over the material before submission.
The next job now is the background report. This document will summarise what I learned during my literature search in the context of my project and needs to be less than twenty pages long (not counting paraphernalia like covers, tables, references and appendices). I’ve prepared a new git repository for the work, but this time I’ll be hosting a git server on my EC2 instance. Whilst having my git repository on Dropbox was convenient and gave me a backup, it wasn’t the easiest thing to clone when I needed to pull a copy down for some opportunistic work. The setup with gitosis was pretty straightforward, and we’ll see how it pans out.
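In case it’s useful, here’s roughly the shape of what gitosis manages for you under the hood – a bare repository that working copies clone from. The paths here are placeholders for illustration, not my actual server layout:

```shell
# A bare repository plays the 'server side' role; clients clone from it.
# On the EC2 box this would live behind SSH and gitosis would control access.
rm -rf /tmp/report.git /tmp/report
git init --bare /tmp/report.git

# A working copy; on the real server the URL would be ssh://<ec2-host>/report.git
git clone /tmp/report.git /tmp/report
```

The nice thing about the bare-repo-over-SSH model is that cloning from anywhere is just a one-liner, which was exactly the pain point with the Dropbox arrangement.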
After a week of beavering away with JSP, HTML and CSS, I reckon my project website is about ready.
I was in Manchester on Tuesday, meeting my project supervisor and one of the guys who runs the taught module associated with the project. There don’t seem to be any problems, and it helped to clarify some of the vagueness I referred to previously.
So, the website content needed to include a statement of the project aims and objectives, a summary of the progress to date, the project plan (significantly cut down from the previous detailed exposition) and a summary of the literature search so far – bringing together what I’d already done, about a week’s work. I also decided to take a middle road between the simplistic HTML-in-a-zip approach and an all-singing, all-dancing one. I’m not going to get any more marks for going nuts on this thing, so I just took the aspects that mitigate risk or save time – for example, using a custom tag library to template out the elements that would otherwise need to be duplicated, which saves time especially when they need to be changed. I also decided not to compromise on the HTML/CSS separation, again in the interests of making changes to stylistic aspects as simple as possible.
All three elements of the project to date save data in a text-based format: the summary is written in LaTeX; the plan saves an XML document; and the website of course is a structure made up of HTML, CSS and JSP files. This means that all three play nicely with a version control system, and I decided to give Git a whirl at the outset. In a nutshell, I’ve been making small changes, then storing those changes along with messages as part of a ‘commit’ process. These messages can be extracted, providing a kind of timeline of what I’ve been doing for the past few weeks – much better than anything I would have kept in my own notes. I can take those timestamped messages and push them into the website during the build process, then use a simple renderer to print them out on the site when certain links are clicked. It seemed like a good way to augment the ‘summary to date’ deliverable.
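The extraction step is simpler than it sounds – git will happily print a formatted log for you. Something like this sketch (the repository path, commit message and format string are illustrative, not my exact build step):

```shell
# Build a throwaway repository with one commit, then pull out a
# timestamped message list of the kind the site build can consume.
rm -rf /tmp/demo
git init -q /tmp/demo
git -C /tmp/demo -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "Set up project skeleton"

# One line per commit: date, then the message
git -C /tmp/demo log --pretty=format:'%ad  %s' --date=short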
I’ve also spent a few hours updating and tidying up this blog, since I’ve linked appropriate posts into the site as another way of tracking progress. My hosting provider also took the blog down over the weekend, and there was a nasty surprise with my original EC2 instance… maybe good for another post.
Yep, COMP61032 ‘Optimization for Learning, Planning and Problem Solving’ has appeared in my field of vision and it looks a bit hardcore. It’s part of the ‘Learning from Data’ theme – I guess optimisation is a natural partner to machine learning approaches, owing to the need to chew through a whole lot of information as quickly as possible.
Why is it tempting? Lots of algorithms and computational complexity going on – it’s one of those modules that’s shouting “Bet you can’t pass me”. More than that though, it’s modules with that computational theory slant that have shown me moments of catch-your-breath clarity in the way that messy practicality distils to elegant mathematical beauty. It’s a great sense of satisfaction when you persevere and get to see it.
So – Ontology engineering, or Optimisation? Hey, I warned you it was geeky.
One of the assessed deliverables for my MSc project is a project website, so I’ve been having a bit of a setup session this weekend.
The objectives set for the website are a little… what’s the word… vague? See what you think:
A multipage website summarizing the work so far.
That’s it, as far as I can tell. Exactly how will the delivered work be assessed? Your guess is probably about as good as mine. Having looked at the discussion forum for the module (the full-timers did this in the first half of the year – I’ve been told I set my own deadlines for the project stuff as I’m not a full-time student), it seems the marking scheme was quite severe, with many complaints about low marks and little evident explanation. I’ll make some enquiries before I start work on the content proper.
Back in April, I asked how the website deliverable should be ‘handed in’ and was told that a zip with some files in it would be fine.
I shan’t be hosting my site on getNetPortal though. As I spend most of my professional life working on the Java EE platform, Java is the obvious choice. Why not use a different language, for the experience? Whilst I’ve got the time to learn a bit about hosting a public-facing website, I’m not sure I’ll have the time to learn a new way of creating websites that I’ll be happy with – not to mention that the toolset and delivery pipeline varies from platform to platform. Playing about with Erlang or some such will have to wait for another day.
GetNetPortal do host Java web applications, but it’s a shared Tomcat environment with a bunch of limitations, as well as, apparently, risks to other people’s app availability if I deploy more than three times in a day. So where else can I go? Other specialised hosting companies are out there, but they’re not exactly cheap…
So I’ve provisioned myself a server on Amazon’s Elastic Compute Cloud (Amazon EC2). Amazon provide a bunch of images themselves, and one of them happens to be a Linux-based 64-bit Tomcat 7 server. Time between finding the image I wanted and having a working server available? About five minutes. No matter how you cut it, that’s pretty awesome. To be honest, the biggest challenge was choosing an image – there’s a huge number to choose from, and I tried a couple of others that weren’t as well set up before settling on the Amazon-provided one. The best thing: EC2 is pay-as-you-go, at dirt-cheap rates for low utilisation.
For those of you who haven’t seen EC2, here’s a couple of screenshots that might help explain what it’s all about. First up, let’s take a look at the application server I provisioned.
Checking my bill tonight, I can see an itemised account of exactly what I’ve been billed for. Being able to see this level of detail should let me stay in control of what I’m spending.
The rest of my time has been spent having a look around my new server. I set up Tomcat to host a placeholder app in the root context, and iptables to route traffic from the privileged ports 80 and 443 to the ports Tomcat is listening on (8080 and 8443), avoiding the need to install a dedicated webserver or run Tomcat with root privileges. I also set up some self-signed SSL certificates – I’ll need those so that I can bring up apps that require logon; without SSL, those usernames and passwords would be floating around the internetz in the clear, negating the point of their existence. Finally, I scripted up the whole setup process in case I need to do it all again.
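For the record, the port redirection and certificate generation are roughly this shape. These are illustrative commands rather than a copy of my setup script – interface details, keystore location and certificate parameters will vary (and the iptables lines need root):

```shell
# Redirect the privileged ports to the ports Tomcat actually listens on,
# so Tomcat itself never needs root.
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-port 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443

# Generate a self-signed certificate for Tomcat's SSL connector.
# 'tomcat' is the conventional alias; the keystore path is an assumption.
keytool -genkeypair -alias tomcat -keyalg RSA -validity 365 \
        -keystore ~/.keystore
```

Self-signed certs mean browser warnings, of course, but for a project site where I’m the main user of the logon-protected bits, that’s a trade-off I can live with.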
Now, I can tick off the project tasks around setting up hosting nice and early. Quite a productive weekend!
It’s been a bit quiet on crossedstreams.com for the past month or so. Between lots of great stuff going on at work keeping me very busy, some Stag Do related shenanigans and working on my project, there hasn’t been much time for blogging.
In order to complete my MSc, I need to complete a project and produce a dissertation. In addition, there is a prerequisite module that sets up the project, requiring the submission of a project statement, a project plan, a project website and a project background report. It’s these aspects I’ve been working on.
Additional complexity is introduced by my choice to prepare my own project involving what I do for a day job. This means certain additional hoops need to be jumped through, which happen to take a fair bit of time and effort, but with any luck those hurdles are nearly cleared now and the actual work can kick off properly.
Between prep for my MSc project, getting married, being snowed under at work, starting my next MSc module and being full of cold, there hasn’t been much time for blogging…
So today was day 4 of the Text Mining module. As a friend put it, “Text Mining? What – like using grep?”
Text Mining is defined as finding previously unknown information in unstructured data. Unknown – as in never explicitly written down.
So by ‘text’, we mean un- or partially-structured data, like Word documents or this blog page. There’s some structure here – headings, subheadings, lists and the like – but it’s not ‘structured’ in the sense that database tables are, with fields and columns and a type system.
Tools like grep can match words (more generally, expressions describing relatively simple patterns of characters called regular expressions), so whilst they’re fairly easy to use (so long as you don’t try to push them too far), they are limited in the complexity of what they can do.
For example, you can’t easily use grammatical ideas, like identifying documents that are about fish (‘a fish’) but not fishing (‘I fish’). You can’t search for documents related to a concept, and recognising generic names or technical terms is out. You can’t build structures like indices to help with searches, which means that over reasonably large collections of documents, grep is too slow to be very useful.
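A quick illustration of that first point with grep itself – it will count character patterns all day, but parts of speech are invisible to it (the sample file is made up for the demo):

```shell
# Three toy 'documents': fish as a noun, fish as a verb, and 'fishing'.
printf 'I caught a fish today\nI fish every weekend\nfishing is relaxing\n' \
    > /tmp/docs.txt

grep -c 'fish'  /tmp/docs.txt   # 3 - substring match also hits 'fishing'
grep -cw 'fish' /tmp/docs.txt   # 2 - whole words only, but noun vs verb? No idea.
```

Word boundaries get you part of the way, but nothing in grep’s vocabulary can say “only the documents where fish is the topic, not the activity”.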
I’m still getting my head around how it hangs together, but text mining seems like a set of gloriously messy, pragmatic and seemingly pretty successful ways to let computers listen in on the languages that humans have evolved.
I took the Logic and Applications exam last Friday. I think I’m ready now to talk about the ordeal…
It wasn’t so bad really, I guess. I made a bad call as to which questions to answer (it was one of those ‘answer three of four’ kind of things) and ran out of time. One of the questions I initially chose had what was, for me, a brick wall towards the midway point, and in a two-hour exam, spending 20–25 minutes heading down a dead end isn’t the best idea!
I guess the two frustrations I felt with this exam were, firstly, that the course covered so much material so quickly, yet each of the topics turned out to be a bit of a rabbit hole when I got to thinking about it during revision – the more I thought, the more questions I found!
On top of that, one of the key aspects of a course like this is transformation of formulae into alternative forms which have properties we want – usually, more efficient solving algorithms. These transformations are rather like the algebraic manipulation of mathematical formulae we did at school – progressing in unit steps, painstakingly copying out each new form as you go. That consumes a lot of time, especially when the formulae don’t give out easily, but it doesn’t really seem to prove much about the student’s skills – the pages-of-transforms kind of work was all hammered pretty hard in the coursework, after all. Then again, maybe I just screwed something up early doors and that led to the extensive transform.
The course was new this year anyway, so maybe it takes a little time for the exams to settle in terms of difficulty. Or I’m just a dumbass. Anyway, it’s too late to worry about all that now. Hopefully, I passed – that’s the main thing, right?