Why I bought a System76 Gazelle Pro laptop

My laptop is a little underpowered these days, and I’ve been having trouble with up-to-date support for the AMD Radeon graphics hardware it packs, so I’ve been thinking about upgrading for a few months. I wanted a machine designed for Linux, rather than buying a Windows machine and installing my distribution du jour on it. There are a couple of reasons for this. First, it seems to be getting harder to be sure that a machine designed for Windows will work well with a Linux distro, thanks to features like NVIDIA Optimus and UEFI secure boot. Second, I object to paying for an operating system I have no intention of using – I’d rather my money went to the projects and supporters of the open source communities that provide the operating system I choose to use.

The only viable options I found for a well-specified laptop designed for Linux were System76, ZaReason and Dell. Others provide Linux laptops, but mostly as cheap or refurbished options.

I have a couple of specific requirements besides good Linux support. I want a 15.6-inch 1080p panel: my eyesight is pretty good, and I value screen real estate because I use software like photo editing suites and development environments with big, complicated user interfaces. Having run short of memory on a couple of projects recently, I want at least 8GB of memory, and a decent processor. I’d like a fast hard disk or an SSD. I also want to avoid NVIDIA and AMD graphics hardware and stick with Intel graphics, as I don’t do anything that needs epic graphics power and I’d rather have hardware with a good reputation for long-term Linux support.

ZaReason, a US-based company, offers the Verix 530 which comes close but packs NVIDIA graphics hardware and needs both the memory and hard drive boosting to meet my spec, bumping up the price. Dell only offers one Linux laptop which is a bit pricey in comparison to the others and doesn’t have many customisation options. In only offering one machine and whacking a “Dell Recommends Windows” banner on the pages for their Linux machine, Dell’s not building my confidence that they really know what they’re doing with Linux.

System76 won my business with their Gazelle Pro. It comes close out of the box and I can customise the couple of other options I need without breaking the bank. The important options I chose are:

  • 15.6″ 1080p Full High Definition LED Backlit Display with Glossy Surface (1920 x 1080)
  • Intel HD Graphics 4000
  • 3rd Generation Intel Core i7-3630QM Processor (2.40GHz 6MB L3 Cache – 4 Cores plus Hyperthreading)
  • 8 GB Dual Channel DDR3 SDRAM at 1600MHz – 2 X 4GB
  • 500 GB 7200 RPM SATA II HDD
  • International UK Keyboard Layout – Including Pound, Euro, and Alt GR keys

It’s a shame they’re based in the US, as it adds shipping time and cost. I also wasn’t sure exactly what happens about paying UK taxes on the import. I put the order in last week, and the machine arrived today. Next up: unboxing and first impressions!


A disaster, minimised!

I’ve not been blogging these last few months, what with all my spare time going into trying to do some proper computer science and then writing my dissertation. Last night, I had a catastrophe – I noticed something in my results that should be impossible and traced it back to a subtle bug that compromised all my results to date! Pretty nasty at this stage in the project…

The effect was subtle and I didn’t think it would alter my conclusions. That said, to ignore it and continue wouldn’t be right. The alternative of explaining about the bug and its effects in my dissertation is not something I wanted to have to do either.

After a few minutes of sitting with my head in my hands, I decided to fix it and start again. After all, it’s just compute time and elbow grease – it’s not like I’d just thrown away a month’s time on the LHC or anything! It turns out my decision to script everything paid off: a few tens of hours of compute time reproduced all my data, and a couple of hours with the charting software got me back to where I was. The choice of LaTeX turned out to be even more of a winner, as I could rebuild my document with the new figures, and any layout modifications required, almost trivially.

I was right – the conclusions do not change. In fact, they’re now more striking, and there are no oddities left that I can’t explain. Tips of the day for those doing work like this:

  • script everything you can – just in case you need to redo stuff
  • use LaTeX – because you can swap out every figure for a new version easily

There are plenty of other reasons for applying these two tips, but these are two I hadn’t thought of before yesterday.


Scripting Java with JavaScript

Java programs run on the Java Virtual Machine, a kind of virtual computer that hides many of the differences between the physical machines it runs on. Folks have been writing implementations of other languages for this virtual machine for a while now – besides JVM-specific languages like Scala and Groovy, there are also ports of existing languages, such as JRuby (Ruby), Jython (Python) and Rhino (JavaScript).

Conveniently, the Java 6 specification (released way back in September 2006) requires official scripting support in the javax.script package, and a slightly stripped-down build of Mozilla Rhino, a JavaScript implementation, ships with the JVM.
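Getting hold of that engine takes only a few lines. Here’s a minimal sketch (the class name is illustrative; the "JavaScript" lookup name is the standard one and resolves to the bundled Rhino build on Java 6):

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class HelloScripting {
    public static void main(String[] args) throws Exception {
        // Look up the JavaScript engine shipped with the JVM by name.
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        // Evaluate an expression; the result comes back as a plain Java object.
        Object result = engine.eval("6 * 7");
        System.out.println(result.getClass().getName() + ": " + result);
    }
}
```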

I’ve been meaning to take a look at this for a while now, and I decided to use these facilities to solve a problem I was having in my MSc. project.

My project consists of runnable experiments that produce some kind of results over sets of data. I want to have fully set up experiments ready to run so that I can repeat or extend the experiment very easily without having to refer to notes or other documentation, which involves programs that accept configuration information and wire up components.

The Java code to do this kind of thing tends to be very verbose – lots of parsing, type-checking and an inability to declare simple data structures straight into code. It’s tedious to write and then hard to read afterwards. Using JavaScript to describe my experiment setup looked like a good solution.

Example: creating a data structure that provides two named date parameters in Java, as concisely as I can:

package com.crossedstreams.experiment;

import java.text.SimpleDateFormat;
import java.util.HashMap;
import java.util.Map;

public class RunExperiment {
  public static void main(String[] args) throws Exception {
    SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    Map<String, Object> config = new HashMap<String, Object>();

    config.put("start", format.parse("2012-02-01 00:00:00"));
    config.put("end", format.parse("2012-02-08 00:00:00"));

    // do stuff with this config object...
  }
}

That’s a lot of code just to name a couple of dates! The amount of code involved hides the important stuff – the dates. Now, achieving the same with JavaScript…

var config = {
  start: new Date("February 1, 2012 00:00:00"),
  end: new Date("February 8, 2012 00:00:00")
}

// do stuff with this config
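Bridging the config back into Java stays short too. A minimal sketch (class name and the inlined script are illustrative; in practice the script would be read from a file). Note that JavaScript months are zero-based:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import java.util.Date;

public class LoadConfig {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        // Inlined for brevity; in practice: engine.eval(new FileReader("config.js")).
        // JavaScript months are zero-based, so month 1 is February.
        engine.eval("var config = { start: new Date(2012, 1, 1), end: new Date(2012, 1, 8) };");
        // JS Dates are engine-native objects, so ask the script for epoch
        // milliseconds rather than wrestling with the wrapper type.
        Number startMillis = (Number) engine.eval("config.start.getTime()");
        Date start = new Date(startMillis.longValue());
        System.out.println(start);
    }
}
```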

When there are many parameters and components to deal with, it gets tough to stay on top of. Some of what I’m doing involves defining functions that implement filters and generate new views over data elements, and JavaScript helps again here, letting me define my implementations inline as part of the configuration:

filter: new com.crossedstreams.Filter({
  accept: function(element) {
    return element == "awesome";
  }
})
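The snippet above leans on Rhino’s ability to adapt a JavaScript object literal to a Java interface via a constructor call. The standard javax.script route to the same end is Invocable.getInterface; here’s a sketch using a hypothetical stand-in for the project’s Filter interface (which isn’t shown in the post):

```java
import javax.script.Invocable;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class FilterDemo {
    // Stand-in for the real com.crossedstreams.Filter interface.
    public interface Filter {
        boolean accept(Object element);
    }

    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        // Define the accept function in script...
        engine.eval("function accept(element) { return String(element) == 'keep'; }");
        // ...then ask the engine for an object implementing the Java interface.
        Filter filter = ((Invocable) engine).getInterface(Filter.class);
        System.out.println(filter.accept("keep"));
        System.out.println(filter.accept("drop"));
    }
}
```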

This approach isn’t without problems – for example, there’s some ugliness when it comes to using Java collections in JavaScript and JavaScript objects as collections. That’s to be expected, I guess – they’re different languages that work in different ways, so there’s going to be some ugliness at some of the interfaces, maybe even interpretation questions that don’t have one right answer.
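A minimal sketch of the kind of mismatch I mean (names are illustrative): a Java collection handed to the script works fine via its Java methods, but a JavaScript array literal comes back as an engine-specific object rather than a java.util.List.

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import java.util.Arrays;
import java.util.List;

public class CollectionFriction {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
        // Java methods can be called on a Java List from JavaScript...
        List<String> names = Arrays.asList("alice", "bob");
        engine.put("names", names);
        System.out.println(engine.eval("names.size()"));
        // ...but a JavaScript array literal is not a java.util.List,
        // so it needs converting by hand on the Java side.
        Object jsArray = engine.eval("['a', 'b', 'c']");
        System.out.println(jsArray instanceof List);
    }
}
```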

Nothing I’ve come up against so far can’t be fairly easily overcome when you figure it out. I think that using Java to build components to strict interfaces and then configuring and wiring them up using a scripting language like JavaScript without leaving the JVM can be a pretty good solution.


My last exam… hopefully

My MSc. consists of six taught modules, and I sat the exam for the sixth, Optimization for Learning, Planning and Problem Solving, this morning. It seemed to go pretty well – nothing in there that I hadn’t prepared for – so with any luck there’ll be no resits and that was my last exam. At least for this MSc, anyway.

I usually post up about each day as I’m doing a module, but I didn’t this last time. The module was pretty heavy on the coursework, involving a bigger than usual time investment, plus trying to balance that with my day job and my dissertation project is tough going. To be honest, trying to split my focus over these things and still retain some semblance of a home and social life was taxing, and it felt a little like it was maybe a bit too much. That’s a depressing feeling, but hey. With this last module down, there’s one less thing I need to split my time over.

The optimization module was actually very good, covering a pretty wide range of material in enough depth to be implementable. The lecturer, Dr Joshua Knowles, made all the course materials available on the module site, along with details of further reading, self-test questions, background materials and the like, broken down by week. If you want to know what a CS module at Manchester is like, I don’t think you can do better than familiarising yourself with the background material there and then following the course in sequence, completing the coursework as you go.

I might post up more about how I found the course sometime later. Right now, it’s time to get back on top of my project.

Done with the Background Report

According to this blog, I started work on the background report around the tenth of October last year and today it’s all done, ready to be submitted.

As you might expect, there was a lot of reading involved, and a great deal of writing, re-writing, editing, and all that other good stuff that comes with trying to put together a 25-page document with proper referencing. Since New Year, I’ve also started prepping for the final taught module I’ll be taking which starts in February.

There hasn’t been much time for blogging in amongst all that, so posts are even more sparse than usual! The project submission deadline is in September, so I hope life will return to something like normal after that. Until then, I expect things will be a bit pressured!

What scientific literature?

One of the great things about doing an academic qualification through a major institution like the University of Manchester is the access you get to scientific literature.

A huge number of research papers are locked away behind paywalls. Sites like Google Scholar can show you what’s out there, but you’ll only be able to see abstracts for most of it. To get at the good stuff, you’ll be paying tens of pounds sterling per paper. That doesn’t sound like much, but a reasonably rigorous literature search means accessing a lot of them. I’ve probably read a few dozen papers related to my project by now, and many that turned out not to be – which would have been annoying if I’d paid for each individually. I expect there are ways to pay for bulk access, but there are also many different publishers you might need that access with.

It seems like a shame this information needs to be locked away but of course it’s additional revenue for some organisation – hopefully the money goes back into supporting research and researchers.

The breadth and depth of research going on out there on every conceivable topic is astonishing. Getting access to all that stuff is a definite plus.

First three coursework items submitted

I’ve been wrapping up the first set of coursework assignments for the project today with a quick check over the material before submission.

The next job now is the background report. This document will summarise what I learned during my literature search in the context of my project, and needs to be less than twenty pages long (not counting paraphernalia like covers, tables, references and appendices). I’ve prepared a new git repository for the work, but this time I’ll be hosting a git server on my EC2 instance. Whilst having my git repository on Dropbox was convenient and gave me a backup, it wasn’t the easiest thing to clone when I needed to pull a copy down for some opportunistic work. The setup was pretty straightforward with gitosis, and we’ll see how it pans out.

Best get cracking then!

Well, the website is up…

After a week of beavering away with JSP, HTML and CSS, I reckon my project website is about ready.

I was in Manchester on Tuesday, meeting my project supervisor and one of the guys who runs the taught module associated with the project. There don’t seem to be any problems, and it helped to clarify some of the vagueness I referred to previously.

So, the website content needed to include a statement of the project aims and objectives, a summary of the progress to date, the project plan (significantly cut down from the previous detailed exposition) and a summary of the literature search so far – bringing together about a week’s worth of work I’d already done. I also decided to take a middle road between the simplistic HTML-in-a-zip approach and an all-singing, all-dancing one. I’m not going to get any more marks for going nuts on this thing, so I just took the aspects that mitigate risks or save time – for example, using a custom tag library to template out the elements that would otherwise need to be duplicated, saving time especially when they need to be changed. I also decided not to compromise on HTML/CSS separation, again in the interests of making changes to stylistic aspects as simple as possible.

All three elements of the project to date save data in a text-based format: the summary is written in LaTeX; the plan is saved as an XML document; and the website, of course, is a structure made up of HTML, CSS and JSP files. This means all three play nicely with a version control system, and I decided to give Git a whirl at the outset. In a nutshell, I’ve been making small changes, then storing those changes along with messages as part of a ‘commit’ process. These messages can be extracted, providing a kind of timeline of what I’ve been doing for the past few weeks – much better than anything I would have kept in my own notes. I can take those timestamped messages, push them into the website during the build process, then use a simple renderer to print them out on the site when certain links are clicked. It seemed like a good way to augment the ‘summary to date’ deliverable.
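Extracting that timeline is a one-liner; a sketch (the format string is just one of several that would do the job):

```shell
# Dump each commit as "date  message", ready to feed into the site build.
git log --date=short --pretty=format:'%ad %s'
```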

I’ve also spent a few hours updating and tidying up this blog, since I’ve linked appropriate posts into the site as another way of tracking progress. On top of that, my hosting provider took the blog down over the weekend, and there was a nasty surprise with my original EC2 instance… maybe good for another post.

A Very Geeky Dilemma

A new module has appeared on the University of Manchester CS horizon, and it’s tempting me away from wrapping up the taught course with my previous front-runner, ‘Ontology Engineering for the Semantic Web’.

Yep, COMP61032 ‘Optimization for Learning, Planning and Problem Solving‘ has appeared in my field of vision and it looks a bit hardcore. It’s part of the ‘Learning from Data’ theme – I guess optimisation is a natural partner to machine learning approaches, owing to the need to chew up a whole lot of information as quickly as possible.

Why is it tempting? Lots of algorithms and computational complexity going on – it’s one of those modules that’s shouting “Bet you can’t pass me”. More than that though, it’s modules with that computational theory slant that have shown me moments of catch-your-breath clarity in the way that messy practicality distils to elegant mathematical beauty. It’s a great sense of satisfaction when you persevere and get to see it.

So – Ontology engineering, or Optimisation? Hey, I warned you it was geeky.


Setting up my Project Website

One of the assessed deliverables for my MSc project is a project website, so I’ve been having a bit of a setup session this weekend.

The objectives set for the website are a little… what’s the word… vague? See what you think:

A multipage website summarizing the work so far.
– Objectives
– Deliverables
– Plan
– Literature

That’s it, as far as I can tell. Exactly how will the delivered work be assessed? Your guess is probably about as good as mine. Looking at the discussion forum for the module (the full-timers did this in the first half of the year – I’ve been told I set my own deadlines for the project work, as I’m not a full-time student), it seems the marking scheme was quite severe, with many complaints about low marks and little evident explanation, so I’ll make some enquiries before I start work on the content proper.

Back in April, I asked how the website deliverable should be ‘handed in’ and was told that a zip with some files in it would be fine.

Screw that.

I mean, seriously – the world has moved on. To be even vaguely interesting, I’m thinking about reusing relevant content from this blog, and some of the tooling I’m using, like GanttProject, saves XML data that’s crying out for some transformation and JavaScript magic. I have my own domain name, and there’s an opportunity here to learn some stuff about infrastructure (and I am doing this MSc. to learn stuff in the first place), so I’ve been setting up a server. Checking back on the forums, some of the other students went the same route and there’s no evidence of it harming their chances. I think hosting the project website as a subdomain of crossedstreams.com makes sense – I already own the domain name, and subdomains are a simple matter of extra DNS records, which are dead easy to set up with my provider, getNetPortal.

I shan’t be hosting my site on getNetPortal, though. As I spend most of my professional life working on the Java EE platform, Java is the obvious choice. Why not use a different language for the experience? Whilst I’ve got the time to learn a bit about hosting a public-facing website, I’m not sure I’ll have the time to learn a new way of creating websites that I’ll be happy with – not to mention that the toolset and delivery pipeline vary from platform to platform. Playing about with Erlang or some such will have to wait for another day.

GetNetPortal do host Java web applications, but it’s a shared Tomcat environment with a bunch of limitations, as well as apparent risks to other people’s app availability if I deploy more than three times in a day. So where else can I go? Other specialised hosting companies are out there, but they’re not exactly cheap…

So I’ve provisioned myself a server on Amazon’s Elastic Compute Cloud (Amazon EC2). Amazon provide a bunch of images themselves and one of them happens to be a Linux-based 64bit Tomcat 7 server. Time between me finding the image I wanted and having a working server available? About five minutes. No matter how you cut it, that’s pretty awesome. To be honest, the biggest challenge was choosing an image – there’s a huge number to choose from and I tried a couple of other images that weren’t as well set up before settling on the Amazon-provided one. The best thing – EC2 is pay-as-you-go, at dirt cheap rates for low utilisation.

For those of you who haven’t seen EC2, here’s a couple of screenshots that might help explain what it’s all about. First up, let’s take a look at the application server I provisioned.

AWS Management Console with my instances

Checking my bill tonight, I can see an itemised account of exactly what I’ve been billed for. Being able to see this level of detail should let me stay in control of what I’m spending.

Amazon Web Services - Billing

The rest of my time has been spent having a look around my new server and getting it set up:

  • Tomcat, hosting a placeholder app in the root context
  • iptables, routing traffic from the privileged ports 80 and 443 to the ports Tomcat listens on (8080 and 8443), which avoids installing a dedicated webserver or running Tomcat with root privileges
  • self-signed SSL certificates, which I’ll need to bring up apps requiring logon – without SSL, those usernames and passwords would be floating around the internet in the clear, negating the point of their existence
  • finally, scripting up the setup process in case I need to set this stuff up again
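For the record, the port redirect described above boils down to a couple of nat-table rules; a sketch (run as root, and note that locally generated traffic would need matching OUTPUT-chain rules too):

```shell
# Redirect incoming HTTP/HTTPS to Tomcat's unprivileged ports.
iptables -t nat -A PREROUTING -p tcp --dport 80  -j REDIRECT --to-port 8080
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
```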

Now, I can tick off the project tasks around setting up hosting nice and early. Quite a productive weekend!