The Alien Isolation motion tracker on PlayStation 4

I have to admit I haven’t played much on the new generation of consoles, but recently I did get the chance to play on a PlayStation 4. I had heard that the PS4 pad had a built-in speaker but hadn’t really given it any thought; the Wii remote also had a speaker built into it and I hadn’t ever really seen it used for anything useful. The PS4 game I was fortunate enough to play was the excellent Alien: Isolation. It is a great game and really captures the atmosphere of the first Alien film, although it looks and feels very much like a 360/PS3 game, and I was struggling to spot anything new and exciting being delivered by the PS4 experience.

I had in fact forgotten that I was playing the game on a PS4 until, walking down a spooky corridor, I heard a crackle from the controller I was holding. It was the crackle of an ancient piece of radio equipment coming to life, and within it I could hear familiar beeps that I recognised from the Alien franchise: I was holding a motion tracker! The PS4 pad also has a light on it, and as the motion tracker appeared on the TV screen the pad light changed to green to match its onscreen colour. It started to beep and vibrate in time with the beeps. The information from the tracker was still on the TV screen, but I felt the pad really added to the experience; it is a game filled with suspense and the tracker in your hand makes it feel much more immediate. I think it is the only time I’ve ever seen a speaker on a pad do anything cool!

Some dude made his own physical version of the tracker. Got to love this:

 

 

Posted in Computer Games

Preposts, freemium, addictiveness and education

Sometimes over the course of a few days I read a couple of posts, have a few interesting meetings over a pot of tea, watch some good television, play some good games and think to myself: ‘the messages that all these things are telling me are related! There is a narrative here and I must blog about it.’ I wait until Friday, which is, of course, the best day to write rambly blog posts, and in the mean time I play a few video games to think things over. Friday comes, I boot up Microsoft Word, put fingers to keys, and my mind goes blank. How were all these things linked again? Where do I start? Sometimes I need to write a post just to work out what my post should be about, something of a pre-post:

Today I read this blog post: Big Data, Social Ecology and the Surveillance of Management by the people, in which Mark describes one of the most distasteful things about big data: its top-down-ness. The idea is of it becoming a ‘science’ of the future based on surveillance, with results only available to the elite, resulting in ‘decisions based on a particular elite interpretation’. It is a great post and reminded me of the latest episode of South Park I watched yesterday, which dealt with the topic of addiction. In particular it focused on gamification and the freemium model (‘mium’ being Latin for ‘not really’). The gist of the episode was that the model preys on a small number of users; it only needs to exploit a few people’s addictive behaviour to make a profit, and it is made worse by the fact these games have access to all this data about the user. I worry that the elite decisions that Mark talks about are not just particular elite interpretations but are made in a similar fashion to the design of a slot machine. I did enjoy the fake advert for beer that aired after the ad break:

The show was making the point that other industries sucker people in with their vices, know damn well what they are doing, but justify it with a ‘well, we told them to do it responsibly’. I also had some thoughts that perhaps the elites making their interpretations have their own addictions to feed, but perhaps that is for another post. Before the episode of South Park I had seen an infomercial for a for-profit university:

The infomercial introduces gamification, micro-payments, data-harvesting bots and many of the same techniques South Park was knocking freemium games for. What really struck me was that these techniques weren’t even the butt of the joke; instead the clip went all sci-fi and focused on a data-harvesting bot becoming sentient. The theme that education is starting to prey on addictive behaviour, using a mixture of social pressure and the data it holds about us, isn’t even the joke anymore.

I’m still not sure where I am going with this, but since I started writing this post I have received the following information in three emails. I think the themes for my post should be addiction, data, education and dangerous personalization:

laceproject tweets

boss


 

 

 

Posted in Education, General Chatter

Finding symbols when you don’t know the name


You know, that fish thing.

Everyone hates it when you want to put a character into a Word document but you don’t know the name of the character, just what it looks like. “How do you do one of those things that looks like a cross between an ‘a’ and a fish?” you ask your colleague. He has no idea what you are on about and you end up trying to draw it in the air with your finger.

I bet there are loads of technological solutions to this that I haven’t come across yet, but a colleague, fed up with my finger wagging, did point out how to do it in a Google Doc.

 

1. Open Google Doc

2. Go to Insert->Special Character

3. Draw the symbol in the ‘draw symbol here’ box

4. IT TELLS YOU WHAT YOU WANT. MIND BLOWN

Video of the steps

 

Posted in General Chatter

Starting to explore Wikipedia: Part 1. Query woes.

I’ve started to wonder just how much we can find out about a subject from Wikipedia. I’ve been wondering if I can ask serious, big questions of the data set and get serious, big answers out. I thought I’d start by exploring an area of Wikipedia that is reasonably well maintained; I had a hunch that the video game community would keep their hobby and interest up to date, so I started there. While I keep saying Wikipedia, this data is actually taken from the DBpedia endpoint, which is a mirror of Wikipedia that structures its data in a way that I can query. There are some things I have noticed. The data is not mirrored exactly: for example, the Wikipedia page for the game Hawken clearly states that the game engine is Unreal Engine 3, while DBpedia says it uses the Unreal Engine but does not tell me which one. I don’t know why yet. Also, the data set is frozen in time; this snapshot dates from mid 2014. There is a live version of DBpedia too, but it seems to break for me, or at least for my SPARQL R package.

I decided to start with one very easy question: what different game engines do games use? I started with a very simple query:
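Something along these lines (a rough sketch rather than my exact query; dbo:VideoGame and dbo:gameEngine are the standard DBpedia ontology terms I would expect here):

PREFIX dbo: <http://dbpedia.org/ontology/>

SELECT ?game ?engine
WHERE {
  # every article typed as a video game that has a game engine attribute
  ?game a dbo:VideoGame .
  ?game dbo:gameEngine ?engine .
}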

Basically this pulls out all video games that have a game engine, and it made me think. The way data is structured in Wikipedia is not always consistent, which makes it hard to write the perfect query. For example, User A creates a page for their new game studio and uses a certain vocabulary to describe the city that their studio is based in. User B creates a similar page for a different game studio but uses different vocab. When I write a query to grab all the cities that have game studios based in them, I have to know what vocab they both used. As articles get more popular these sorts of problems get ironed out as people standardise the vocab used. The other problem is that people can have different ideas of what things are; it’s no good telling me off because XNA is a framework and not a game engine, because Wikipedia is reporting it as a game engine. In this regard the process might be a better way of examining and reflecting on how your hobby is represented in Wikipedia than an actual answer to the question. The other thing this doesn’t do is pull out the names of the games or the engines; I could pull them out using something like this:
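Again a sketch rather than the exact query, using rdfs:label to turn the URIs into readable names:

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?gameName ?engineName
WHERE {
  ?game a dbo:VideoGame ;
        dbo:gameEngine ?engine .
  ?game rdfs:label ?gameName .
  ?engine rdfs:label ?engineName .
  # keep just the English labels
  FILTER (lang(?gameName) = "en" && lang(?engineName) = "en")
}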

But I started to notice all sorts of funny business in my results. If a game was described as using the Unreal Engine, it would pull back the names of all of the engines, that is Unreal Engine 1, 2, 2.5 and 4, but for other engines, such as Id Tech, it would just pull back the one correct name.

If you are interested, I went with the first query and a bit of regex to get the results, then counted them using plyr in R (a sketch of the counting step is below the table). Here are the top engines with more than 20 games built on them:

Game Engine Games Built On
Unreal Engine 313
Havok 132
Unity 114
RenderWare 84
Source 67
Gamebryo 60
Z-machine 49
LithTech 43
CryEngine 33
Adobe Flash 31
Id Tech 3 29
Torque 28
Sierra’s Creative Interpreter 27
Ren’Py 26
Adventure Game Studio 24
PhysX 24
PopCap Games 24
Telltale Tool 24
GoldSrc 23
SCUMM 22
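
The counting step was roughly this; it is a sketch rather than my exact script (the regex just strips the DBpedia URI down to a readable name, and the column names depend on the query variables):

library(SPARQL)
library(plyr)

endpoint <- "http://dbpedia.org/sparql"
query <- "
  PREFIX dbo: <http://dbpedia.org/ontology/>
  SELECT ?game ?engine
  WHERE { ?game a dbo:VideoGame ; dbo:gameEngine ?engine . }"

res <- SPARQL(endpoint, query)$results

# results come back as full DBpedia URIs; strip them down to the resource name
res$engine <- gsub("^<http://dbpedia.org/resource/(.*)>$", "\\1", res$engine)

# count how many games each engine has, then sort by that count
engine_counts <- count(res, vars = "engine")
engine_counts <- engine_counts[order(-engine_counts$freq), ]
head(engine_counts, 20)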

There is another thing fishy about this data. It is hard to believe that Id Tech is not on the list. It turns out that Id Tech IS there, but when a game lists Id Tech as its engine it has the specific Id Tech version as the attribute (hence Id Tech 3 appearing on its own), whereas Unreal games mostly have the more general Unreal Engine attribute.

I revisited my original SPARQL query and added the collection of release dates. This time I get fewer results, because I have asked the database to only return answers when it knows a year that the game was published; if there is no year, there is no result. Some games have several release dates, so I told the database to give me an arbitrary year. Looking back this was a bad idea, because now it looks like remakes use the original engine (or originals use the remake engine). I don’t think I can find a method that will pull back the correct year for every release when it comes to titles that were remade with different engines. Another thing to remember when writing my queries. This is what I went with:
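Or at least something very like it; SAMPLE is one way of asking for ‘any one’ of the release dates per game, which is the behaviour described above:

PREFIX dbo: <http://dbpedia.org/ontology/>

SELECT ?game ?engine (SAMPLE(?date) AS ?released)
WHERE {
  ?game a dbo:VideoGame ;
        dbo:gameEngine ?engine ;
        dbo:releaseDate ?date .
}
GROUP BY ?game ?engine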

I’m finding it hard to do what I want with the results when they come back as a data frame in R. This could be because I find tools like ggplot2 hard as nails, or because I’m not familiar with how I should be structuring my results. Perhaps a problem for part 2. Anyway, I counted the results per year and saved them as a CSV, which I’m sure I will need for future reference:
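Roughly like this, assuming res now holds the results of the dated query above and that the release dates come back as strings starting with the year (the SPARQL package sometimes converts typed literals, so this may need tweaking):

# pull the year out of each release date and count games per engine per year
res$year <- as.numeric(substr(as.character(res$released), 1, 4))
per_year <- count(res, vars = c("engine", "year"))
write.csv(per_year, "engines_per_year.csv", row.names = FALSE)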

And just to see if it looked right, I checked the popularity of the Unreal Engine per year:

Unreal Engine popularity per year, according to Wikipedia

Those who know their games might think that the numbers look low for such an engine. I think they do look low, and it is more to think about when trying to ask Wikipedia for answers: these are only the games whose articles state they use Unreal and have a release date in my dataset. Because of this I thought I should look at trends rather than actual numbers. The trend shows growth in the popularity of the Unreal engine up until 2014, and most of this makes sense to me:

  • Unreal Engine usage has grown year on year
  • The dataset was frozen in the middle of 2014, so I guess many entries for 2014 games aren’t as mature as those for older games, or the entry doesn’t exist yet
  • Games appearing in future years are pages created about games that haven’t been released yet.

Still, I was curious as to why we saw some games in 2018; that seems a long way off for a developer to be releasing information about the game engine they intend to use for a future game. Intrigued, I looked up the 2018 game only to find out that it was an old game, released a few years ago, that Wikipedia had an incorrect release date for (at the time the data set was frozen). So… Wikipedia can be wrong.

Time to carry on playing…

Posted in Data Analytics

The Forest

Early access games can be a hit-or-miss affair; you can sometimes get a great game early in its life, and sometimes you get a game that looks like it’s going to be great but for some reason just doesn’t get there. I enjoy it when these early access games have a vibrant community based around consistent game updates that keep giving the game new life; developers listen to feedback from the audience to feed new developments and features and everybody wins. Minecraft, Project Zomboid and Prison Architect spring to mind. On the other hand, the game might hit snags during its development, the team might quit, or the game might not turn out like you had hoped; fans of the game Towns were upset when lack of sales forced the developer to quit (you can read the forum post here). An even worse scenario: they implement a pay-to-win scheme, ugh. Because of this I am cautious when it comes to early access games. Still, a while ago I took a punt on one called ‘The Forest’ and I’m really glad I did.

The Forest is a survival game where the player finds themselves waking up in the wreck of a plane crash in the middle of an island. You have to survive by building shelter, making fires, cooking, eating animals, crafting weapons and so on. As you change the landscape of the forest by cutting down trees and the such, the local inhabitants take an interest in what you are up to. These inhabitants, called Mutants by the online community, react differently to you depending on how you interact with them and how you go about living on the island. Building big bases will cause them to send large numbers out to your base at night, and perhaps even send the dreaded tank-like ‘spider mutant’ to knock your walls down. Personally, I like to build lots of small bases and switch between them in an effort to stay undetected, building the odd trap on their patrol routes to spice things up.

The menu screen has a big timer counting down to the next release of the game – always a good sign that there are constant updates. There is also a community of people playing and discussing the game at http://www.reddit.com/r/TheForest/

I’m really enjoying it, but as always with alpha games, you should check what the general consensus is before dropping your hard-earned cash. Here are some interesting things about the game I’ve found on the web for anybody thinking of getting it:

The natives on this island are not your friends! (Source: Physical Cores)

 

 

Posted in Computer Games

The birth place of every pro wrestler (according to wikipedia)

A few days ago I mapped out the death places of the Monarchs of England. I wanted to try the same technique on a bigger dataset, and keeping on trend with some other stuff I’ve done with Reddit, I decided to map the birthplace of every wrestler in Wikipedia.

This is a *rough* guide. It maps each wrestler’s birthplace to a random point somewhere in the city that Wikipedia says they were born in. If Wikipedia doesn’t think that the place is a city then the data is missing. The data is from a snapshot taken a few months ago. Also, Wikipedia has been known to be wrong.

Click a dot to see who was born there, and you can drag the map around and such. There are some other bits of info I should add and some bits I should remove, but for now this is it.

 

There were a few problems/differences:

1) Google Fusion Tables doesn’t like it if you give it two lat+long values that are the same; I used this idea to get around that problem.

2) I changed my SPARQL query to only include cities

code:
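Something along these lines; this is a sketch using the SPARQL R package rather than my exact script, and dbo:Wrestler, dbo:City and georss:point are my best guesses at the DBpedia terms involved:

library(SPARQL)

endpoint <- "http://dbpedia.org/sparql"
query <- "
PREFIX dbo:    <http://dbpedia.org/ontology/>
PREFIX rdfs:   <http://www.w3.org/2000/01/rdf-schema#>
PREFIX georss: <http://www.georss.org/georss/>

SELECT ?name ?cityName ?point
WHERE {
  ?wrestler a dbo:Wrestler ;
            rdfs:label ?name ;
            dbo:birthPlace ?city .
  # only keep birthplaces typed as a city, with georss coordinates
  ?city a dbo:City ;
        rdfs:label ?cityName ;
        georss:point ?point .
  FILTER (lang(?name) = 'en' && lang(?cityName) = 'en')
}"

wrestlers <- SPARQL(endpoint, query)$results

# duplicate lat/long pairs get nudged slightly before uploading,
# since Fusion Tables only shows one dot per identical coordinate
write.csv(wrestlers, "wrestler_birthplaces.csv", row.names = FALSE)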

 

 

Posted in Data Analytics

Zombie Neighbours

I feel torn when it comes to production companies working with up and coming talent making a splash on social media sites such as YouTube. It feels to me like there are all these really intelligent talented people doing interesting things, then these big YouTube networks come along, sign them up and suck out all their creativity. You are never quite sure what is going on behind the scenes, but it doesn’t feel right that most of the most viewed ‘YouTubers’ are signed with social media agencies such as Maker Studios, a subsidiary of the Walt Disney Company. Once signed the social media agency gives them cookie cutter templates for both videos and promotional material, helps them with production, gives them a few deadlines, promotes their work and then takes a cut of the profits. While I do think that production companies should be working with up and coming talent, I guess I’d just like to see them help the talent grow rather than try to own it.

An interesting collaboration that popped up in my feed today is between Louna Maroun (Looplady11), one of those ‘signed to a social media agency’ YouTubers, and FremantleMedia Australia, the producers of Neighbours. The pitch is this: Louna has created a few five-minute Neighbours shorts in which Ramsay Street is taken over by zombies, and current cast members of the show battle it out with past members who have risen from their graves. She has access to the sets and a few of the stars, both past and present.

I think the whole idea of giving up-and-coming talent access to the same sets, actors and audiences that the big boys like FremantleMedia have is great, and is exactly what distributors should be doing! I don’t know where Louna’s social media agency ‘Boom Video’ sits in this arrangement, but it would be interesting to find out. It’s a win-win situation, with Fremantle getting access to Louna’s fanbase, getting media coverage and, most importantly, helping homegrown talent.

If you are interested here is episode 1:

Posted in Television

Where the English monarchs died

(according to Wikipedia)

This was a quick and dirty experiment to see how easy it is to auto generate an interactive map from Wikipedia/DBpedia. There are some caveats and things I still need to iron out.

These are people that Wikipedia has described as a Monarch of England; they must have a place of death listed in Wikipedia, and that place of death must have a longitude and latitude in the http://www.georss.org/georss/ vocabulary.

There are loads more things I would like to do to the map, but I was pleased at how quickly you can map things from Wikipedia. I could perhaps add how they died. There also seem to be some encoding problems, and the description boxes seem to cut off.

Remember this is completely automated from Wikipedia data. Data might not be completely correct; worrying about that is for part two as I just need the process in place for another project.

Click the dots for details.

The process was quite easy but there are some steps I could do with removing:

1) Write a SPARQL query that looks like this (see the sketch after this list):

2) Grab a CSV based on the SPARQL results and use Open Refine to do some housekeeping

3) Upload to Google Fusion Tables
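The query was along these lines; the English monarchs category and the georss:point property are my best guesses at the DBpedia terms involved, so treat this as a sketch:

PREFIX dbo:    <http://dbpedia.org/ontology/>
PREFIX dct:    <http://purl.org/dc/terms/>
PREFIX rdfs:   <http://www.w3.org/2000/01/rdf-schema#>
PREFIX georss: <http://www.georss.org/georss/>

SELECT ?name ?placeName ?point
WHERE {
  # anyone Wikipedia files under the English monarchs category
  ?monarch dct:subject <http://dbpedia.org/resource/Category:English_monarchs> ;
           rdfs:label ?name ;
           dbo:deathPlace ?place .
  # the death place needs a label and georss coordinates
  ?place rdfs:label ?placeName ;
         georss:point ?point .
  FILTER (lang(?name) = "en" && lang(?placeName) = "en")
}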

Here is the script I wrote in R to generate the CSV:
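Or at least the shape of it; this is a sketch rather than a line-for-line copy, and the CSV file name is just an example:

library(SPARQL)

endpoint <- "http://dbpedia.org/sparql"
query <- "..."   # the SPARQL query from step 1, pasted in as one string

monarchs <- SPARQL(endpoint, query)$results

# georss:point comes back as a single "lat long" string (sometimes quoted);
# strip any quotes and split it into two numeric columns for Fusion Tables
monarchs$point <- gsub('"', '', monarchs$point)
coords <- strsplit(monarchs$point, " ")
monarchs$lat  <- as.numeric(sapply(coords, "[", 1))
monarchs$long <- as.numeric(sapply(coords, "[", 2))

write.csv(monarchs, "monarch_deaths.csv", row.names = FALSE)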

 

Posted in Data Analytics

Getting to grips with unit testing Cordova applications

I have really enjoyed using Cordova to build Android apps because I can knock things up quickly and easily. I do have a problem though: sometimes I knock something up quickly and then decide that I want to take it further, and I wish that I had thought harder at the start of the project about things such as how I am going to structure and test my code.

One of the problems I have with my latest project, Trivia Quizzes, is that it is basically bits of code I have pulled out of other projects while getting to grips with Cordova. I quite like Trivia Quizzes and want to go on and expand it a little; I’m thinking it would be a good base project for learning some new Cordova techniques, such as extending it to include Google Play Services scoreboards and achievements. Before I expand it, I decided to go back and restructure the project properly.

The restructuring was not an easy task. I would move things around in the project only for them to break. I would remove code I thought was redundant only to notice quite quickly that it wasn’t. Since I am writing my application in JavaScript, I would forget that some things that work on a desktop don’t work so well on a mobile phone, and I quite often reintroduced bugs into the code; I think the programming gods call this a regression. After a bit of searching I found that the way to fight these is to add little tests to your code to check that units of code still work despite you fiddling with stuff. These tests, believe it or not, are called unit tests.

Unit Testing in Cordova
I have heard of unit testing before; in fact a while ago I read ‘The Art of Unit Testing’ by Roy Osherove. I just hadn’t really implemented many tests in my code. My experiences with Cordova, especially my reorganisation of the Trivia Quizzes project, have taught me a lesson. I’m also hoping it will help with my debugging, as I have to admit that debugging in Cordova is not going well for me. From what I have been reading, when debugging on a mobile device or simulator it is difficult to synchronize breakpoints or retrieve stack traces. Since Cordova is basically HTML/CSS/JavaScript it can be debugged in a desktop web browser, but I have found that things such as JavaScript performance and phone API availability are difficult to emulate in the browser. There are a few projects that attempt to get around that, projects such as emulate and gapdebug, but it is hard to know what to go with.

I’ve had a poke around different frameworks for unit testing and debugging in JavaScript and have come up with a way forward. I am going to create a series of unit tests that I can run both from within the application and when debugging on the desktop. I’m not sure how I am going to write tests for phone-specific activities, like accessing parts of the phone API, but I am going to use unit testing as a way of evaluating the debugging tools.

Creating the testing infrastructure

As far as unit testing in JavaScript goes, I like the sound of QUnit, as it appears to be regularly updated and, as part of the hugely popular jQuery family, has a large user base. QUnit also seems quite simple to set up; I created a new folder in my project with an HTML page that includes QUnit’s CSS/JS files and two divs with specific IDs. I also included the JavaScript file that I wanted to start creating tests for (functions.js) and an empty file I was going to plonk my tests in (tests.js):
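The page looks roughly like this; the file paths are just examples, so point the script tags at wherever your QUnit download and your own files actually live:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Trivia Quizzes tests</title>
  <link rel="stylesheet" href="qunit.css">
</head>
<body>
  <!-- QUnit writes its results into these two divs -->
  <div id="qunit"></div>
  <div id="qunit-fixture"></div>

  <script src="qunit.js"></script>
  <script src="../js/functions.js"></script> <!-- the code under test -->
  <script src="tests.js"></script>           <!-- the tests themselves -->
</body>
</html>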

Heading to that page now gives you a rundown of your tests:


With the framework in place, all you need to do is write the tests; the QUnit documentation has some great examples. The first thing that went through my head when writing the unit tests was ‘What exactly is a unit?’. While it sounds like a silly question, I found it a great place to start as it makes you think about how your code is structured. This is particularly useful for somebody like me who likes to knock up ideas with bits of code from previous projects I’ve been working on, projects where I may have structured my code slightly differently. My first test simply checks that a string is returned from one of my functions:
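Something like this, where getQuestionText is a hypothetical stand-in for whichever of your own functions you want to check first:

// tests.js
QUnit.test("getQuestionText returns a string", function (assert) {
  var question = getQuestionText(0); // first question in the quiz
  assert.equal(typeof question, "string", "question text should come back as a string");
});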

The key thing I’m taking from this at present isn’t that the unit tests check whether my code works, but that they make me think about how my code is structured. I was hoping to be writing more tests by this point, but the exercise has made me think about restructuring my code again. I’m going to give it a go, and I’m hoping that it will be easier this time now that I have QUnit to help me.

 

Posted in Uncategorized

New LAK dataset

I’ve been informed by Davide Taibi that the LAK dataset has been updated. This update includes some paper text that I reported on previously, but also lots more data. As described by Davide:

This version includes papers from:
- EDM conferences (2008-2014)
- LAK conferences (2011-2014, 2014 only abstract since we are waiting for ACM agreement)
- Journal of Learning Analytics (2014)
- Journal of Educational Data Mining (2009 – 2014)
- Lak data challenge (2013-2014)
- Special Issue of JETS on Learning Analytics (2012)

In total we have 697 papers, 1214 distinct authors and 365 institutions represented in the Dataset.

Moreover, we have added interlinks with Semantic Web Dog Food, DBLP and DBpedia. The number of interlinks will be improved furthermore.

Sounds great!

You can download the RDF and NT dumps of the dataset here. If you want the data in R format, you are best off downloading the RDF and converting it yourself, because the R format hosted on crunch uses an old LAK dump. It is really easy to convert to R using a script I checked in to the LACE project GitHub. I made a video of the process; it was recorded with the old dataset, but I have tried it with the latest one and it works fine.
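I haven’t pasted the LACE script here, but the conversion boils down to something like this, assuming the rrdf package; the file name and the dc:title property are just examples, and the script in the repo may differ in the details:

library(rrdf)

# load the RDF dump into an in-memory store; this can take a while for the full dataset
lak <- load.rdf("lak-dataset.rdf", format = "RDF/XML")

# run whatever SPARQL you like against it, e.g. pulling out paper titles
titles <- sparql.rdf(lak, "
  PREFIX dc: <http://purl.org/dc/elements/1.1/>
  SELECT ?paper ?title WHERE { ?paper dc:title ?title }")

# keep the results as a data frame and save them in R's native format
titles <- as.data.frame(titles, stringsAsFactors = FALSE)
save(titles, file = "lak-titles.RData")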

Posted in Data Analytics