
Exit Review: Python 2 (and some related thoughts)

Python 2 has come to an end. I ported the last of my personal scripts to Python 3 a few months ago.

Perhaps the greatest feature of Python 2 was that after the first few releases, it stayed stable. Code ran and worked. New releases didn't break anything. It was predictable. And existing Python 2 code won't break for a long time.

The end of Python 2 has led to the end of that stability, which isn't a bad thing. Python 3 is now competing across a broader ecosystem of languages and environments trying to improve developer and runtime efficiency. Great!

I once saw a quote that Python is generally the second best solution to any problem. That is a good summary, showing why Python is so useful when you need to solve many different problems. It is also my review of Python 2.

So let's have some musings ...

Python has had poor timing. The first Python release (1994) came while Unicode was still being developed, so the second major Python version (2000) had to bolt on Unicode support. Had it waited a few more years, things could have been simpler by going straight to UTF-8 (see also PEP 538).
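
A minimal sketch of what that bolt-on means in practice (shown in Python 3, which makes the split explicit): bytes at the boundaries, text inside, with every conversion spelled out. Python 2 let the two mix implicitly, which is exactly where it hurt.

raw = b"caf\xc3\xa9"                 # UTF-8 bytes as read from disk or network
text = raw.decode("utf-8")           # boundary: bytes -> text
assert text == "café"
out = (text + "!").encode("utf-8")   # boundary: text -> bytes
print(out)                           # b'caf\xc3\xa9!'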

Every language has been adding async support, with Python 3 (2008) increasing its support in each minor release. However, like most other languages, functions ended up coloured. This will eventually be solved, almost certainly by having the runtime automagically do the right thing.
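
A tiny sketch of the colouring problem (my illustration, nothing official): an async function can't simply be called from synchronous code - the runtime has to bridge the two.

import asyncio

async def fetch():                 # a "red" (async) function
    await asyncio.sleep(0.1)
    return 42

def main():                        # a "blue" (sync) function
    # Calling fetch() only creates a coroutine object; it must be
    # awaited, and sync code can't await, so we hand it to the runtime.
    return asyncio.run(fetch())

print(main())                      # prints 42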

Python 3 made a big mistake with the 2to3 tool. It works exactly as described. But it had the unfortunate effect of maintainers keeping their code in Python 2, and using it to make releases that supported both Python 2 and 3. The counter-example is JavaScript, where the tools let you write the most recent syntax and transpile it to support older versions. Hopefully future Python migration tools will follow that pattern, so that code can be maintained in the most recent release and transpiled to support older ones. This should also be the case for using the C API.
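
For reference, 2to3 can also be driven from Python itself; a minimal sketch converting a single print statement (lib2to3 is deprecated in recent Python versions, so treat this as illustrative only):

from lib2to3.refactor import RefactoringTool

# Apply just the print fixer to one line of Python 2 source
tool = RefactoringTool(["lib2to3.fixes.fix_print"])
tree = tool.refactor_string('print "hello"\n', "<example>")
print(str(tree), end="")           # print("hello")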

The CPython C API is quite nice for a C based object API. Even the internal objects use it. It followed the standard pattern of the time, with an object (structure) pointer and methods taking it as a parameter. There are also macros for "optimised access". But this style makes changing underlying implementation details difficult, as alternate Python interpreter implementations have found out. If, for example, a handle based API had been used instead, it would have been slower due to an indirection, but would have allowed easier changing of implementation details.
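
To illustrate the trade-off (a contrived Python sketch of mine, not CPython's actual API): a handle keeps callers one step removed from the implementation, at the price of a lookup.

# handle -> implementation object; callers never see the object itself,
# so what sits behind a handle can change freely.
_objects = {}
_next = 0

def new_string(value):
    global _next
    _next += 1
    _objects[_next] = value        # implementation detail stays hidden
    return _next                   # the caller only ever holds the handle

def string_length(handle):
    return len(_objects[handle])   # the extra indirection: one lookup

h = new_string("hello")
print(string_length(h))            # prints 5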

Another mistake was not namespacing the third party package repository PyPI. Others have made the same mistake. For example, when SourceForge was a thing, it did not use namespacing, so the URLs were sf.net/projectname - which then led to disputes over who legitimately owned projectname. GitHub added namespaces, so the URLs are github.com/user/projectname (user can also be an organization). This means the same projectname can exist many times over. That makes forking really easy, and forking is perhaps one of the most important software freedoms.

Using NPM as an example, only one package can ever be named database, and it hasn't been updated in 6 years. On PyPI the equivalent example is apsw, which hasn't been updated in 5 years. (I am the apsw author, updating it about quarterly, but not the publisher on PyPI for reasons.) Go does use namespacing. A single namespace prevents forks (under the same name) and also makes name squatting very easy. Hopefully Python will figure out a nice solution.

Category: misc – Tags: exit review, python


Recommended: History of podcasts

I'm a fan of podcasts and especially longer form history podcasts. I've found that "History of" podcasts that cover various empires and locations seem to be rather good. The History of Rome podcast is a very good example, with many others following that format and principles. The format allows the shows to adapt over time, include listener feedback, and do experiments which often work well.

If you can't get enough, then Hardcore History has many good episodes and stories.

And at the meta level, there is even a History of "History of" podcasts.

Category: misc – Tags: recommendation


My Casio Smartwatch WSD-F30 experience

Summary

The manual (pdf) is comprehensive and describes the non-WearOS functionality well. r/WearOS covers the WearOS side - check the sidebar too. It is also worth noting that current watches tend to use identical hardware (same Qualcomm chipset, same screen resolution, same RAM, same storage etc.), although extras like microphones, speakers, and NFC differ.

Starting point

I've used Casio digital watches for as long as I can remember. Because they are water resistant, the watch can go anywhere I do, and I never take them off. My favourites over the last decade have been the Solar Atomic models. Solar means I never need to change the battery, and "atomic" means picking up radio time signals that come from an atomic clock.

Smartwatch?

Watches provide two conveniences for me - they are always there, and I can look at them very quickly. Phones are in chargers, pockets, etc., and take longer to extract and navigate to what you wanted to see.

Needing to be familiar with smartwatches, and to do development work, I naturally picked the Casio offering, which is upper mid-range in pricing.

First Time User Experience (software)

The FTUE is terrible. Android Wear (now WearOS) watches are not mature yet, and require a lot of compromise to stay within the available battery, CPU, and software functionality. It feels a lot like being given a decade old phone and told to make it work now.

Simultaneously the watch will be doing system updates, installing or updating apps, and have some tutorial overlay you can't just dismiss. All the while you are learning the compromises you'll have to make.

To be clear - it is sluggish. There will be 5 seconds between taps and resulting actions. The screen will go black for several seconds while apps launch. You are never certain if touches or button presses registered, and often end up doubling them which makes things worse. I also found the onscreen keyboard useless since I could never touch the right spot.

Things do settle down over time, but that sluggishness still remains some of the time. What helped me the most was to enable developer options and turn on "Show Taps". That confirms a tap was registered and shows where it was, helping with feedback and making the keyboard more useful.

Charging

Charging is done with a magnetically attached cable. The box came with a small USB power brick, and the USB to round magnet charging cable. I have never used the supplied power brick, and have had no problem connecting to any USB power source. I also bought a third party USB-C to magnet cable off Amazon, and use it most of the time. In short, the watch is not fussy about charging.

When sitting at my desk, the cable will stay in place providing there isn't too much unsupported cable length, so that is the main way I charge the watch.

Watch Display

  • A monochrome digital time display, easily readable in sunlight and difficult to read in low light. It uses a lot less power than the colour display, and you can run in this mode for 30 days with WearOS turned off. When WearOS is running, only Casio apps can write to this screen (other apps just get the standard time display).
  • Ambient mode colour display (lowest brightness). Unreadable in direct or indirect sunlight. This is used when idle with power consumption based on how many pixels are not black.
  • Colour display which uses lots of power, is readable in indirect sunlight and generally impossible to read in direct sunlight.

If you have the full colour display on and are interacting with apps, a full battery will drain in about an hour. Consequently, much of using the watch is picking the display mode that trades off power consumption, readability, and response time the way you want.

You can have the display activated by touch, button press, and rotating your wrist. My experience of wrist activation is that it rarely works when you want it to, and often activates when you don't. Because it activates full brightness, the battery can be drained very quickly.

Thoughts

WearOS is a lot less mature than expected. It is unclear if Google is losing interest.

Most watch faces try to be pretty and based on analog hands. It is difficult to find dense digital displays.

The Casio apps do work well. I'm glad Casio used WearOS instead of doing their own operating system with limited apps etc. However the result, including the G-Shock style case, seems pricey. A few more years of new models should improve this.

Ultimately you figure out how to get the watch to work for you, requiring more administration than a non-smartwatch. For me the benefits outweigh the hassle. I use Theater Mode from quick settings to have the time showing most of the time.

Category: misc – Tags: review


On defaults

I've been wondering what best practice for handling defaults is. In software there are generally three values: zero, one, or many. As a consequence, developers often pick a sensible number for "many", and allow configuration to change it.

Eventually defaults permeate the code, settings, user interfaces, product documentation, user forums, and search engine results. It spreads not from a single source of truth that tracks and propagates changes, but by being arbitrarily copied between systems.

As time passes, the default values need to change due to circumstances and experience. New features make existing values need refinement, while new interactions complicate matters.

The usual solution is to bump the major version and have humans, code, and documentation deal with the changes. That effort, especially all the settings changes, is what makes so many of us resistant to major version upgrades.

Starting software after a version upgrade is always a pain. Sometimes you are pleasantly surprised that it just works, but usually the logs are full of complaints about settings, things that previously worked no longer working and general yak shaving.

Postfix has a compatibility level to help defer the effort after a major version upgrade, but you are still on the hook for the upgrade changes.

An anti-pattern is software that generates an initial config file for you. It does have a very short path between default settings and the generated config file, usually including comments and explanations in that file. This is fantastic to start with.

But it causes problems over time. The settings, comments, and explanations become wrong. Looking at a config file that is a few years old is an exercise in archaeology and contradictions, requiring consulting the file, warning/error messages, logs, wikis, and other documentation.

So far the best I have is to prefer more 'automatic' settings, and keep the number of settings to a minimum.
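
As a sketch of what I mean by "automatic" (my own illustration, not any particular product's setting): the default is a policy computed at runtime, rather than a number copied into configs, docs, and forum posts.

import os

def worker_count(configured=None):
    # An explicit setting always wins
    if configured is not None:
        return configured
    # Otherwise derive "many" from the machine instead of hard-coding it
    return max(2, (os.cpu_count() or 1) * 2)

print(worker_count())              # derived from this machine
print(worker_count(16))            # explicit override

When the right answer changes (bigger machines, new features), the policy changes in one place, and no stale number is left behind in generated config files.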

Category: misc


Exit review: Emacs

A shocking time has come - I've given up Emacs, after using it for 20 years. When interviewing developers, one of the questions I ask is about their favourite editor. I don't care what the answer is, but I do very much care about why it is. An editor is a fundamental part of developer productivity, so I want to hear about the candidate caring about their own productivity and trying to improve it on an ongoing basis.

The irony is that I was using the same editor for decades. I did keep trying to find improvements, but never could. There are two sides to Emacs - one is as a competent & coherent editor, and the other is "living" in it: builtin web browsing, image viewing, email and news support, terminal emulators etc. I was never one of the latter.

Before Emacs I used vi. Its modal interface, small size, and availability on all systems made it a good tool. However it was text console only, and didn't do colour, menus, multiple files, or other useful functionality. (It does now.) vi does have a learning curve - I estimate it takes about 4 years to be good with it, and 8 years to be an expert!

I had known about Emacs for a while, but it was text console only, and didn't do colour, or menus. Each attempt to use it left me frustrated with what amounts to another arbitrary set of keystrokes. (I've always been a cross platform person so I was also juggling keystrokes for other operating systems and applications.) A colleague (hi Jules) introduced several of us to XEmacs around 1995. It had a gui, and colour, and most importantly a menu system. It was no longer necessary to memorize a large set of new keystrokes, as the menus showed them. You could do everything without knowing any, and then pick up those you use often enough.

By the mid 2000s XEmacs was languishing, and Emacs was slowly catching up with the gui. More and more packages only worked with regular Emacs (there were small but growing incompatibilities). I eventually made the switch from XEmacs to regular Emacs.

There was an explosion in different file types I was editing: Python, C, Javascript, Java, Objective-C, HTML, HTML with Jinja Templates, JSON, matlab, CSS, build scripts, SQL, and many more I have forgotten. Emacs had support for most. Support means syntax highlighting, indenting, jumping around notable symbols etc. More packages were produced that did linting (looking for common errors), and various other useful productivity enhancements.

At the same time a new editor, Sublime Text, was introduced. It had fantastic new interaction (goto anything, projects, command palettes, multiple selections, distraction free) and a rich package system (written in Python - yay!). I kept trying it, but kept finding issues that affected me. Development also seemed to slow drastically, and since it was closed source there was no way for others to improve and update the core.

Meanwhile Emacs became more and more frustrating. The web (HTML, Javascript, CSS) is not a first class citizen. Not many packages were distributed with the core, so you had to copy cryptic elisp code from various places, or use strange tools to try to get packages installed and kept up to date. Then you had to do that on each machine. Heck, the package repositories (eg MELPA) didn't even use SSL by default! My Emacs configuration file kept getting longer and longer.

Ultimately, tools these days are defined by their vibrant community, useful defaults, and easy to use extension mechanisms. Emacs has all of those, but they are of a different era and a different cadence.

I have switched to Atom. It had a rough initial exposure with performance problems, and the extremely dubious choice of being closed source. However both have been addressed. Just days before Atom 1.2 was released, I removed Emacs in favour of Atom 1.1. My configuration file is 10 lines long, and I get the same experience on every machine.

Category: misc – Tags: exit review


Developers should work in support

I am one of the many, many people getting the completely useless response when trying to upgrade to Windows 10.

Windows 10 Error Message

Closing setup and trying again doesn't work. Nor did changing my language settings, .Net repair tools, freeing up huge amounts of space, rebooting, examining log files, or reading tea leaves.

A lesson I learned many years ago is that developers should work in support [1] (eg 3 weeks a year). Normally their experience of support issues comes after they have been filtered through many layers of other people, and they don't see the issues that were resolved, even frequent ones. Essentially developers do not experience the friction that their customers and support staff regularly encounter.

I've done the support work myself, as well as watched other developers doing it. There are immediate fixes such as tweaks to tools, asking for information in a different order, or new ideas for how to address common issues (hopefully eliminating them). Then they will go back and fix unhelpful messages in the product. For example, a message like "couldn't find file" will be changed to say which file, and possibly detect whether the file doesn't exist or the directory containing it doesn't. Or the code will be made to create a default file and parent directories.
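
A hypothetical before/after of that "couldn't find file" fix (the function and messages are mine, for illustration):

from pathlib import Path

def load_config(path):
    p = Path(path)
    # Before: raise IOError("couldn't find file")
    # After: name the file, and distinguish a missing directory
    if not p.parent.is_dir():
        raise FileNotFoundError(f"config directory {p.parent} does not exist")
    if not p.is_file():
        raise FileNotFoundError(f"config file {p} does not exist")
    return p.read_text()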

From that point on, the developers produce substantially better diagnostics. They work out what information they would need when answering a support call, and make the diagnostics provide it all. But as time passes, the memories fade, and shortcuts are taken. That is why working in support should be done regularly.

The Microsoft developers responsible for the screenshot "something happened" are likely a lost cause though.

As a side rant, Linux distributions are also distributed as ISO files. They can be used as is on optical media, network booted, dumped as is onto USB flash drives, and work on BIOS and EFI systems (even Apple's non-standard EFI). The Windows ISO is considerably more painful, especially if your machine doesn't have an optical drive, like pretty much all of them these days.

[1] This applies to larger companies. In small companies/startups you often end up with everyone doing support.

Category: misc – Tags: rants


Exploring two different battery wifi hubs

Fundamentals

I recently decided to get a multipurpose device. These devices can do all of this:

  • Large battery to recharge other devices over USB (eg your phone and tablet)

  • Provide wifi access to a network in front of it

    A wifi network is provided behind the device with a name and password of your choosing. You connect one or more of your devices to that.

    In front of it you can have no network at all, a wifi network (unrelated to the one behind), or a wired ethernet network. You do have to configure access to that network, but only on this device. Your devices behind it are blissfully ignorant of the real network.

  • Exports attached storage (eg USB stick, USB hard drive, sdcard) via both SMB (aka "Windows network file sharing" supported by virtually everything these days) and DLNA (a multimedia network protocol, supported by many although the Apple ecosystem prefers "iTunes")

    On Android and desktop systems, Kodi works with both SMB and DLNA, while Android's ES File Explorer handles SMB.

  • They are cheap ($40 - $60 depending on battery capacity)

  • Can run completely off the battery so no additional power is needed. They will run for many hours. They will also run while being charged.

  • Similar in size to a pack of cards

  • They use popular standards - eg they charge using standard micro-USB, provide power for devices with standard USB port, use existing filesystems, standard protocols etc. There is no need to carry different cables or chargers, and any software speaking SMB or DLNA works.

After some agonizing over Amazon reviews, and reading the manuals, I ended up with two.

Photo of both products

The left red one is a HT-TM05 TripMate Versatile Wireless N Travel Router (Amazon page) although the packaging and internal names say Tripmate Sith. The right white one is a RAVPower RP-WD02 Wireless Filehub / Portable Travel Router (Amazon page). They are sold by the same company, and the underlying products are substantially similar except for the hardware layout.

How they do all this turns out to be quite simple. The battery provides power, and a small Linux based computer is attached. It runs a MIPS based processor (the manuals even tell you the exact manufacturer and model number) with 32MB of RAM, and 8MB of builtin storage for the software. For some reason MIPS cores are very popular in network access devices - if you have a box at home from the likes of Linksys, D-Link, Netgear etc, it is almost certainly using MIPS.

Praise

They fundamentally do what they say. Both RAVPower and Hootoo provide Android and iOS apps to help access and configure the devices. However, neither is required, and you can do all the configuration in a web browser by going to the device address (default 10.10.10.254). It looks like the apps are really just some logic to find the device on the network, and then show the admin pages in a WebView. Note that I have never tried the apps.

Each device has some nice highlights the other doesn't. (If only someone made something combining the best of both.) The Hootoo has some lights on top to see battery level (they only light when you press the button as I did before taking the photo). The RAVPower has a micro-sdcard slot. The Hootoo can stand up. The RAVPower has a label giving default username, passwords and IP address. The Hootoo web admin pages are nicer, simpler and mobile optimised. The RAVPower ones tell me the device's external IP address. The Hootoo's lights go on or off in sequence during power on and power off so you have progress feedback.

As a test I left the HT-TM05 10,400mAh device on and connected to the wifi network. I didn't have anything connected to it, so this is a measure of the longest it can continuously run. After 45 hours (3 hours short of two full days) it had dropped to one battery led (out of four), and I decided to recharge it rather than deplete the battery completely. That is an impressive runtime. The RP-WD02 has a 6,000mAh battery, so you would expect a proportionate maximum runtime of around 26 hours (45 × 6,000 / 10,400).

Suggested Improvements

The RAVPower has ports on 3 sides, which can lead to cables sticking out in all directions. The Hootoo is nicer, with ports on two adjacent sides. Sadly the micro-USB for charging is right next to the USB for connecting storage; unless the cables connecting them have skinny heads, you can't have both connected. If you use an sdcard reader on the Hootoo then it will overlap the charging port. You get a choice of ports that are too dense (Hootoo) or not dense enough (RAVPower).

Hootoo really should have a builtin sdcard reader.

The web admin UIs have no help. When you want to safely remove attached storage, you'll end up at a page with a button labeled "Delete". It takes a lot of courage to press the button, to confirm that it really means "remove" or "eject" (it does). Firmware updates on both devices added an "auto jump service" you can enable or disable. Good luck figuring out what that does!

Censure

Software versions

It didn't take me long to get access to the devices. Here is what the Hootoo said it is running:

$ cat /proc/version
Linux version 2.6.36 (gcc version 3.4.2) #8 Fri Jul 11 10:44:45 CST 2014
$ /usr/sbin/smbd --version
Version 3.0.24

RAVPower:

$ cat /proc/version
Linux version 2.6.21 (gcc version 3.4.2) #5 Fri Nov 1 13:36:46 CST 2013
$ /usr/sbin/smbd --version
Version 3.0.24

The Linux kernels date from 2010 (2.6.36) and 2007 (2.6.21). Neither version is long term supported, and both have various known security holes, although remote security holes in the kernel are very rare.

smbd is the main component of Samba and provides networked file access. Version 3.0.24 was released in 2007, and there have been numerous releases since then, including 3.0.25 a few months later which fixed 3 security holes. Virtually all Samba security holes are remote since that is what it does.

I didn't check the versions of other accessible services (eg DLNA server, NTP), but this pattern of older versions with known problems is most likely. (The gcc version above is from 2004.)

Network exposed

Why do the versions matter? Both vendors (see the RAVPower update) made a very bad decision - all network services, including the web admin pages, Samba, DLNA, and even a telnet server, are accessible from in front of the device. If for example you are at an airport, campus, coffee shop, hotel or somewhere else with a network, and connect the device, then anyone on those networks can connect to the network services on the device. They do not need to connect to its wifi. A bad guy has more than 5 years of published security holes to choose from, and can gain complete control over it. (The default usernames and passwords also make this a breeze.)

Complete control means they can extract your saved wifi passwords (eg for your home network, or the current network), redirect or monitor your traffic, replace the firmware, etc. To a certain extent this is no different than connecting to someone else's network, which you have to assume is hostile - but this device goes around with you. (Both vendors use the word 'secure' in their Amazon descriptions.) While that kind of exploitation sounds far fetched, bad guys are already doing it.
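
Checking for this sort of exposure is straightforward; here is a quick sketch of the kind of probe I mean (10.10.10.254 is the behind-the-device default - substitute the address the device was given on the front network, and run it from that network):

import socket

DEVICE = "10.10.10.254"            # substitute the front-side address
PORTS = {23: "telnet", 80: "http", 81: "http-alt", 139: "smb", 445: "smb"}

for port, name in sorted(PORTS.items()):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    exposed = s.connect_ex((DEVICE, port)) == 0   # 0 means it connected
    s.close()
    print(f"{port:>5} {name:<9} {'EXPOSED' if exposed else 'closed'}")

Anything reporting EXPOSED is reachable by every stranger on that network.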

Bridge mode

Both products' Amazon pages claim to support a bridge mode, but this is marketing fluff and not the term as understood by networking people. They never bridge in the sense of joining the network behind the device and the network in front into a unified LAN. The devices always do network address translation (NAT) and never any form of bridging.

Admin Pages

As far as I can tell, Hootoo are the firmware developers. Their older products, as well as the RAVPower, use a fairly clunky web interface. It looks like a single page application, but doesn't do it well.

The Hootoo has a newer web interface where the URL changes as you navigate around pages, making it much easier to see what is going on, send links to others or other devices etc. It is also mobile centric, serving the same pages that look good on a phone as on a large monitor.

I had a quick look at authentication to see if there were any simple holes. Both use their own login screen, which means your browser can't prompt you nor remember the password. They set a session id cookie and require it to be present for other web accesses.

The pages are always over http, and not https, although there isn't much of an alternative. (Browsers are getting very hostile to self signed certificates.)

Both devices ended up with a second web server on port 81 (standard http is port 80) that appears to be related to the admin server. There is no need for it, and I'd be concerned about what it does.

Many changes cause the device to reboot, with your browser showing a minutes-long "please wait" message. This gets very annoying. I understand why it is done (far simpler to code and test), but not doing it so much would be a more pleasant experience.

Firmware updates require storage to be connected as the devices don't have temporary storage. On both devices they also wiped out all settings.

RAVPower update

20 May, 2015

I sent an email to RAVPower support about the network exposure and GPL issues. There was no response. A few days later there was a comment on my Amazon review asking me to email support, so I did a second time.

They replied that the issue had been fixed with new firmware, and gave a pointer to some source. I can confirm that the new firmware does indeed stop exposing network services to the public.

The source link was to Hootoo's website and looked like an effort had been made for some GPL awareness. It included a document outlining components, their version numbers, and license. It also included the kernel source code and Samba (including patches). I did verify the kernel and Samba versions matched, but did not verify they could be built or were exactly what was on the device (both GPL requirements). There didn't appear to be much other source present.

I did have more interaction with support, who didn't understand the difference between telling me about that source drop and actually complying with the GPL. The source needs to be available to all users (without their having to ask), must retain copyright notices, must be complete, and more.

Hootoo update

28 May, 2015

Email to Hootoo support went unanswered. However I did see new firmware appear, which claimed to add exFAT support.

On the network exposure front, the telnet server was disabled, but another web admin server appeared on port 81.

Category: misc – Tags: review


Prisoners are people

The US penal system is despicable in many different ways. John Oliver covers some of it.

Category: misc – Tags: reality


Paying for incoming calls

When people find out that you pay for incoming cell phone calls in the US, it seems illogical, and likely part of the various ways the country is dysfunctional (eg politics, health care, rampant hypocrisy). However, there are good reasons for this system, and your own country's system has issues you may not be aware of.

Whenever a phone call is made there are termination rates - the amount paid to the operator on the receiving end [1]. In most countries the termination rates for cell phone calls are higher than for landlines (note the termination rate covers where the call ends up - it isn't related to where the call is from - cell, landline or other).

This means that when a call goes to a cell phone, it costs more than going to a landline. Somebody ends up paying that difference.

Calling rates

Screenshot from a random calling card site. Note that these costs include the costs to route the call to the country, in addition to the landline or cellular termination. ie the termination fees have an even larger disparity than shown here.

Look at the United Kingdom or France, where the cellular rates are considerably higher than landline rates [2]. Most countries decided to make the caller pay for these higher rates, but the caller had to know they were doing so. Consequently they had to allocate new area codes for cellular (these are the UK ones). Every person making a call has to know which codes are which, and have some knowledge of the different rates [3]. Receiving parties can't port non-cellular numbers. Cellular carriers have no incentive to reduce their rates.

In the US the difference between landline and cellular is paid by the receiving party. The North American Numbering Plan means area codes can't change from 3 digits without massive disruption, and that there aren't enough spare area codes even if they wanted to assign them to cellular (eg 36 area codes would be needed to get one number for every American and Canadian, and have some pattern to recognise the codes).

So is the US system better? It has a number of advantages:

  • People don't have to remember even more rules about area codes and differing charges
  • There is greater incentive to reduce the cellular extra prices since you are affected by every minute of incoming and outgoing calls. With the model used in other countries, the termination rate is set by the receiving customer's carrier and that customer has far less visibility or bargaining power over the rate.
  • Cellular was easily rolled out without disruption. An existing number could switch to cellular and no one else would have to know or care. If you were a business (eg a plumber) you didn't have to worry about people calling you fretting over getting charged more for the call than your competitors.
  • Numbers are easily portable. You can switch any number to use cellular. Countries using the other scheme can have portability but usually it is limited to cellular area codes only.
  • The method used to connect the number with the recipient is only a concern of the recipient.

The US system does make sense. And someone is always paying for the difference between landline and cellular, virtually everywhere.

The US carriers also charge for incoming SMS. This is more a case of being able to get away with it, mixed with some of the reasons above. However most plans these days are for unlimited voice and SMS, with the big charges for data consumption.

[1] There are various exceptions like freephone/toll free numbers, premium services etc.
[2] You can also tell which countries have a poor landline infrastructure and cellular competition.
[3] You already have to know which area codes have no extra charges, which have minimal charges, and which are premium. For example, UK folks have to be aware of this list.

Category: misc


Farewell: Mario Kart Wii

Last week Mario Kart Wii online services went away. I am a fan of driving games and have had Mario Kart Wii since it came out in 2008. Most driving games only allow for perfection. If you take a corner badly or have a crash, then that is it. You'll have lost several seconds and have to start again for the position or time [1].

Mario Kart is very forgiving, using goodie boxes scattered throughout the tracks that contain random items. The closer to the front you are, the more pointless the goodies; the further back you are, the better the stuff, especially items that let you get closer to the front.

That all leads to a nice balance. Mistakes that lose position get you better goodies that let you recover. Players who aren't as good drivers also get opportunities to move up. You get three laps of hectic racing with everyone having a reasonable chance of a good scoring position. And the best driver can come last too.

The single player game, racing against the computer is ok but not that much fun. You need to complete various cups to unlock vehicles and characters. It just isn't that hard nor does the computer offer much challenge.

Online, on the other hand, is spectacular fun. You race against other people. Unlike the computer, they do all sorts of unexpected things, as well as cool tactics. This makes every race unpredictable and lets you use your own nefarious tactics. Fortunately, Nintendo ensured there is no chat with other players, unlike the swearing, racism and misogyny reported on other platforms. Heck, you can't actually tell they are even people, other than that they don't behave like the computer does. When playing you'll assign all sorts of motives to actions, be lenient or get revenge.

The engineering is impressive too. The players can be all over the world (and often are), which means it takes a while for position and speed information to reach all other players. Until the correct information arrives, the game has to predict where the karts are in order to show them to you right now. This is why you can sometimes think you hit or shelled someone, but then nothing happens.
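
The usual technique for this is dead reckoning - a simplified sketch of mine, not Nintendo's actual code: until fresher data arrives, extrapolate each kart from its last known position and velocity.

def predict(last_pos, velocity, elapsed):
    # Extrapolate: position + velocity * time since the last update
    return tuple(p + v * elapsed for p, v in zip(last_pos, velocity))

# Last update: kart at (10.0, 5.0) moving (3.0, 0.0) units/second.
# 0.25 seconds later, with no newer data, we draw it here:
print(predict((10.0, 5.0), (3.0, 0.0), 0.25))   # (10.75, 5.0)

When a real update arrives and disagrees with the prediction, the kart gets snapped or smoothed to the correct spot - which is exactly the hit-that-wasn't you sometimes see.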

Online is what makes this game.

The Wii doesn't do code storage or online code updates, so Nintendo's developers had to get everything right the first time for the disc. (By contrast, Gran Turismo 6 for PS3 had updates every few days after release.)

There are two areas that Nintendo didn't get right. The first is balance - you expect characters and vehicles to be approximately equal. There are different attributes - eg one vehicle may have higher speed but lower acceleration, or vice versa. These can still be balanced. Sadly they gave the bikes too much advantage, with the consequence that virtually all the highest scoring players use a big character (eg Donkey Kong) on a bike.

The second area is the waiting. Each race is a multi-step process: waiting to join a race, waiting for everyone else to join, selecting a track, waiting for everyone else to select a track, waiting for the system to pick the track, waiting for everyone to load the track, and then finally you get to race. A lot of this waiting could be combined to make the whole process quicker.

Sadly near the end the distasteful topic of cheats came up. Some people reverse engineered what was going on and could for example shell every player as the race started, or avoid having anything affect them. Despite these "impossible" things happening, Nintendo never seemed to do anything about it.

So what is next? Wii U has a new Mario Kart coming out, but the value isn't that good: it effectively costs over $300 because you have to buy a relatively unpopular system. I'm going to pass, and just remember the several years of multiplayer Mario Kart Wii for the fun it was.

[1] A notable exception is Excite Truck, where you lose a fraction of a second in a crash, and even get points for how spectacular the crash is. When the game resumes you are still in the thick of the action instead of seconds behind in the dust.

Category: misc – Tags: exit review
