The Audi AI:Trail

As part of a series of concept vehicles, Audi released the AI:Trail. A number of automotive news outlets have covered it in detail, so I won’t rehash the specifics here. Long story short, it’s a concept vehicle designed for off-road use. It’s electric, and I think it looks really neat.

Like with other concept vehicles, there are a few things about it that wouldn’t quite make it to the real world, but as a design exercise, it’s still quite refreshing.

Because it has an electric drivetrain, there’s no need for the traditional “engine at the front” design like a pickup truck. In this case, the designers took some liberties with that. They added the ability to flip the windshield up to store things in the front – almost like a hatchback, but on the front. Would that make it a hatchfront?

Some of the other concept’s interesting features:

  • “Drone headlights”. Rather than having forward facing bright lights fixed on the vehicle, drones take off from the roof rack, and shine on the road ahead. (This is definitely quite fanciful, but still a refreshing take on things)
  • Suicide doors. (What sort of concept wouldn’t be complete without doors that open in a different way?)
  • Rear seats that can be pulled out and set up elsewhere, in a sling sort of way. This, I find quite interesting. In theory, it could also mean that the seats can be folded up much smaller than if they were a solid object. Think of scrunching up a piece of fabric vs. a rigid-backed seat.
  • As mentioned, the “storage in the front”, complete with some simple tie-down straps. (No doubt, so that when you stomp on the accelerator, things don’t fly into your face).
  • A clear panel out the front, for some really good forward visibility, as well as a very low belt line out the doors.
  • Lots of natural looking materials used in the cabin – rope handles, wood, etc.

I can definitely pick up the vibe that Audi was going for with this concept. I think there’s a core of something interesting here, even if it is just a concept.

If I were to make changes to it, I’d do the following:

  • Fix the windshield in place, and add some sort of wipers to it. Maybe bottom mounted in the corners, similar to a Honda Civic.
  • Add a few solar panels to the roof rack. Not for the sake of adding any sort of appreciable range to the vehicle, but to reduce any sort of vampire drain on the battery.
  • Swap out the suicide doors for something that folds upward, or pivots out of the way. Don’t necessarily have them cut into the roof line of the vehicle, but still have them fold up or out. If they folded up, it could provide shelter when raining. The downside to that is it restricts access to the roof rack.
  • Leave a portion of the running board attached to the vehicle, and not to the door. This could be used as a step to help get into the vehicle. It also could be used as a seat, if you are sitting there with the door open. It would also be used as a step to reach up further onto the roof. (Although reaching onto the roof if the doors fold up could be tricky).
  • Add headlights. They could be molded into the front little wings that protrude over the wheels.
  • Add fenders over the wheels that move with each wheel, independent from the rest of the vehicle. This would prevent a lot of road spray from happening in wet conditions. Make them detachable so that they can be removed if going over particularly rough terrain.
  • Add a winch and front/rear bumpers. Also, some sort of rear hatch, if it doesn’t have one already.

All-in-all, I really do like what Audi was going for. It makes me want to make my own version of it. I’d love to talk to the design team that worked on it, as well as the team that built the prototype to get an idea of how they did certain things.

Terrible data makes for terrible code

I recently ran across some code that was dealing with a Python list of dictionaries with approximately the following content:

[
    {
        "key": "Some Value",
        "value": 1
    },
    {
        "key": "Some Value",
        "value": 2
    },
    {
        "key": "Some Value",
        "value": 3
    }
]

The original author of the code had full control over the structure of the data. They were responsible for both creating and consuming it (within the same application). When the consuming code got the data, one of the first things it did was take the very first object and get the value of “key”. It then proceeded to ignore all the other elements, because their keys were all the same. The Python snippet that it used to get the first element was… creative, to say the least.

With data like this, there’s inevitably someone who makes the dangerous assumption that there will always be at least one element in the array, and does something like:

someKey = data[0]['key']

This is dangerous. What happens if data doesn’t contain any elements? Sure, you can argue that if the person consuming the data is also the person creating the data, they should know what is and isn’t there, but does that guarantee hold true for different environments? Will it work on dev, test, and prod? Will it work in a unit test? Will it work on another developer’s machine? Will it work if a network call fails to bring back and populate the data? Will it work when someone tries to reuse that same function somewhere else? There are just too many unknowns to make assumptions like that.
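A minimal, defensive sketch of the safer approach (the data and variable names here are illustrative, not from the original codebase):

```python
# Hypothetical data, shaped like the structure above.
data = [
    {"key": "Some Value", "value": 1},
    {"key": "Some Value", "value": 2},
]

# Guard against the empty-list case instead of assuming data[0] exists.
some_key = data[0]["key"] if data else None
```

If the list is empty, you get `None` back and can decide what to do about it, instead of an `IndexError` blowing up somewhere downstream.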

You may notice something here. If “key” always has the value of “Some Value”, why not format the data something like this?

{
    "key": "Some Value",
    "values": [
        1,
        2,
        3
    ]
}

Not only is it more compact, it’s also easier to read, and easier to get the value of “key”.
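As a sketch (assuming “key” really is identical across every element, as it was in this case), collapsing the verbose structure into the compact one is only a couple of lines:

```python
# Illustrative data, shaped like the verbose structure above.
verbose = [
    {"key": "Some Value", "value": 1},
    {"key": "Some Value", "value": 2},
    {"key": "Some Value", "value": 3},
]

# Collapse into a single object: one "key", plus a list of values.
compact = {
    "key": verbose[0]["key"] if verbose else None,
    "values": [item["value"] for item in verbose],
}
```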

Being organized with your data is like being organized with your desk, or your workshop. If you know exactly where things are (and they are in well thought out, labelled locations), it makes it easier for other people to help, and it makes it easier to deal with later on. You’ll be more productive in the long run.

Don’t be like Dennis Nedry (from Jurassic Park). Keep your data (and your desk), well organized.

A breeding ground for software bugs

Some time ago, I came across an odd situation. It was a case of things being built on things, built on top of other things, and a rat’s nest of unintended consequences resulting in unnecessary complexity and difficult to resolve bugs. The root cause was… well, I’ll let you be the judge of that.

A particular Django model tracked details about vehicles – details such as the make, model, VIN, the current driver, etc. In addition to these fields, there were also two inconspicuous fields – latitude and longitude. A web endpoint that needed to display vehicles on a map would fetch all the vehicles and their locations in one call. In the endpoint, the last known latitude/longitude was retrieved from another service, and the vehicle’s Django model was saved with the latest latitude/longitude. This meant that the last known latitude/longitude wasn’t exactly real-time, but was “good enough” when the feature was originally written.

As things go, additional features were written, and related models were created. The size of the system grew and grew, and more and more data was added. One of the features added was tracking the history of each vehicle – not the physical location, but other historical details about the vehicle – e.g.: who was currently using it, was it towing a trailer, what office it was associated with. Things like that. The history was designed to only track particular changes. For example, if the owner/driver of a vehicle changed, it would need to trigger a call to an external service and update data there, but if the latitude or longitude changed, the external service wasn’t notified.

The original endpoint for getting the list of vehicles and their locations grew slower and slower. This wasn’t just a case of downloading more and more data – it was worse than that. Much worse. Instead of taking seconds, it started taking minutes.

Part of the problem was in the original endpoint. From a theoretical standpoint, a GET request should be safe and side-effect free. It shouldn’t be calling a Django .save() on any sort of model. But this endpoint wasn’t like that. Instead, if it was fetching 10,000 vehicles, it would potentially be making 10,000 save calls, every single time a user called to fetch a list of vehicles. So if two users requested data at the same time, there would be 20,000 potential save calls, even if the data was identical. This alone was terrible. Eventually the endpoint was changed to just return the data, and a separate cron job took care of updating the vehicle locations – but only if they were actually significantly changing.
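In plain-Python terms (the names and in-memory stand-ins here are my own, not the actual codebase), the fix separates the read path from the write path:

```python
# In-memory stand-ins for the database and the external location service.
vehicles = {1: {"lat": 49.0, "lon": -123.0}}
location_service = {1: {"lat": 49.5, "lon": -123.1}}

def get_vehicles():
    """Read-only: the GET endpoint never writes anything."""
    return [{"id": vid, **pos} for vid, pos in vehicles.items()]

def update_locations(min_delta=0.01):
    """Run from a cron job: save only when a position has actually moved."""
    for vid, latest in location_service.items():
        current = vehicles.get(vid)
        if current is None:
            continue
        if (abs(latest["lat"] - current["lat"]) > min_delta
                or abs(latest["lon"] - current["lon"]) > min_delta):
            vehicles[vid] = dict(latest)  # the only place a "save" happens
```

Two users hitting `get_vehicles()` at the same time now cost two reads, not 20,000 writes.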

The next rat’s nest came from the history tracking. Because updating the latitude/longitude shouldn’t trigger any sort of history tracking, a flag was added that would cause it to bypass any of the history tracking. So certain places (like the cron job) would bypass history tracking, while others would still have to go through the process of determining if something significant had actually changed. The logic in the pre-save and post-save on the original Django model grew uglier and uglier. It became responsible for creating new histories, closing off existing histories, pushing data to the external service, and so on.

This situation ended up being an almost perfect breeding ground for bugs. The external service wasn’t getting updated when it should have been. Histories were being created (or not updated) incorrectly. Performance was not as good as it could (or should!) have been. Attempts at resolving bugs with the system took much longer than they could have because of convoluted logic and interlaced behaviour.

The takeaway from this? Maybe the following:

  • Keep your transient data somewhere else. If you have frequently changing data (e.g.: changing every second), a standard Django model is probably not the best place for it. Consider something like a Redis cache, with ElasticSearch or CosmosDB for longer-term storage and retrieval.
  • Separate your concerns. Have functions that strictly handle one thing and one thing only.
  • Don’t ever call .save() in a GET endpoint.
  • Spend time up-front designing a proper solution. If the requirements change, the design may change too. Take time to periodically review existing features, and how new ones will be added, rather than just “bolting things on”.
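On the first point, the shape of a transient store is simple enough to sketch. This is a toy stand-in for something like Redis (which gives you key expiry out of the box via TTLs); the class and key names are my own:

```python
import time

class TransientStore:
    """A toy stand-in for a Redis-style cache: values expire after a TTL."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._data = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._data[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires = entry
        if time.monotonic() > expires:
            del self._data[key]  # lazily drop stale entries
            return default
        return value

locations = TransientStore(ttl_seconds=30)
locations.set("vehicle:42", (49.2827, -123.1207))
```

Frequently changing positions live here; the Django model only ever sees the occasional, significant change.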

The Dangers of Increasing Cognitive Load

Let’s suppose, that in order to perform a given task, you have 3 steps you need to do:

  1. Check the color of the incoming widget.
  2. If the widget is red, put it in a size-A box, if the widget is blue, put it in a size-B box.
  3. Put the box on the conveyor belt.

Simple enough, right? But what happens if we slowly add more steps? What if we add another color? Or what if we add a second conveyor belt that is only for size-B boxes? What if we add a condition that two widgets can share the same box as long as there are two red and one blue?
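The original three-step task, sketched as code (the colors and box sizes are from the example above):

```python
def pack_widget(color):
    """The original, simple task: three steps, one decision."""
    # Step 1: check the color of the incoming widget.
    # Step 2: red goes in a size-A box, blue goes in a size-B box.
    if color == "red":
        box = "size-A"
    elif color == "blue":
        box = "size-B"
    else:
        raise ValueError(f"unknown widget color: {color}")
    # Step 3: put the box on the conveyor belt.
    return ("belt", box)
```

Every new rule – a third color, a second belt for size-B boxes, the two-red-one-blue sharing condition – is another branch in this function, and another branch in the head of the person doing the job.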

Sometimes I feel that software development is like this. We start out with a pretty basic workflow. Then, as the project demands more, we slowly add more steps to our environment setup, our deployment, or even our development process, and each addition asks more from the person performing the task. Before you know it, you’ve got software maintaining other software that maintains your build process. And what used to take <10 seconds now takes 15 minutes.

Here’s a few examples I’ve seen recently, which have frustrated me:

  • Finished code review on a ticket, and it’s merged? You’ll need to manually move the ticket to a new Jira status to let QA know it is available for testing. If the ticket is marked as not needing QA review, it can be marked as complete.
  • When generating a migration that touches a particular table, you’ll need to perform a separate 4-step process to generate a series of raw SQL statements.
  • When reviewing a ticket, be sure to run a linting tool on the code to ensure that it meets agreed upon coding standards.
  • When reviewing a ticket, be sure to run unit tests that test documentation, as well as the unit tests directly impacted by the changes being made.

What’s even more frustrating is that the majority of these could be resolved:

  • Moving ticket statuses based on GitHub pull requests statuses could be automated.
  • Migrations requiring custom SQL? This may be a code smell of a larger underlying issue.
  • Linting should be automated. There are GitHub plugins to do this sort of thing.
  • Unit tests should be run automatically as well, when a pull request is put up.

More cognitive load (asking someone to perform more and more tasks) will eventually result in failure. Mistakes will be made. Steps will be missed. It also takes longer to bring new team members up to speed. It results in frustration.

So how do you reduce this?

  1. Automate what you can. Yes, this will take time, but it will save time in the long run.
  2. Account for it in sprint planning. Allocate a little time to cleaning up your existing processes.
  3. Actively fight adding new steps to any process, unless it’s absolutely critical.

Things I learned by watching Star Trek: The Next Generation

Growing up, we never had cable TV, so I never really got to watch shows like Star Trek: The Next Generation.  Over the last few years, I’ve slowly been able to watch it via Netflix.  There have been a number of things that I picked up on, which I thought were interesting:

First, the funny observations:

  • No matter the alien race, they all seem to communicate using the same audio and video codecs.  Truly a miracle!  (I can’t remember where I heard this on Twitter, but it stuck!)
  • No one ever spends time going to the bathroom.  According to the behind-the-scenes snippets, there’s only one bathroom on the ship.  That’s got to be some galactic-level stench!
  • There must be an entire second (or third) crew that you never see.  Someone needs to fly the ship when the regular crew goes to bed, but you rarely hear about this.
  • A large portion of a star ship seems dedicated to sleeping quarters.  Either that, or the red-shirts end up hot-swapping bunks.  Compared to a ship or a submarine, there seems to be a lot of personal space, at least for higher-command personnel.
  • Almost any medical ailment, alien or otherwise, can be cured with a hypo-spray.
  • Security in the future is just as bad as security nowadays.  It seems like anyone can randomly hack into the ship’s systems.
  • Turbo-lifts must be brutally slow compared to modern elevators.  They seem to take an awfully long time to get where they are going.
  • Tasks of the future must be much more difficult.  The futuristic star ships seem to lack any sort of auto-pilot.

And more serious observations:

Mental health is treated with a similar level of importance as physical health.  Everyone has access to counselling services without any sort of attached social stigma.  Even the captain and high ranking leadership rely on their counselor for advice.  In fact, it seems like everyone on the ship undergoes some sort of routine mental health check-up, just like they would a physical health check-up.

Everyone works for the common good, but in their own areas of expertise.  Those that aren’t willing to put in the effort are dropped off at the nearest star base.  In order to be on the ship, you have to pull your weight.  Everyone does their duties to the best of their abilities, and standards are set high.

Obesity and substance abuse are almost non-existent.  It seems like the crew is always in great shape.  There could be a few reasons for this:

  • There’s an importance placed on being physically well, and crew are expected to take time to be physically active.
  • Diet can be strictly controlled.  Maybe synthesized food doesn’t have the same caloric value?  Who knows.
  • Maybe we’ve cracked the mystery of gut flora, and are able to more strictly control how nutrients are absorbed.

People still have to learn to get along.  Even the future will have inter-personal conflicts.  Learning to get along with people you don’t like (“Shut up, Wesley!”) is still a critical, much-needed skill.

While some of the nuances are entertaining, it’s still fun to occasionally dream of what the future may look like.  And perhaps more importantly, how we want to shape it.

 

The dangerous path of choices

Let’s suppose you are developing a piece of software.  You think that your users would like two different ways of accomplishing a specific task.  So your user’s workflow becomes something like this:

[Diagram: Action A, then either Action B or Action C, then Action D]

(Excuse my terrible MS Paint skills)

They start with Action A.  Based on their preference, they can then do B or C, and then eventually end up at D.  This seems fine, and your users may like it.  Or they may not share your recommended workflow, and always use Action C (and never pick Action B).  There’s a dangerous game of pros and cons to be played here:

  • Do you try to anticipate your users’ needs, and provide both options (at the expense of more development time)?
  • Do you try to simplify development, and allow just one set of actions (with the risk that your users will never complain loudly enough that a more desired workflow is wanted)?

Would you be willing to change your decision if it was a more complex workflow?  What if instead of two options, there were three?  What if Action B and Action C actually take several weeks on their own to develop?  What if you develop both, have decent metrics, and then can see which one they are choosing most often, and eventually remove the other (to save maintenance work)?

My experience generally leans toward this: keep it simple, then add as needed/wanted.

Like the rope bridge philosophy, or like Colin Chapman’s “Simplify, then add lightness”.  In a few short years, there’s a reasonable chance that you’ll end up throwing away A, B, C, and D, and end up doing something entirely different.  Yes, there is the risk that your users won’t complain that something is missing, but if you are spending enough time with your users (or are using your own product/eating your own dog food) such cases should quickly become apparent.

Building something of value

On occasion, I like to build stuff – usually out of wood.  To my credit, I’ve made a few pieces of furniture in my day (a baby change table, a curio cabinet, a chest of drawers, and a few other things).  But not everything I do in the garage results in a usable piece of furniture.  Sometimes it seems like I’m out there – “puttering”.  In some cases, it’s making a new tool to make the next job easier.  Other times, it’s tidying up after making a mess.  On the rare occasion, I’ll do some sort of experiment – like trying to polish a coconut or making a homemade frame saw.  These experiments don’t always result in some sort of usable product, but there is learning to be had.

Sometimes I wonder why people create some products.  The Juicero seems to be the latest Silicon Valley punching bag.  For those not familiar with it, it’s basically a machine that punches a small hole in a pouch of juice and pours it into a cup.  It’s easy to see how the idea came to be.  Coffee machines that use the little pods seem to be the latest rage, and a good money maker.  But not everyone drinks coffee.  So why not little pods of juice?  The problem is that there’s no work needed.  At least with a coffee machine, it has to mix the contents of the pod with water and heat it up.  With the juice… nothing.  It basically just pours you a cup of juice.  That’s not a problem that needed solving.  It offers little to no value.

So what happens if you are being forced by upper management to build a product that you can see has little to no value?

That’s a question I don’t have an answer for.