Recent Updates

  • Joel 6:24 PM on July 23, 2014 Permalink | Reply
    Tags: git, horrible bugs, please don't make me use linux, this is why I like C#   

    One of the strangest bugs… 

    One task that I’m currently working on is a Linux shell script that sets up a bunch of stuff for a deployment environment. Part of the setup involves creating a bash script that is used as a post-receive hook in git. In short, any time git receives a push, it runs the script.

    Things with the overall bash script were going well, until I noticed something odd. One particular line in the bash script was getting changed somewhere between the machine I was writing it on and the server. The script contained the following line:

    git checkout -f

    Somehow, when I looked at the results of running the bash script on the server, that line had become this:

    git checkout 96f

    Not even close to being the same! At first I thought it was something that Ubuntu was doing to that particular line. Could it be somehow mangling things because I wasn’t properly escaping the dash symbol? Could it be that I was having to run dos2unix to alter line endings after getting it to the server? No matter how much digging I did, there was no clue to what was going on. That was when I noticed something.

    When I examined the file immediately after transferring it over (through FileZilla), I noticed that it still said “96f” – even before running dos2unix on it. This meant that somewhere in the transfer it was getting mangled. But why would FileZilla care what the file contents were? Especially this one character? Something still seemed fishy. Just to make sure I was looking at the right file, I threw in a few other characters on either side, so it looked something like this:

    git checkout adasdf-fasdfas

    I sent that across to the server, and by the time it got to the server side it looked like this:

    git checkout adasdf96fasdfas

    What the heck?! Even trying different FTP modes – ASCII, binary – showed the same result. It then occurred to me…

    That particular little snippet of code was copied from a GitHub wiki page. That dash wasn’t actually a dash – it was some other strange Unicode character! I promptly deleted the offending line, rewrote it, and sent it across to the server, and voila – it all worked!
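    For what it’s worth, a quick scan for anything outside plain 7-bit ASCII would have caught this before the file ever left my machine.  Here’s a rough sketch of such a check in C# (the file name is just a placeholder):

    using System;
    using System.IO;

    class AsciiCheck
    {
        static void Main()
        {
            // Placeholder path to the script before it gets transferred anywhere.
            string path = "post-receive.sh";
            string[] lines = File.ReadAllLines(path);

            for (int i = 0; i < lines.Length; i++)
            {
                foreach (char c in lines[i])
                {
                    // Anything above 127 is suspicious in a shell script.
                    // (A typographic dash, for example, shows up as U+2013 instead of U+002D.)
                    if (c > 127)
                        Console.WriteLine("Line {0}: non-ASCII character U+{1:X4}", i + 1, (int)c);
                }
            }
        }
    }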

    I think this still stands out as being one of the most bizarre bugs I have seen in a long time.

     
  • Joel 1:02 PM on June 25, 2014 Permalink | Reply
    Tags: Entity Framework, MVC

    Pylons, the latest MVC, and Entity Framework 

    One of my goals over the summer was to push hard on Pylons and get the iOS (iPhone) client out.  Right now it’s looking like that isn’t going to happen – at least not during the summer.  The reason for this is that there have been some very substantial updates to the underlying frameworks that Pylons uses. 

    Pylons is built on the ASP.NET MVC framework.  Last summer, when I was creating the first version of Pylons, MVC 3 was the latest version.  MVC 4 came out just at the end of the summer.  Between it and MVC 5, there have been some pretty substantial changes.  One of these changes makes it potentially a lot easier to use OAuth authentication.  Currently, Pylons does this through a third-party plugin.  It’s rough, but it works.  Because of the way things work with MVC 5, it doesn’t make sense to keep using our third-party plugin. Ideally, we want everything to be as secure as possible – which means using the latest version of the MVC framework and not relying on third-party plugins.

    Unfortunately, this also means that there are some other big changes that need to happen.  The new membership and authentication pieces of MVC 5 are built using something called Entity Framework.  Entity Framework provides a way of automatically generating all the boilerplate code – the stuff that creates, reads, updates, and deletes things in the database.  I’m currently quite happy with how Pylons does this, but in order to use MVC 5 and the new authentication bits, it makes sense to at least look into using Entity Framework as well.  
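    To give a rough idea of what Entity Framework’s “code first” style looks like, here’s a minimal sketch (the class names are made up for illustration – this isn’t actual Pylons code):

    using System.Data.Entity;   // Entity Framework

    // A plain C# class becomes a table in the database.
    public class WorkItem
    {
        public int Id { get; set; }           // becomes the primary key
        public string Title { get; set; }
        public bool IsComplete { get; set; }
    }

    // The context takes care of creating the database and the CRUD plumbing.
    public class PylonsContext : DbContext
    {
        public DbSet<WorkItem> WorkItems { get; set; }
    }

    // Usage – no hand-written SQL for the basic create/read/update/delete:
    // using (var db = new PylonsContext())
    // {
    //     db.WorkItems.Add(new WorkItem { Title = "Rework authentication" });
    //     db.SaveChanges();
    // }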

    So far it seems to be working out well, but it still means that I’m spending all my time reworking the back-end, rather than adding new clients.  Hopefully there won’t be too many other major changes to the underlying frameworks, and I can get on with putting out a better product. 

     
  • Joel 3:12 PM on June 16, 2014 Permalink | Reply  

    It’s All About the Experience 

    This is something that keeps coming to mind, so I figured it is high time that I wrote about it.

    With a product, it’s all about the entire experience – not just the product.  By making the experience of finding, purchasing, and using a product as positive as possible, a customer is much more likely to A) pay more or B) make a repeat purchase.  This is something that some companies understand, and that others are completely oblivious to.  You’d think that, given companies are in the business of making money, they’d have figured this out by now.  Why do so many struggle with it?  I don’t know.

    Is finding the product a positive experience?

    There’s a lot to “the experience”.  It begins with the customer finding the product.  Was it easy or was it difficult to find?  For some things, this can really go either way.  Take, for example, a restaurant.  There’s definitely a difference between finding a burger joint at the next exit on the highway and finding a hole-in-the-wall restaurant.  In almost every case, you want to make your product as accessible as possible.  

    Here’s a related example.  A few months back I needed to purchase a new bike tire.  I asked a friend (who is into cycling) where I should buy one.  He replied with the name of a local store, and told me to explicitly ask about the cheaper tires they had in the back of the store.  Sure enough, I went to the store and saw what they had displayed.  The only tires they had displayed were quite expensive – at least double what I was willing to pay.  I found an employee, and asked them about the tires they had in the back.  They took me into the back, and showed me the selection they had.  The tire I ended up purchasing was less than half the cost of the ones I had seen earlier.  Had I not asked, I would have seen only the expensive tires and walked out without buying anything – and the store would have lost the sale.  In this case, the experience of finding what I was looking for wasn’t exactly a positive one.  Maybe they were trying to get people to buy the more expensive tires, but either way, I bet they had a number of customers walk back out the door because what they saw was too expensive.  

    Once a customer has found the product, does the product appear appealing?  

    Here’s another example: Another thing I’ve been looking for is a set of tires for my car.  I’ve looked through a lot of online postings, and it’s interesting to note how certain people attempt to sell things.  Some people post blurry pictures of a set of dirty tires taken in a dark corner of a garage.  Even if the tires are in good condition, and it’s a good deal, I’m a lot less likely to give the person a call.  Really, for the 20 minutes of work it would take to haul them outside, hose them off, and take a decent picture, it’d be worth doing so.  If the product looks unappealing, the customer is less likely to buy it.   

    Once the customer makes the purchase, is the experience of opening the packaging a positive one?

    Just the other day, my wife and I had to buy some new light bulbs – not the standard Compact Fluorescent or 60 watt bulbs, but some sort of special halogen bulbs.  One, in particular, was packaged in that evil sort of clear plastic packaging – the kind that, when cut, becomes sharper than the scissors that cut it.  The same kind that is also incredibly difficult to cut, and is sealed the entire way around the package.  (Now maybe they use this kind of packaging because they are so dang expensive, and so that people won’t open the package in the store and steal them, but still!).  I think I counted no fewer than 17 bits of plastic and cardboard left over after hacking the package open.  The experience of bringing the product home and opening it up was not a positive one.  I’m much less inclined to buy such a product again.  (Of course, if I had my own way, we’d be using LED or Compact Fluorescent bulbs, but we rent, so the choice in light fixtures isn’t exactly ours).  

    Does the product provide a positive experience when being used?

    Here’s something that has me a bit baffled lately.  I’m a Windows guy.  I’ve been developing software for various Microsoft platforms for years.  There are things with Windows that annoy the living heck out of me.  They make me want to switch to something else.  I’m sure the alternatives also have problems, but there are some things that Microsoft still hasn’t gotten right.  Take, for example, plugging a USB memory stick or SD card into a laptop.  Even with Windows 8, there’s a little pop-up that comes up indicating that your device is ready to be used.  OK.  Sometimes there’s also an accompanying dialog with a progress bar that pops up.  That’s not necessarily a bad thing – it’s letting me know that the computer has detected a new device, and it’s trying to set it up.  Fine.  But when I go to eject the device, and pull it out, another pop-up comes up indicating that the device can now safely be removed.  This pop-up stays up… even long after the device has been removed.  It’s like “Hey Windows, thanks for letting me know that I can safely remove that SD card… that I removed an hour ago.”  If the device has been removed, why on earth is it necessary that the pop-up stays there?  As a programmer, I understand why.  They are using built-in mechanisms for displaying pop-up (system tray) notifications.  That’s fine.  But the end result is an experience which isn’t positive.  Yes, it’s a completely trivial thing, but it shows that not a lot of thought went into crafting the experience of using the operating system.

    To make a long story short: if your customers aren’t having a good experience – from start to finish – they are a lot less likely to give you money.  

     
  • Joel 10:05 AM on April 10, 2014 Permalink | Reply
    Tags: marching cubes, marching tetrahedrons

    More Thesis related work 

    Here’s a quick snapshot of a 3D scanned Kleenex box, as shown in Maya:

    [Image: 3D scan of a Kleenex box, shown in Maya]

    Comparing this to the previous sample, you can see that the actual surface seems quite a bit smoother.  This is because of how I am generating the geometry.  In my last post, I was basically just dumping raw geometry out because I wanted to see something.  For each leaf node in the tree, I was checking if there was an adjacent leaf node on each side.  If there was no adjacent node, I’d put two triangles to plug/cover that side of the cube.  If there was an adjacent node, I wouldn’t worry about putting anything there.  This did create a watertight (manifold) 3D model, but the results were definitely very… chunky.  Not quite what I was going for.

    From the beginning, the goal has been to create a watertight 3D mesh.  I originally was going to use marching cubes to do this, but it turns out that there are a few problems with that.  Because of the way that marching cubes works, it isn’t necessarily watertight.  There have been a few smart people who have figured out the issues with it, but it is still a pain in the butt.  

    One of the other alternatives is marching tetrahedrons.  It works under the same basic principle as marching cubes, but divides each cube into 6 tetrahedrons.  The look-up table is considerably smaller, so in theory, it should be easier to implement.  There are downsides to it though – it does generate a lot more geometry.  Also, for whatever reason, I have yet to actually get it working properly.  There are still a few cases where the geometry isn’t produced with the correct winding order, meaning that some triangles are facing the wrong way.  It also still doesn’t seem to produce a watertight mesh.  Here’s a comparison of the way things are right now, with marching cubes on the left and marching tetrahedrons on the right:

    [Image: marching cubes (left) vs. marching tetrahedrons (right)]

    The dark areas on the marching tetrahedrons are from triangles with a bad winding order.
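    For what it’s worth, one general way to sanity-check winding order is to compare each triangle’s normal against some estimate of the outward direction – for marching tetrahedrons, a reasonable hint is the direction from the “inside” corners of the tetrahedron toward the “outside” ones – and swap two vertices whenever the normal points the wrong way.  A rough C# sketch of that idea, with a stand-in vector type (not my actual code):

    // Minimal stand-in vector type; any Vector3 (e.g. XNA's) would do.
    struct Vec3
    {
        public float X, Y, Z;
        public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }

        public static Vec3 Subtract(Vec3 a, Vec3 b)
        {
            return new Vec3(a.X - b.X, a.Y - b.Y, a.Z - b.Z);
        }

        public static Vec3 Cross(Vec3 a, Vec3 b)
        {
            return new Vec3(a.Y * b.Z - a.Z * b.Y,
                            a.Z * b.X - a.X * b.Z,
                            a.X * b.Y - a.Y * b.X);
        }

        public static float Dot(Vec3 a, Vec3 b)
        {
            return a.X * b.X + a.Y * b.Y + a.Z * b.Z;
        }
    }

    static class Winding
    {
        // If the triangle's normal points against the outward hint, swap two
        // vertices so the winding (and therefore the facing) gets flipped.
        public static void FixWinding(ref Vec3 a, ref Vec3 b, ref Vec3 c, Vec3 outwardHint)
        {
            Vec3 normal = Vec3.Cross(Vec3.Subtract(b, a), Vec3.Subtract(c, a));
            if (Vec3.Dot(normal, outwardHint) < 0)
            {
                Vec3 tmp = b; b = c; c = tmp;
            }
        }
    }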

     

     

     
  • Joel 10:01 AM on March 17, 2014 Permalink | Reply
    Tags: Kinect, Octree, Optimization, Raspberry Pi, U of L, Voxel

    Thesis Related Thoughts 

    The following post is more for my own benefit.  Sometimes, by describing a problem to someone else, it becomes clearer.

     

    For my thesis, I’m working on a homemade low-cost 3D scanner.  The idea being that you should be able to create a water-tight 3D model that can be 3D printed directly from a 3D scan.  For example, here’s a real rough scan of a Kleenex box I did last week:

     

    [Image: rough 3D scan of a Kleenex box]

     

    Again, this is some very rough looking work.  The scanner isn’t fully calibrated, and I’m just dumping a lot of raw geometry out, just to see what things look like.  This particular scan came from 8 different captures of depth data from a Microsoft Kinect.  The data wasn’t filtered in any way.  Here’s a slightly better calibrated single scan:

    [Image: a slightly better calibrated single scan]

     

    The issue that I’m currently having is performance.  To process a single capture of depth data takes about 8 seconds.  This seems quite slow to me, so last week, I did some digging around to see if I could ramp up the performance.  Before I can talk too much about performance, I’ll need to describe a few of the data structures that I’m using.

     

    At the core of the technique that I’m using, I’m treating small cube-shaped regions of space as being either solid or empty.  I assume that they are empty until I find out otherwise.  I’m currently using a Microsoft Kinect, which is a depth sensing camera, to capture my depth data.  I’ve built a crude wood platform that uses a stepper motor to turn.  I grab a set of depth data from the Kinect, then tell the platform to turn by, say, 45 degrees.  Once the platform has finished turning, I grab another set of depth data.  I do this until I’ve scanned all the sides of the object.  The internal workings are a little bit more complex than that, but that’s the basic idea.  Here’s a picture of what the setup looks like, complete with the Kleenex box I’ve been using to test the system:

    [Image: the scanner setup, with the Kleenex box on the rotating platform]
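    In very rough pseudocode-ish C#, the capture loop boils down to something like this (the helper methods are placeholders, not the actual Kinect or motor code):

    using System.Collections.Generic;

    class ScanLoop
    {
        const int StepDegrees = 45;

        // Placeholder: in reality this pulls one frame from the Kinect's depth stream.
        static short[] GrabDepthFrame() { return new short[640 * 480]; }

        // Placeholder: in reality this drives the stepper motor and waits until the turn finishes.
        static void RotatePlatform(int degrees) { }

        static List<short[]> CaptureAllSides()
        {
            var captures = new List<short[]>();
            for (int angle = 0; angle < 360; angle += StepDegrees)
            {
                captures.Add(GrabDepthFrame());   // one 640x480 depth frame per stop
                RotatePlatform(StepDegrees);
            }
            return captures;                      // 8 views, one every 45 degrees
        }
    }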

    Once I have all the depth data captured, I begin the processing.  This is where the magic happens, and where things get more interesting.

    Internally, I’m using a cool data structure called an octree (technically an octrie, but whatever).  An octree starts with a root node that is basically a giant cube of 3D space.  It then subdivides that cube into 8 smaller, equally sized cubes.  It then takes each of those and divides that into 8 equally sized cubes… and so on, until the cubes hit a minimum size.  The size I’ve been playing with is about 2 mm.  This is about the maximum depth resolution of the Kinect, so there’s probably not too much point in going smaller.

    There’s also no point in allocating memory for regions of the 3D space that are empty.  Because of this, I only start to fill in the tree where I have data that I’ve scanned.  So rather than forcing each node in the tree to always have 8 children, I arranged it so that each node in the tree can have up to 8 children.  In a simple test, this brought the memory consumption down by 30%.
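    A minimal sketch of what such a sparse node might look like (simplified – not my actual code, just the idea):

    // Children are only allocated where scanned data actually exists.
    class OctreeNode
    {
        public const float MinSize = 0.002f;            // ~2 mm leaf size

        public float X, Y, Z;                           // minimum corner of this cube
        public float Size;                              // edge length of this cube
        public OctreeNode[] Children;                   // stays null until a child is needed

        public bool IsLeaf { get { return Size <= MinSize; } }

        public OctreeNode GetOrCreateChild(int index)   // index 0..7 picks an octant
        {
            if (Children == null)
                Children = new OctreeNode[8];           // up to 8 children, allocated lazily

            if (Children[index] == null)
            {
                float half = Size / 2;
                Children[index] = new OctreeNode
                {
                    Size = half,
                    X = X + (((index & 1) != 0) ? half : 0),
                    Y = Y + (((index & 2) != 0) ? half : 0),
                    Z = Z + (((index & 4) != 0) ? half : 0)
                };
            }
            return Children[index];
        }
    }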

     

    Anyway, what I’m doing is this: the depth frames from the Kinect come in as basically a 640×480 image, but rather than a color for each pixel, I’m getting a depth value.  I make the assumption that if two adjacent pixels in the image are valid, and aren’t too different in depth values, then the object being scanned must be solid between those two points.  Extrapolating this, if I have three points that are all similar in depth value, and are adjacent to each other in the image, then I can form a triangle between those three points, and that triangle corresponds to some portion of the surface of the object being scanned.  
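    Sketched out, the depth-image-to-triangles step looks roughly like this (the depth threshold and the projection function are just illustrative):

    using System;
    using System.Collections.Generic;

    static class DepthMesher
    {
        const int Width = 640, Height = 480;
        const short MaxDelta = 40;      // illustrative "similar enough" depth threshold

        // A triangle is just three points; each point is an (x, y, z) array.
        // depthToWorld projects a pixel coordinate plus its depth value into 3D space.
        public static List<float[][]> Triangulate(short[] depth, Func<int, int, short, float[]> depthToWorld)
        {
            var triangles = new List<float[][]>();

            for (int y = 0; y < Height - 1; y++)
            for (int x = 0; x < Width - 1; x++)
            {
                short d00 = depth[y * Width + x];
                short d10 = depth[y * Width + x + 1];
                short d01 = depth[(y + 1) * Width + x];
                short d11 = depth[(y + 1) * Width + x + 1];

                // Only mesh a 2x2 block if all four pixels are valid and close in depth.
                if (Similar(d00, d10) && Similar(d00, d01) && Similar(d10, d11) && Similar(d01, d11))
                {
                    triangles.Add(new[] { depthToWorld(x, y, d00), depthToWorld(x + 1, y, d10), depthToWorld(x, y + 1, d01) });
                    triangles.Add(new[] { depthToWorld(x + 1, y, d10), depthToWorld(x + 1, y + 1, d11), depthToWorld(x, y + 1, d01) });
                }
            }
            return triangles;
        }

        static bool Similar(short a, short b)
        {
            return a > 0 && b > 0 && Math.Abs(a - b) < MaxDelta;
        }
    }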

    So, from a single set of depth data, the end result is a bunch of triangles that fall somewhere on the surface of the object.  Great.  Next, what I do is run those triangles into the octree structure, and figure out which of the leaf nodes in the tree intersect each triangle.  That’s where the octree comes into play.  By starting at the root node, it’s really quite efficient at figuring out which subsequent nodes in the tree intersect the triangle.  When I do find the child nodes that intersect a triangle, I mark them as being solid.  

    (OK, so technically, I don’t mark them as solid.  I assume that if the recursive function I’m using has generated a child node of the minimum size, it must have been the result of an intersection test, and therefore must be solid).
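    In sketch form, the descent looks something like this (building on the node sketch above; Triangle and TriangleIntersectsBox() are stand-ins for my actual triangle type and a standard separating-axis triangle-vs-box test):

    // Push one triangle down the tree, only descending into children whose cube
    // actually intersects it. Reaching a minimum-size node is itself the "solid" marker.
    static void InsertTriangle(OctreeNode node, Triangle tri)
    {
        if (node.IsLeaf)
            return;   // this leaf only exists because the surface touches it

        float half = node.Size / 2;
        for (int i = 0; i < 8; i++)
        {
            // Work out the child's bounds *before* allocating it, so empty space stays empty.
            float cx = node.X + (((i & 1) != 0) ? half : 0);
            float cy = node.Y + (((i & 2) != 0) ? half : 0);
            float cz = node.Z + (((i & 4) != 0) ? half : 0);

            if (TriangleIntersectsBox(tri, cx, cy, cz, half))
                InsertTriangle(node.GetOrCreateChild(i), tri);
        }
    }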

    The end result of this is a tree with a bunch of tiny child nodes that represent regions of 3D space in the scan that are solid.  I make another function call to the tree that gets me all these tiny child nodes in the tree.  I then generate geometry for each child node based on its neighbors.  If it has no neighbor on a given side, I create two triangles to cover that side of the tiny cube.  If there is a neighbor on that side, then I don’t generate any geometry.  (This is really just a huge, crude approximation, just so I can start seeing some results.  Eventually this will be replaced with something like marching cubes).

    The end result is a ton of geometry that shows an approximate surface contour of the object.  It’s crude, but it works.

     

    The problem that I ran into last week was this: performance.  The performance isn’t bad, but I figured there must be some way of speeding things up.  I did a bunch of reading about various voxel engines, and one interesting technique I discovered is this: rather than have each leaf in the tree only store a single solid region, why not have each leaf store a bunch of regions?  By widening and flattening the tree, this reduces a lot of the pointer chasing to get to a given child node.  Apparently heavy pointer chasing is a bad thing, as it results in cache misses.  Cache misses can kill performance (especially on platforms like the Xbox 360).  So, in theory, flattening the tree should help reduce cache misses, and increase performance, right?

     

    Not in my case.  As I mentioned, it was taking about 8 seconds to process a single frame of depth data.  Not great, but not unbearable.  When I ran a test with the flattened tree, performance was a LOT worse.  Like 10x worse.  I was a bit surprised at this, but I think the reason for it makes sense.

     

    When I flattened the tree, I tried making the final child node contain a 4x4x4 chunk of 3D space.  For every triangle that I generate from the depth data, I need to check it against the child nodes to figure out which ones are intersecting.  The triangle-AABB (axis aligned bounding box) test that I’m using seems pretty quick, but despite this, calling it a million times or more will result in a performance hit.  When I was using the regular octree, each child node in the tree took at most about 9 triangle-AABB tests to get to.  So supposing that I had a tiny triangle that only spanned a single leaf node, this would take at least 9 of these tests to figure out exactly which leaf node it was that was being intersected by the triangle.  Comparing this with the flattened tree, it was much, much worse.  Supposing that each leaf node in the flattened tree contained a grid of 4x4x4 children, then it would take at least 4x4x4 (=64) triangle-AABB intersection tests, plus the few extra tests to get to that child node.  So yes, I was saving on cache misses, but I was doing a LOT more triangle-AABB tests.  I still haven’t fully profiled things to figure out if this is the case, but it does seem to be the most likely culprit.  
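    To give a concrete idea of what I mean by a flattened leaf, it’s basically something along these lines (a sketch, not my exact code) – a whole 4x4x4 block of minimum-size cells packed into one allocation:

    // A 'fat' leaf covering a 4x4x4 block of 2 mm cells.
    // One ulong holds all 64 solid/empty flags, so the whole block is a single object.
    class FlatLeaf
    {
        ulong occupancy;                          // one bit per cell

        static int BitIndex(int x, int y, int z)  // x, y, z each in 0..3
        {
            return x + y * 4 + z * 16;
        }

        public void SetSolid(int x, int y, int z)
        {
            occupancy |= 1UL << BitIndex(x, y, z);
        }

        public bool IsSolid(int x, int y, int z)
        {
            return (occupancy & (1UL << BitIndex(x, y, z))) != 0;
        }
    }

    The tradeoff described above falls straight out of this: fewer pointer hops to reach a cell, but any triangle that reaches the leaf has to be tested against many more little boxes.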

     

    To make a long story short, I learned the following:

    • It really doesn’t take long to prototype things.  
    • Prototyping can be worthwhile.  Even if you learned what doesn’t work, you still learned something valuable.
    • Sometimes the performance gain you think you’ll get will be offset by a loss somewhere else.
    • Looking for performance gains can be fun.

     

    Now if you’ll excuse me, I’ve got some triangle-AABB tests to run…

     

     
  • Joel 12:43 PM on January 27, 2014 Permalink | Reply
    Tags: Amazon EC2, HTTPS, I want my eight hours back, IIS 7.5, SSL, This is why I'm not usually a web developer

    Getting an HTTPS website to work on an Amazon EC2 instance… 

    Over the weekend I was attempting to push out a huge update to Pylons.  The new update reworks how authentication is handled, because we want this version to work with the mobile clients we are building.  Because the reworked authentication involves passing some sensitive information around, we wanted to ensure that things were secure by using SSL.  Long story short, when you visit a website that starts with “https”, it’s using a secure layer to (in theory) prevent people from snooping on the traffic between your computer (or phone) and the website.

    A week ago I purchased an SSL certificate, and finally got around to pushing out the update and applying it all on Saturday.  Saturday evening, at around 5:00 pm, I hit a major snag: the HTTPS version of the site wasn’t working.  I went through a ton of different forum posts, blogs, and troubleshooting guides to figure out what was going on.  By the end of the evening, I still wasn’t able to figure it out.  Everything on the server looked fine.  The SSL certificate showed up fine on the server, the IIS bindings were set properly, the firewall was opened to traffic on port 443, but no matter what I did, it didn’t work.  I could view some portions of the site when accessing it through localhost on the server, but as soon as I would try hitting it by the actual URL, nothing would load.  I even stripped the site right down to the most basic ASP.Net page that ships as the default web page for IIS.  No matter what I did, I couldn’t ever seem to get it to work.

    This morning, I decided to take another crack at it.  I spent way too much time doing more digging through forums, re-installing IIS, restarting the server, etc.  Nothing seemed to work.  Everything seemed to look fine on the server, but nothing would work remotely.  Eventually, I thought that perhaps it wasn’t my problem.  Given that the site is currently being hosted on an Amazon instance, could it perhaps be some setting with Amazon that is blocking SSL traffic on port 443? 

    Yes.

    That’s exactly what it was.  I came across this helpful question on StackOverflow.  As it turns out, as soon as I enabled the traffic on port 443, via the EC2 Management Console, everything magically worked!

    I’m writing this up for anyone else who might have the same problem:

    • SSL Certificate is installed on the server
    • Correct bindings are allowing traffic on port 443, and a proper certificate is selected.
    • Server firewall settings allow traffic on port 443
    • You are able to access the most basic site on HTTP but the site times out on HTTPS
    • Viewing any port settings via the console shows that traffic should be coming through on port 443
    • Snooping network traffic with Netshark shows that the server is getting some HTTPS requests, but it seems like no response is being sent

    Try doing the following:

    1. In the Amazon web console (https://console.aws.amazon.com/ec2), click on the Security Groups link on the left
    2. Under the security group that your instance is running in, set up a new Inbound rule to allow HTTPS traffic from any IP.
    3. Set up a new Outbound rule to allow HTTPS traffic to any IP.

    It wasn’t necessary to delete/recreate/restart the instance. As soon as I applied the rules, I tried hitting the https site in my browser on my local machine, and it worked.

    Now to actually get the proper/real update pushed out…

     
  • Joel 10:11 AM on January 10, 2014 Permalink | Reply
    Tags: Moodle, Third normal form for the win, This is why I am not a huge fan of overly complicated systems   

    Fun bug in Moodle: Grader Report not showing grades 

    I can’t remember if I have mentioned this before, so I will, just in case anyone else out there runs across this same issue.

    The University of Lethbridge uses Moodle for most of its labs.  Last semester I was teaching a lab, as well as being the lab coordinator for several labs.  When it came to the end of the semester, the instructor I was working with wanted marks in a spreadsheet.  Easy enough, I figured.  I pulled the grades up on the Grader Report, and copied them into the spreadsheet.  

    It was around this time that I got a message from a student asking about his grade.  For several of his assignments, no mark was showing, despite him having submitted the assignments and despite them being marked.  Sure enough, I went in and checked, and it did appear that he had submitted the assignments, and that they were graded, but when I checked the Grader Report, it would just show a “-”.  

    Why it might do this was a bit beyond me, so I had to go pester the on-campus Moodle team.  They eventually figured it out:  If an assignment is uploaded but not “submitted” (i.e.: the student uploaded the files but never clicked the submit button), and the assignment gets graded before the assignment due date, the mark won’t show in the Grader Report.  

    I can only imagine how complicated the logic must be in Moodle, and how ‘interesting’ the schema must be…

     
  • Joel 5:44 PM on January 8, 2014 Permalink | Reply
    Tags: Perhaps I shouldn't blog when having a cold, YARU

    YARU – Yet Another Random Update 

    Ah, the joys of having a cold.  Or at least a scratchy throat.

    I noticed something strange with some of my WordPress traffic.  There was a decent number of people clicking on a link that looks like it should have been pointing somewhere else.  Someone had left a comment with a link to the creators.xna.com site, but misspelled XNA.  Oops.  And to think that comment has been there for several years.  On a related note, it’s amazing how long I’ve had this blog.

    There are some exciting things happening.  Not much in the way of game development, but plenty with thesis work and with Pylons.  Of course, that being the case, this morning I had a rather good conversation that reminded me that I really do eventually want to make a remake of Trespasser.

     
  • Joel 12:54 PM on December 11, 2013 Permalink | Reply
    Tags: aerogel, are you out of milk, fridge, I should use more interesting hashtags more often, internet connected fridge, internet of things

    Why the “Internet of Things” Isn’t Taking Off 

    There recently seems to be a lot of talk online about the “internet of things”.  The principle behind the “internet of things” is that all the devices in your household are connected to the internet.  The classic example of this is a refrigerator that lets you know when you need to buy more milk.  A lot of people seem to forget that we’ve had this technology around for a good 10-15 years.  This isn’t something new.  So why hasn’t it taken off?

    I’m going to go off on a bit of a tangent here, but stick with me.

    This is a Swiss army knife:

    [Image: a Swiss army knife (image courtesy of Wikipedia)]

    A Swiss army knife is a pretty handy little thing.  As you can see, it’s not just a knife.  It’s a collection of small but usable tools – a knife, pair of scissors, saw, can opener, screwdriver, etc.  It’s the kind of thing that any Boy Scout would be delighted to have.  A Swiss army knife allows a person to carry around a bunch of different tools with them, all in a nice, small form factor.  

    That being said, there are drawbacks.  Like most other things in life, ‘there ain’t no such thing as a free lunch’*.  You aren’t likely to use the saw on a Swiss army knife to cut down a tree (or even prune a tree!), or use the screwdriver to fix your car – unless there’s no alternative available.  As great as it is, it’s no real substitute for the full-sized tool it replaces.  So why do people buy them?  It’s pretty simple: it provides an advantage.  In this case, the advantage is in size and weight.  There’s no way your average Boy Scout is going to carry around a full sized knife, a full sized saw, a set of screwdrivers, a can opener, and a pair of scissors in his pockets.  Yes, it doesn’t work as well as the full-sized tools, but it works well enough.  The advantages it provides are greater than the disadvantages.  As anyone who has ever been stranded on the side of the road can attest – a poor tool is better than no tool at all!

    So how does this tie into the “internet of things”?

    A lot of people seem to have the idea that everything is so much better when it is connected to the internet, like the example above of the fridge that tells you when you are low on milk.  It might be pretty cool (no pun intended) to have your fridge tell you when you are low on milk – but that also means that it has to know what is put into it, and what is taken out.  That could mean a sensor on the milk carton or jug, or it could mean that you have to swipe bar codes of things as you put them into or take them out of the fridge.  Either way, that sounds like it is going to be more of an inconvenience than what people are currently used to.  It could also mean that the milk cartons are more expensive, if they have to include a sensor.

    In order for people to change the way they do things, the new way must offer some sort of advantage over the current way.  That advantage must be greater than the disadvantages.  Let’s face it – in general, people are lazy, and don’t like change.  There are very few people who change things up for the sake of change.  I’ve never heard of anyone waking up in the morning, and deciding to buy a different car (of similar vintage) for the sake of change.  Usually, people switch products because the new product offers something better than the existing product.  Perhaps that advantage is the peace of mind of switching to a more environmentally friendly product.  Maybe the new product is seen as being ‘cool’, and they subconsciously think that they will be held in higher regard if they use it.  Maybe the new product costs less.  Either way, there is almost always some sort of advantage.

    So again, how does this tie into the “internet of things”?  So far, it seems that almost all the internet connected devices offer little advantage over their existing counterparts.  There doesn’t seem to be a lot of thought going into the design and use of these devices, other than “let’s connect it to the internet because we can!”.


     

    So what needs to change before people start buying internet connected appliances?  Simple: prove that the advantages outweigh the disadvantages.  Show that life can somehow be better/easier/happier by using these networked devices.  For example:

    • Your houseplants can now remind you that you need to water them.  (The advantage being that you kill fewer houseplants, the disadvantages being the small cost of a sensor in the plant and the inconvenience of having to replace a small battery once a year).
    • Have your clothes washer or dryer text you when the wash is done, so you don’t forget you left it in there. (The advantage being that you don’t find out at 10:00 pm that your pajamas are still sopping wet, and are sitting in the washing machine.  The disadvantage being the increased cost of the machine, and potential security concerns).  

    I, for one, don’t ever plan on buying an internet connected toaster.  I see absolutely no advantage that such an appliance would have over a regular toaster.  

    To those of you who are considering creating an internet connected device, please ask yourself: Does this device provide some sort of advantage over the current device?  If so, does the advantage outweigh the disadvantage?  Do your (potential and current) customers agree with you?

     

     

    Also, fridge and freezer manufacturers: Why the heck haven’t you invested more in developing cheaper/safer ways of creating aerogels?  An aerogel fridge would be absolutely amazing with regards to efficiency and weight.  Heck, even something twice as dense as an aerogel would make an amazing insulator on a fridge.  I’d much rather spend an extra $500 on a more efficient fridge than one that reminds me when I need milk.

    *Or TANSTAAFL for short.

     
  • Joel 8:22 PM on November 26, 2013 Permalink | Reply
    Tags: Rating Systems, User Content

    User ratings and community content 

    For some reason this morning I felt like doing a bit of writing.  Little did I know, it would be in response to a rather scandalous sounding article over at Ars Technica.  The discussion was partially about how Google has supposedly messed up comments on YouTube.  (Personally, I see them mostly as the cesspool of the internet, so I tend to stay away).  To make a long story short, it seems that a lot of people aren’t too happy with Google’s decision to require a Google+ account to comment (rather than just a YouTube account).  Part of the reason for doing this was to prevent/reduce inappropriate comments.

    This is the overly-long comment that I made regarding the subject:

    As someone who is very interested in launching an online system, I might say I’ve got a vested interest in how online communities work. 

    Obviously, there is potential value in allowing comments (and other types of feedback: upvotes/downvotes) in an online community. It helps the good content float to the surface. Unfortunately, if it isn’t implemented correctly, the poo floats to the surface instead. Some people (recently: Popular Science) have just dropped comments altogether. Maybe it’s just me, but this seems to be a really quick way of killing a community. Instead of a nice feedback loop, you’ve dropped half the party out of the conversation, and are instead stuck with a monologue, where the content creators are the only ones doing the talking, and everyone else is forced to listen. 

    Dropping an online commenting system seems like a really bad idea on two fronts:

    • It no longer allows community feedback, which can really be helpful in developing content that is catered toward the users. This also acts as a mechanism for letting the content creators know about upcoming trends. Ignoring these trends can end up being catastrophic, in some cases.
    • It restricts the social interactions between members of the community. You really can’t create much of a community without those sorts of social interactions. Half of the reason I frequent the sites that I do is to hear the feedback from the people, as often it is as good as, if not better than the original material. (Ars is included here!)

    So, how do you manage an online feedback (comments/upvotes/downvotes) system? Well, there’s a couple of ways of doing that:

    1. Decouple popularity and rating. Just because something is “popular” (meaning that it gets a lot of replies or a lot of votes) doesn’t mean it is good. Rather than trying to use a single value to determine whether or not content should float to the top, it really takes two values. Looking at how Ars does this, it allows comments to be both up-voted and down-voted. The total number of votes would be the popularity, and the ratio of up-votes to down-votes could be the rating. This means that it isn’t just controversial topics that float to the top. It can even be set up in a way that lets the users decide what kind of content they want to see – the highly rated items, or the controversial items. Sites like Reddit get around this by using a proper ranking algorithm, which brings me to point #2…
    2. Use a proper ranking algorithm. Supposing that you are running a service that relies on user content, you obviously want the best content to float to the top. There are different approaches to this, and some work better in certain situations. Take, for example, Amazon. With product ratings, you have the option of giving a product a rating out of 5 stars. The problem with Amazon is that it uses a very simple average. So, supposing that I want to sort products by rating and I’ve got two of them – one with 499 five star ratings and one 1 star rating, and the other with a single five star rating – with Amazon’s system, the one with the single rating gets ranked higher (a perfect 5.0 average beats roughly 4.99). This is an example of a system that doesn’t work so well. The Reddit folks get around this by using the lower bound of the Wilson score interval. Basically, it gives a conservative estimate of how good a comment really is, based on the votes it has received so far. (See: http://amix.dk/blog/post/19588.) A rough sketch of the calculation follows after this list.
    3. Moderators and a clear line in the sand. At one point or another, there needs to be some way of removing offending content. This almost always comes down to having some sort of human deciding what should go, and what should stay. By allowing people to report comments (or other user generated content), this makes it fairly easy to bring offensive content to a moderator’s attention. There also needs to be a strict guideline of what is considered offensive and what isn’t. Apple’s app store was the focus of this a while ago: people were quite upset that certain apps with questionable content were getting through while others were not. Wherever you draw your line in the sand, it needs to be consistent and visible to the users of the community. 
    4. If you don’t have the option of removing offending content using a manual process, at least give the users a way of hiding offensive content, and let them set their own threshold. The particulars on this depend on how you have implemented your rating system. For example, if you are using just up-votes/down-votes, you could set a threshold such that anything receiving more than 20 down-votes and having a negative overall rating (up-votes minus down-votes) doesn’t get shown by default.
    5. If possible, use the community to guide itself. The majority of sites that I go to tend to be self-regulating. In that sense, the community polices itself. I’m more than happy to report a post as being spam if I know that something will eventually be done about it. 
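    Since I brought it up in point #2, here’s roughly what the Wilson score lower bound calculation looks like, straight from the formula, for a 95% confidence level:

    using System;

    static class Ranking
    {
        // Lower bound of the Wilson score confidence interval.
        // positive = up-votes, total = up-votes + down-votes. Higher is better.
        public static double WilsonLowerBound(int positive, int total)
        {
            if (total == 0) return 0;

            const double z = 1.96;                    // ~95% confidence
            double phat = (double)positive / total;

            return (phat + z * z / (2 * total)
                    - z * Math.Sqrt((phat * (1 - phat) + z * z / (4 * total)) / total))
                   / (1 + z * z / total);
        }
    }

    Treating the earlier Amazon example as up/down votes: 499 positive and 1 negative scores about 0.989, while a single positive vote scores only about 0.21, so the well-established item correctly ranks above the one lucky vote.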

    Anyway, that’s more than enough rambling for now. As mentioned by another commenter, with a ship the size of Google, it is hard to change directions quickly. That being said, you’d think someone that large would also have a pretty good idea of how to do things.

    Just my $0.02.

     