Updates from Joel

  • Joel 10:05 AM on April 10, 2014 Permalink | Reply
    Tags: marching cubes, marching tetrahedrons

    More Thesis related work 

    Here’s a quick snapshot of a 3D scanned Kleenex box, as shown in Maya:

    [Image: the 3D scanned Kleenex box, shown in Maya]

    Comparing this to the previous sample, you can see that the actual surface seems quite a bit smoother.  This is because of how I am generating the geometry.  In my last post, I was basically just dumping raw geometry out because I wanted to see something.  For each leaf node in the tree, I was checking whether there was an adjacent leaf node on each side.  If there was no adjacent node, I’d put two triangles there to plug/cover that side of the cube.  If there was an adjacent node, I wouldn’t put anything there.  This did create a watertight (manifold) 3D model, but the results were definitely very… chunky.  Not quite what I was going for.

    From the beginning, the goal has been to create a watertight 3D mesh.  I originally planned to use marching cubes to do this, but it turns out there are a few problems with that.  Because of the way that marching cubes works (some of its cases are ambiguous), the mesh it produces isn’t necessarily watertight.  A few smart people have figured out ways around the issues, but it is still a pain in the butt.

    One of the alternatives is marching tetrahedrons.  It works on the same basic principle as marching cubes, but divides each cube into 6 tetrahedrons.  The look-up table is considerably smaller, so in theory, it should be easier to implement.  There are downsides to it, though – it generates a lot more geometry.  Also, for whatever reason, I have yet to get it working properly.  There are still a few cases where the geometry isn’t produced with the correct winding order, meaning that some triangles are facing the wrong way.  It also still doesn’t seem to produce a watertight mesh.  Here’s a comparison of the way things are right now, with marching cubes on the left and marching tetrahedrons on the right:

    [Image: marching cubes output (left) vs. marching tetrahedrons output (right)]

    The dark areas on the marching tetrahedrons are from triangles with a bad winding order.
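    As an aside, here’s a rough sketch of the cube-to-tetrahedra split I’m describing (Python for brevity – it isn’t my actual implementation).  The corner numbering and the particular six-tetrahedra table are just one common choice; keeping the corner order consistent in a table like this is exactly what determines the winding of the triangles you eventually emit, which is where I suspect my bug is hiding:

        # Corners of a unit cube, numbered 0..7 (bottom face first, then top face).
        CUBE_CORNERS = [
            (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
            (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
        ]

        # One common way to split a cube into 6 tetrahedra: each tetrahedron follows a
        # monotone path of edges from corner 0 to the opposite corner 6.
        CUBE_TO_TETS = [
            (0, 1, 2, 6),
            (0, 1, 5, 6),
            (0, 3, 2, 6),
            (0, 3, 7, 6),
            (0, 4, 5, 6),
            (0, 4, 7, 6),
        ]

        def tetrahedra_for_cube(origin, size):
            """Return the 6 tetrahedra (lists of four xyz points) for one cube."""
            ox, oy, oz = origin
            corners = [(ox + cx * size, oy + cy * size, oz + cz * size)
                       for (cx, cy, cz) in CUBE_CORNERS]
            return [[corners[i] for i in tet] for tet in CUBE_TO_TETS]

        # Marching tetrahedrons then classifies each tetrahedron's 4 corners as
        # inside/outside and looks up which triangle(s) to emit -- only 16 cases,
        # versus 256 for marching cubes.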

  • Joel 10:01 AM on March 17, 2014 Permalink | Reply
    Tags: Kinect, Octree, Optimization, Raspberry Pi, U of L, Voxel

    Thesis Related Thoughts 

    The following post is more for my own benefit.  Sometimes a problem becomes clearer once you describe it to someone else.

     

    For my thesis, I’m working on a homemade low-cost 3D scanner.  The idea is that you should be able to go straight from a 3D scan to a watertight 3D model that can be 3D printed.  For example, here’s a really rough scan of a Kleenex box I did last week:

     

    [Image: rough 3D scan of a Kleenex box]

     

    Again, this is some very rough-looking work.  The scanner isn’t fully calibrated, and I’m just dumping a lot of raw geometry out, just to see what things look like.  This particular scan came from 8 different captures of depth data from a Microsoft Kinect.  The data wasn’t filtered in any way.  Here’s a slightly better calibrated single scan:

    [Image: a slightly better calibrated single scan]

     

    The issue I’m currently having is performance.  Processing a single capture of depth data takes about 8 seconds.  This seems quite slow to me, so last week I did some digging around to see if I could ramp up the performance.  Before I can talk too much about performance, I’ll need to describe a few of the data structures that I’m using.

     

    At the core of the technique I’m using, I treat small cubic regions of space as being either solid or empty.  I assume they are empty until I find out otherwise.  I’m currently using a Microsoft Kinect, which is a depth-sensing camera, to capture my depth data.  I’ve built a crude wooden platform that uses a stepper motor to turn.  I grab a set of depth data from the Kinect, then tell the platform to turn by, say, 45 degrees.  Once the platform has finished turning, I grab another set of depth data.  I do this until I’ve scanned all the sides of the object.  The internal workings are a little more complex than that, but that’s the basic idea.  Here’s a picture of what the setup looks like, complete with the Kleenex box I’ve been using to test the system:

    [Image: the scanner setup – rotating platform, Kinect, and the Kleenex box used for testing]
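    The capture loop itself is about as simple as it sounds.  Here’s a rough sketch (Python, not the real code); grab_depth_frame and rotate_platform are stand-ins for the Kinect read and the stepper motor command:

        def capture_all_sides(grab_depth_frame, rotate_platform, step_degrees=45):
            """Grab one depth frame per platform position, all the way around."""
            captures = []
            for angle in range(0, 360, step_degrees):
                frame = grab_depth_frame()        # 640x480 array of depth values (mm)
                captures.append((angle, frame))   # remember which angle this came from
                rotate_platform(step_degrees)     # turn the platform, then wait for it
            return captures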

    Once I have all the depth data captured, I begin the processing.  This is where the magic happens, and where things get more interesting.

    Internally, I’m using a cool data structure called an octree (technically an octrie, but whatever).  An octree starts with a root node that is basically a giant cube of 3D space.  It subdivides that cube into 8 smaller, equally sized cubes, then divides each of those into 8 equally sized cubes… and so on, until the cubes hit a minimum size.  The size I’ve been playing with is about 2 mm.  That’s roughly the depth resolution of the Kinect, so there’s probably not much point in going smaller.

    There’s also no point in allocating memory for regions of the 3D space that are empty.  Because of this, I only start to fill in the tree where I have data that I’ve scanned.  So rather than forcing each node in the tree to always have 8 children, I arranged it so that each node in the tree can have up to 8 children.  In a simple test, this brought the memory consumption down by 30%.
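    For what it’s worth, here’s a minimal sketch of the kind of node I’m describing (Python just for readability – the real thing isn’t Python).  Children are only allocated when a region is actually subdivided, which is where the memory savings come from:

        MIN_NODE_SIZE = 2.0  # mm, roughly the depth resolution of the Kinect

        class OctreeNode:
            def __init__(self, origin, size):
                self.origin = origin   # (x, y, z) of the node's minimum corner
                self.size = size       # edge length of this cube
                self.children = {}     # octant index (0..7) -> OctreeNode, allocated lazily
                self.solid = False     # only meaningful for leaf-sized nodes

            def is_leaf_sized(self):
                return self.size <= MIN_NODE_SIZE

            def child(self, octant):
                """Get (or lazily create) the child cube for one of the 8 octants."""
                if octant not in self.children:
                    half = self.size / 2.0
                    ox, oy, oz = self.origin
                    corner = (ox + half * (octant & 1),
                              oy + half * ((octant >> 1) & 1),
                              oz + half * ((octant >> 2) & 1))
                    self.children[octant] = OctreeNode(corner, half)
                return self.children[octant]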

     

    Anyway, what I’m doing is this: the depth frames from the Kinect come in as basically a 640×480 image, but rather than a color for each pixel, I get a depth value.  I make the assumption that if two adjacent pixels in the image are valid, and aren’t too different in depth value, then the object being scanned must be solid between those two points.  Extrapolating this, if I have three points that are all similar in depth value, and are adjacent to each other in the image, then I can form a triangle between those three points, and that triangle corresponds to some portion of the surface of the object being scanned.
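    In sketch form, the triangle generation looks something like this (assuming the depth frame is just a 2D array of millimetre values, with 0 meaning “no reading”; the 30 mm threshold and the to_world camera-maths helper are made up for the example):

        MAX_DEPTH_JUMP = 30.0  # mm; neighbouring pixels further apart than this aren't joined

        def depth_frame_to_triangles(depth, to_world):
            """Turn a 2D depth image into triangles lying on the scanned surface.

            depth[v][u] is a depth in mm (0 = invalid), and to_world(u, v, d)
            converts a pixel plus its depth into an (x, y, z) point.
            """
            height, width = len(depth), len(depth[0])
            triangles = []
            for v in range(height - 1):
                for u in range(width - 1):
                    # the four pixels of one 2x2 quad
                    quad = [depth[v][u], depth[v][u + 1],
                            depth[v + 1][u], depth[v + 1][u + 1]]
                    if any(d == 0 for d in quad):
                        continue  # at least one pixel had no reading
                    if max(quad) - min(quad) > MAX_DEPTH_JUMP:
                        continue  # probably a silhouette edge, not a solid surface
                    p00 = to_world(u,     v,     quad[0])
                    p10 = to_world(u + 1, v,     quad[1])
                    p01 = to_world(u,     v + 1, quad[2])
                    p11 = to_world(u + 1, v + 1, quad[3])
                    triangles.append((p00, p10, p11))  # two triangles per quad
                    triangles.append((p00, p11, p01))
            return triangles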

    So, from a single set of depth data, the end result is a bunch of triangles that fall somewhere on the surface of the object.  Great.  Next, I run those triangles through the octree structure and figure out which of the leaf nodes in the tree intersect each triangle.  That’s where the octree comes into play: by starting at the root node, it’s really quite efficient at figuring out which subsequent nodes in the tree intersect the triangle.  When I do find the child nodes that intersect a triangle, I mark them as being solid.

    (OK, so technically, I don’t mark them as solid.  If the recursive function I’m using has generated a child node of the minimum size, I assume it must have been the result of an intersection test, and therefore must be solid.)
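    Building on the node sketch above, the insertion step looks roughly like the following.  (I’ve used a cheap, conservative box-vs-box test in place of a real triangle-AABB test, just to keep the sketch short – the real thing is a proper triangle-AABB intersection test.)

        def mark_solid_leaves(node, triangle):
            """Recursively create/visit the leaf-sized nodes a triangle touches."""
            if node.is_leaf_sized():
                node.solid = True   # the leaf only exists because something intersected it
                return
            tri_box = triangle_bounds(triangle)
            half = node.size / 2.0
            ox, oy, oz = node.origin
            for octant in range(8):
                corner = (ox + half * (octant & 1),
                          oy + half * ((octant >> 1) & 1),
                          oz + half * ((octant >> 2) & 1))
                child_box = (corner, (corner[0] + half, corner[1] + half, corner[2] + half))
                # Conservative stand-in for the triangle-AABB test:
                if boxes_overlap(child_box, tri_box):
                    mark_solid_leaves(node.child(octant), triangle)  # child allocated only on a hit

        def triangle_bounds(tri):
            xs, ys, zs = zip(*tri)
            return ((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))

        def boxes_overlap(a, b):
            (amin, amax), (bmin, bmax) = a, b
            return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))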

    The end result of this is a tree with a bunch of tiny child nodes that represent regions of 3D space in the scan that are solid.  I make another function call to the tree that gets me all of these tiny child nodes.  I then generate geometry for each child node based on its neighbors.  If it has no neighbor on a given side, I create two triangles to cover that side of the tiny cube.  If there is a neighbor on that side, then I don’t generate any geometry.  (This is really just a huge, crude approximation, just so I can start seeing some results.  Eventually this will be replaced with something like marching cubes.)
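    In sketch form, that crude pass is: collect the solid leaves into a set keyed by their grid position, then emit two triangles for every face that has no solid neighbour.  (Python again, and the set-of-grid-cells representation is just for the example; the faces below are wound counter-clockwise as seen from outside the cube.)

        # The 6 face directions of a cube and, for each, the 4 corners of that face
        # (as offsets in units of the leaf size).
        FACES = {
            (-1, 0, 0): [(0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 1, 0)],
            ( 1, 0, 0): [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1)],
            ( 0,-1, 0): [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)],
            ( 0, 1, 0): [(0, 1, 0), (0, 1, 1), (1, 1, 1), (1, 1, 0)],
            ( 0, 0,-1): [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 0, 0)],
            ( 0, 0, 1): [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
        }

        def solid_cells_to_triangles(solid_cells, leaf_size):
            """solid_cells is a set of (i, j, k) grid coordinates of solid leaf nodes."""
            triangles = []
            for (i, j, k) in solid_cells:
                for (dx, dy, dz), corners in FACES.items():
                    if (i + dx, j + dy, k + dz) in solid_cells:
                        continue  # a solid neighbour covers this face, so skip it
                    pts = [((i + cx) * leaf_size, (j + cy) * leaf_size, (k + cz) * leaf_size)
                           for (cx, cy, cz) in corners]
                    triangles.append((pts[0], pts[1], pts[2]))  # two triangles per exposed face
                    triangles.append((pts[0], pts[2], pts[3]))
            return triangles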

    The end result is a ton of geometry that shows an approximate surface contour of the object.  It’s crude, but it works.

     

    The problem that I ran into last week was this: performance.  The performance isn’t bad, but I figured there must be some way of speeding things up.  I did a bunch of reading about various voxel engines, and one interesting technique I discovered is this: rather than have each leaf in the tree only store a single solid region, why not have each leaf store a bunch of regions?  By widening and flattening the tree, this reduces a lot of the pointer chasing to get to a given child node.  Apparently heavy pointer chasing is a bad thing, as it results in cache misses.  Cache misses can kill performance (especially on platforms like the Xbox 360).  So, in theory, flattening the tree should help reduce cache misses, and increase performance, right?

     

    Not in my case.  As I mentioned, it was taking about 8 seconds to process a single frame of depth data.  Not great, but not unbearable.  When I ran a test with the flattened tree, performance was a LOT worse.  Like 10x worse.  I was a bit surprised at this, but I think the reason for it makes sense.

     

    When I flattened the tree, I tried making the final child node contain a 4×4×4 chunk of 3D space.  For every triangle that I generate from the depth data, I need to check it against the child nodes to figure out which ones are intersecting.  The triangle-AABB (axis-aligned bounding box) test that I’m using seems pretty quick, but despite this, calling it a million times or more results in a performance hit.  When I was using the regular octree, each child node in the tree took at most about 9 triangle-AABB tests to get to.  So supposing I had a tiny triangle that only spanned a single leaf node, it would take around 9 of these tests to figure out exactly which leaf node was being intersected by the triangle.  The flattened tree was much, much worse.  Supposing each leaf node in the flattened tree contained a grid of 4×4×4 children, it would take at least 4×4×4 (= 64) triangle-AABB intersection tests, plus the few extra tests to get to that leaf node in the first place.  So yes, I was saving on cache misses, but I was doing a LOT more triangle-AABB tests.  I still haven’t fully profiled things to confirm this, but it does seem to be the most likely culprit.

     

    To make a long story short, I learned the following:

    • It really doesn’t take long to prototype things.  
    • Prototyping can be worthwhile.  Even if you only learn what doesn’t work, you’ve still learned something valuable.
    • Sometimes the performance gain you think you’ll get will be offset by a loss somewhere else.
    • Looking for performance gains can be fun.

     

    Now if you’ll excuse me, I’ve got some triangle-AABB tests to run…

  • Joel 12:43 PM on January 27, 2014 Permalink | Reply
    Tags: Amazon EC2, HTTPS, I want my eight hours back, IIS 7.5, SSL, This is why I'm not usually a web developer

    Getting an HTTPS website to work on an Amazon EC2 instance… 

    Over the weekend I was attempting to push out a huge update to Pylons.  The new update reworks how authentication works, because we want this version to support the mobile clients we are working on.  Because the reworked authentication involves passing some sensitive information around, we wanted to make sure things were secure by using SSL.  Long story short, when you visit a website that starts with “https”, it’s using a secure layer to (in theory) prevent people from snooping on the traffic between your computer (or phone) and the website.

    A week ago I purchased an SSL certificate, and finally got around to pushing out the update and applying it all on Saturday.  Saturday evening, at around 5:00 pm, I hit a major snag: the HTTPS version of the site wasn’t working.  I went through a ton of different forum posts, blogs, and troubleshooting guides to figure out what was going on.  By the end of the evening, I still wasn’t able to figure it out.  Everything on the server looked fine.  The SSL certificate showed up fine on the server, the IIS bindings were set properly, the firewall was opened to traffic on port 443, but no matter what I did, it didn’t work.  I could view some portions of the site when accessing it through localhost on the server, but as soon as I would try hitting it by the actual URL, nothing would load.  I even stripped the site right down to the most basic ASP.Net page that ships as the default web page for IIS.  No matter what I did, I couldn’t ever seem to get it to work.

    This morning, I decided to take another crack at it.  I spent way too much time doing more digging through forums, re-installing IIS, restarting the server, etc.  Nothing seemed to work.  Everything seemed to look fine on the server, but nothing would work remotely.  Eventually, I thought that perhaps it wasn’t my problem.  Given that the site is currently being hosted on an Amazon instance, could it perhaps be some setting with Amazon that is blocking SSL traffic on port 443? 

    Yes.

    That’s exactly what it was.  I came across this helpful question on StackOverflow.  As it turns out, as soon as I enabled the traffic on port 443, via the EC2 Management Console, everything magically worked!

    I’m writing this up for anyone else who might have the same problem:

    • SSL Certificate is installed on the server
    • Correct bindings are allowing traffic on port 443, and a proper certificate is selected.
    • Server firewall settings allow traffic on port 443
    • You are able to access the most basic site on HTTP but the site times out on HTTPS
    • Viewing any port settings via the console shows that traffic should be coming through on port 443
    • Snooping network traffic with Netshark shows that the server is getting some HTTPS requests, but it seems like no response is being sent

    Try doing the following:

    1. In the Amazon web console (https://console.aws.amazon.com/ec2), click on the Security Groups link on the left
    2. Under the security group that your instance is running in, add a new Inbound rule to allow HTTPS traffic from any IP.
    3. Set up a new Outbound rule to allow HTTPS traffic to any IP.

    It wasn’t necessary to delete/recreate/restart the instance. As soon as I applied the rules, I tried hitting the https site in my browser on my local machine, and it worked.
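    For the record, the same inbound rule can be added programmatically.  Here’s a rough Python sketch using the boto library (the region and group name are placeholders, your AWS credentials need to be configured, and the exact call should be double-checked against the boto docs):

        import boto.ec2

        # Open inbound HTTPS (TCP port 443) from anywhere on an existing security group.
        conn = boto.ec2.connect_to_region('us-east-1')
        conn.authorize_security_group(group_name='my-web-server-group',
                                      ip_protocol='tcp',
                                      from_port=443,
                                      to_port=443,
                                      cidr_ip='0.0.0.0/0')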

    Now to actually getting the proper/real update pushed out…

     
  • Joel 10:11 AM on January 10, 2014 Permalink | Reply
    Tags: Moodle, Third normal form for the win, This is why I am not a huge fan of overly complicated systems   

    Fun bug in Moodle: Grader Report not showing grades 

    I can’t remember if I have mentioned this before, so I will, just in case anyone else out there runs across this same issue.

    The University of Lethbridge uses Moodle for most of its labs.  Last semester I was teaching a lab, as well as acting as the lab coordinator for several others.  When it came to the end of the semester, the instructor I was working with wanted the marks in a spreadsheet.  Easy enough, I figured.  I pulled the grades up on the Grader Report and copied them into the spreadsheet.

    It was around this time that I got a message from a student asking about his grade.  For several of his assignments, no mark was showing, despite him having submitted them and despite them having been marked.  Sure enough, I went in and checked: it did appear that he had submitted the assignments and that they were graded, but the Grader Report would just show a “-”.

    Why it might do this was a bit beyond me, so I had to go pester the on-campus Moodle team.  They eventually figured it out: if an assignment is uploaded but not “submitted” (i.e. the student uploaded the files but never clicked the submit button), and the assignment gets graded before the assignment due date, the mark won’t show in the Grader Report.

    I can only imagine how complicated the logic must be in Moodle, and how ‘interesting’ the schema must be…

     
  • Joel 5:44 PM on January 8, 2014 Permalink | Reply
    Tags: Perhaps I shouldn't blog when having a cold, YARU

    YARU – Yet Another Random Update 

    Ah, the joys of having a cold.  Or at least a scratchy throat.

    I noticed something strange with some of my WordPress traffic.  There was a decent number of people clicking on a link that looked like it should have been pointing somewhere else.  Someone had left a comment with a link to the creators.xna.com site, but misspelled XNA.  Oops.  And to think that comment has been there for several years.  On a related note, it’s amazing how long I’ve had this blog.

    There are some exciting things happening.  Not much in the way of game development, but with thesis work and with Pylons.  Of course, that being the case, this morning I had a rather good conversation that reminded me that I really do eventually want to make a remake of Trespasser.

     
  • Joel 12:54 PM on December 11, 2013 Permalink | Reply
    Tags: aerogel, are you out of milk, fridge, I should use more interesting hashtags more often, internet connected fridge, internet of things

    Why the “Internet of Things” Isn’t Taking off 

    There recently seems to be a lot of talk online about the “internet of things”.  The principle behind the “internet of things” is that all the devices in your household are connected to the internet.  The classic example of this is a refrigerator that lets you know when you need to buy more milk.  A lot of people seem to forget that we’ve had this technology around for a good 10-15 years.  This isn’t something new.  So why hasn’t it taken off?

    I’m going to go off on a bit of a tangent here, but stick with me.

    This is a Swiss army knife:

    [Image: a Swiss army knife]

    (Image courtesy of Wikipedia)

    A Swiss army knife is a pretty handy little thing.  As you can see, it’s not just a knife.  It’s a collection of small but usable tools – a knife, a pair of scissors, a saw, a can opener, a screwdriver, etc.  It’s the kind of thing that any Boy Scout would be delighted to have.  A Swiss army knife allows a person to carry around a bunch of different tools, all in a nice, small form factor.

    That being said, there are drawbacks.  Like most other things in life, ‘there ain’t no such thing as a free lunch’*.  You aren’t likely to use the saw on a Swiss army knife to cut down a tree (or even prune one!), or use the screwdriver to fix your car – unless there’s no alternative available.  As great as it is, it’s no real substitute for the full-sized tools it replaces.  So why do people buy them?  It’s pretty simple: it provides an advantage.  In this case, the advantage is in size and weight.  There’s no way your average Boy Scout is going to carry around a full-sized knife, a full-sized saw, a set of screwdrivers, a can opener, and a pair of scissors in his pockets.  Yes, it doesn’t work as well as the full-sized tools, but it works well enough.  The advantages it provides are greater than the disadvantages.  As anyone who has ever been stranded on the side of the road can attest – a poor tool is better than no tool at all!

    So how does this tie into the “internet of things”?

    A lot of people seem to have the idea that everything is so much better when it is connected to the internet, like the example above of the fridge that tells you when you are low on milk.  It might be pretty cool (no pun intended) to have your fridge tell you when you are low on milk – but that also means it has to know what is put into it and what is taken out.  That could mean a sensor on the milk carton or jug, or it could mean that you have to scan bar codes as you put things into or take them out of the fridge.  Either way, that sounds like more of an inconvenience than what people are currently used to.  It could also mean that the milk cartons are more expensive, if they have to include a sensor.

    In order for people to change the way they do things, the new way must offer some sort of advantage over the current way, and that advantage must be greater than the disadvantages.  Let’s face it – in general, people are lazy and don’t like change.  There are very few people who change things up just for the sake of change.  I’ve never heard of anyone waking up in the morning and deciding to buy a different car (of similar vintage) for the sake of change.  Usually, people switch products because the new product offers something better than the existing product.  Perhaps that advantage is the peace of mind of switching to a more environmentally friendly product.  Maybe the new product is seen as being ‘cool’, and they subconsciously think that they will be held in higher regard if they use it.  Maybe the new product costs less.  Either way, there is almost always some sort of advantage.

    So again, how does this tie into the “internet of things”?  So far, it seems that almost all the internet connected devices offer little advantage over their existing counterparts.  There doesn’t seem to be a lot of thought going into the design and use of these devices, other than “let’s connect it to the internet because we can!”.

    [Image]

     

    So what needs to change before people start buying internet connected appliances?  Simple: prove that the advantages outweigh the disadvantages.  Show that life can somehow be better/easier/happier by using these networked devices.  For example:

    • Your houseplants can now remind you that you need to water them.  (The advantage being that you kill fewer houseplants; the disadvantages being the cost of a sensor in the plant and the inconvenience of having to replace a small battery once a year.)
    • Have your clothes washer or dryer text you when the wash is done, so you don’t forget you left it in there.  (The advantage being that you don’t find out at 10:00 pm that your pajamas are still sopping wet and sitting in the washing machine.  The disadvantages being the increased cost of the machine and potential security concerns.)

    I, for one, don’t ever plan on buying an internet connected toaster.  I see absolutely no advantage that such an appliance would have over a regular toaster.  

    To those of you who are considering creating an internet connected device, please ask yourself: Does this device provide some sort of advantage over the current device?  If so, does the advantage outweigh the disadvantages?  Do your (potential and current) customers agree with you?


    Also, fridge and freezer manufacturers: Why the heck haven’t you invested more in developing cheaper/safer ways of creating aerogels?  An aerogel fridge would be absolutely amazing with regards to efficiency and weight.  Heck, even something twice as dense as an aerogel would make an amazing insulator on a fridge.  I’d much rather spend an extra $500 on a more efficient fridge than one that reminds me when I need milk.

    *Or TANSTAAFL for short.

     
  • Joel 8:22 PM on November 26, 2013 Permalink | Reply
    Tags: Rating Systems, User Content

    User ratings and community content 

    For some reason this morning I felt like doing a bit of writing.  Little did I know, it would be in response to a rather scandalous sounding article over at Ars Technica.  The discussion was partially about how Google has supposedly messed up comments on YouTube.  (Personally, I see them mostly as the cesspool of the internet, so I tend to stay away).  To make a long story short, it seems that a lot of people aren’t too happy with Google’s decision to require a Google+ account to comment (rather than just a YouTube account).  Part of the reason for doing this was to prevent/reduce inappropriate comments.

    This is the overly-long comment that I made regarding the subject:

    As someone who is very interested in launching an online system, I might say I’ve got a vested interest in how online communities work.

    Obviously, there is potential value in allowing comments (and other types of feedback: upvotes/downvotes) in an online community. It helps the good content float to the surface. Unfortunately, if it isn’t implemented correctly, the poo floats to the surface instead. Some people (recently: Popular Science) have just dropped comments altogether. Maybe it’s just me, but this seems like a really quick way of killing a community. Instead of a nice feedback loop, you’ve dropped half of the parties out of the conversation, and are instead stuck with a monologue, where the content creators are the only ones doing the talking and everyone else is forced to listen.

    Dropping an online commenting system seems like a really bad idea on two fronts:

    • It no longer allows community feedback, which can really be helpful in developing content that is catered toward the users. This also acts as a mechanism for letting the content creators know about upcoming trends. Ignoring these trends can end up being catastrophic, in some cases.
    • It restricts the social interactions between members of the community. You really can’t create much of a community without those sorts of social interactions. Half of the reason I frequent the sites that I do is to hear the feedback from the people, as often it is as good as, if not better than the original material. (Ars is included here!)

    So, how do you manage an online feedback (comments/upvotes/downvotes) system? Well, there are a couple of ways of doing that:

    1. Decouple popularity and rating. Just because something is “popular” (meaning that it gets a lot of replies or a lot of votes) doesn’t mean it is good. Rather than trying to use a single value to determine whether or not content should float to the top, it really takes two values. Looking at how Ars does this, it allows comments to be both up-voted and down-voted. The total number of votes would be the popularity, and the ratio of up-votes to down-votes could be the rating. This means that it isn’t just controversial topics that float to the top. It can even be set up in a way that lets the users decide what kind of content they want to see – the highly rated items, or the controversial items. Sites like Reddit get around this by using a proper ranking algorithm, which brings me to point #2…
    2. Use a proper ranking algorithm. Supposing that you are running a service that relies on user content, you obviously want the best content to float to the top. There are different approaches to this, and some work better in certain situations than others. Take, for example, Amazon. With product ratings, you have the option of giving a product a rating out of 5 stars. The problem with Amazon is that it uses a very simple average. So, supposing that I want to sort products by rating, and I’ve got two of them: one with 499 five-star ratings and one one-star rating, and the other with a single five-star rating. With Amazon’s system, the one with the single rating gets ranked higher. This is an example of a system that doesn’t work so well. The Reddit folks get around this by ranking comments using the lower bound of the Wilson score interval. Basically, it estimates the probability that a piece of content is good, and it behaves sensibly even with only a few dozen ratings. (See: http://amix.dk/blog/post/19588; a rough sketch of the calculation follows this list.)
    3. Moderators and a clear line in the sand. At one point or another, there needs to be some way of removing offending content. This almost always comes down to having some sort of human deciding what should go, and what should stay. By allowing people to report comments (or other user generated content), this makes it fairly easy to bring offensive content to a moderator’s attention. There also needs to be a strict guideline of what is considered offensive and what isn’t. Apple’s app store was the focus of this a while ago: people were quite upset that certain apps with questionable content were getting through while others were not. Wherever you draw your line in the sand, it needs to be consistent and visible to the users of the community. 
    4. If you don’t have the option of removing offending content using a manual process, at least give the users a way of hiding offensive content, and let them set their own threshold. The particulars of this depend on how you have implemented your rating system. For example, if you are using just up-votes/down-votes, you could set a threshold such that anything receiving more than 20 down-votes and having a negative overall rating (up-votes minus down-votes) doesn’t get shown by default.
    5. If possible, use the community to guide itself. The majority of sites that I go to tend to be self-regulating. In that sense, the community polices itself. I’m more than happy to report a post as being spam if I know that something will eventually be done about it. 
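    A rough sketch of that lower-bound calculation (Python; not Reddit’s actual code, just the standard Wilson score formula, with z = 1.96 corresponding to 95% confidence):

        from math import sqrt

        def wilson_lower_bound(positive, negative, z=1.96):
            """Lower bound of the Wilson score interval for the fraction of positive ratings."""
            n = positive + negative
            if n == 0:
                return 0.0
            phat = positive / float(n)  # observed fraction of positive ratings
            return ((phat + z * z / (2 * n)
                     - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n))
                    / (1 + z * z / n))

        # Treating the Amazon example as thumbs up/down: 499 positives and 1 negative
        # scores about 0.989, while a single positive vote scores about 0.207 -- so the
        # well-established item sorts higher, unlike with a plain average.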

    Anyway, that’s more than enough rambling for now. As mentioned by another commenter, with a ship the size of Google, it is hard to change directions quickly. That being said, you’d think someone that large would also have a pretty good idea of how to do things.

    Just my $0.02.

     
  • Joel 11:18 AM on November 12, 2013 Permalink | Reply
    Tags: I totally love what the Gaslamp Games guys do with hashtags, This is why we need clear error messages

    Troubles deploying to Windows Phone 8 device… 

    So I got myself a Windows Phone 8 device, and have been playing around with a bit of development on it.  Things were working fine, and I was able to run the app on the device without any sort of issues.  Out of the blue, when trying to deploy the app once more to test another feature, I got the following error message:

    The application could not be launched for debugging. Ensure that the target device screen is unlocked and that the application is installed.

    So, what was causing this?  I have no idea.  The device was unlocked and ready to receive the deployment.  I hadn’t messed with any of the language settings on the phone or computer/SDK that was doing the deployment.  (Apparently having different language settings between the SDK and the device that the app is being deployed to can cause an error message like this.  I really need to write a proper post on why error messages should be detailed).  The app had previously been deployed to the phone without any issues.

    As a bit of a crap-shoot, I tried uninstalling the app from the phone, then deploying again.  Lo and behold, it worked!  I have no idea why it worked, but it did.  Hopefully anyone else who comes across this issue won’t have to struggle for too long before trying this.

     
  • Joel 10:20 AM on November 6, 2013 Permalink | Reply
    Tags: This is why we should have better error messages

    How to solve: Xde.exe has stopped responding… 

    I’m just starting to get into Windows Phone 8 development, and ran across the following issue:

    After installing Visual Studio 2012, the Windows Phone 8 SDK (and related updates), and enabling Hyper-V, I was getting the following error when attempting to deploy any sort of basic application to the emulator:

    Xde.exe has stopped responding.

    The only error code it gave in Visual Studio was 0x80131500.  After a little bit of digging, I found the following solution:

    In Visual Studio, select Tools -> Extensions and Updates…, and make sure Visual Studio Update 3 has been installed.  Once it was installed, the Windows Phone 8 emulator ran just fine.

    Shout out to this blog for having the solution.

     
  • Joel 4:38 PM on November 4, 2013 Permalink | Reply  

    Why I Dislike PayPal 

    The other day I was attempting to make a purchase on eBay.  It being eBay, I was more or less being forced to make my purchase through PayPal.  Any time I’ve ever used PayPal in the past, I’ve never had good luck with it.  This time, it went something like the following:

    • Not wanting to use a PayPal account that I may or may not have created in the past (heck, I can’t remember – I’ve probably tried to purge the previous terrible experience from my mind), I opted to try to pay with a credit card.
    • I enter my usual credit card details, making sure everything is correct. I then try to purchase the item in question, and get a completely useless error message. It basically says “This isn’t working”.
    • Since the first credit card doesn’t work, I try using a different credit card. I get more or less the same result as the first one.
    • I try yet again, and get a different error message that really has no details to it, so I still can’t figure out what exactly is wrong. An error message like the following is completely useless:

    “We are unable to complete the transaction at this time. Please try again with a different credit card.”

    So, there are absolutely no details about why it has failed. None at all. All it has done by this point is make me incredibly frustrated. I’m trying to give them my money, and it’s like they don’t want it. I’ve already done what the error message suggested, yet that didn’t help. Rather than telling me to do something else that doesn’t work, why not tell me why what I’m currently doing isn’t working?

    Rather than still being bent out of shape about this, I’ve learned the following:

    If you are going to show the user an error, at least make it a meaningful error. Tell the user exactly what they are doing wrong!

    There’s nothing more frustrating than having a problem, but not being told what that problem is. It’s like a student being given a grade, but not being told which questions they did correctly or incorrectly.

    There was a really good article and discussion on Ars Technica recently about learning to hate computers. It basically says that people don’t initially hate computers, but after repeated bad experiences trying to do things with them, they learn to hate them. The author relates it to being pecked to death by ducks. It’s one little bite here, and another nibble there.

    While we are on the subject of financial services, can we also please get rid of security questions? They’ve been proven to be insecure. (Don’t believe me? Ask Sarah Palin and Yahoo. The people who breached her email account did so by guessing the answers to her security questions.) Rather than having security questions, why not force two-factor authentication instead? It’d be a heck of a lot more secure.

     