Program doing the impossible? Check your assumptions!

Have you ever had one of those days when you run your program and it does the impossible?

As in, you stare at the screen and say: "No way, that's impossible. My program couldn't have done that. Couldn't be doing that."

It's a funny thing for a programmer to say, when you think about it.

We are logical people - or at least we are expected to use logic in our jobs.

And that is flat out a most illogical statement - contradictory on the face of it.

Because it wasn't impossible. Obviously. It just happened.

So what's a programmer to do?

Check your assumptions!

You must have made an assumption that is not valid. Assumed that data was in the table that was not - or vice versa. Assumed that you saved a change to your program that you did not. Assumed that you recompiled your program, when you had not. Etc., etc.
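The nice thing is that most of these assumptions can be checked directly from your SQL*Plus (or SQL Developer) session rather than argued about. Here is a rough sketch of the kind of sanity checks I mean; the table and package names below are made up, so substitute your own:

-- Is the data really in the table, as seen by THIS session?
SELECT COUNT(*)
  FROM my_lookup_table          -- hypothetical table name
 WHERE lookup_key = 'EXPECTED_ROW';

-- Did I really recompile that program unit, and did it compile cleanly?
SELECT object_name, object_type, status, last_ddl_time
  FROM user_objects
 WHERE object_name = 'MY_PACKAGE';   -- hypothetical package name

-- Am I sitting on an open, uncommitted transaction right now?
-- (Returns NULL when there is no active transaction in this session.)
SELECT dbms_transaction.local_transaction_id
  FROM dual;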

Here's an embarrassing true story from my own long, sordid history of programs doing the impossible due to my stubborn refusal to go back and revisit my assumptions.

Hopefully you've heard of the PL/SQL Challenge, a website offering weekly quizzes on SQL, PL/SQL, Database Design and Logic.

I built the website with lots of help from John Scott, Dimitri Gielis and Paul Broughton (all with APEX Evangelists back then). We planned to launch the site on 1 April 2010. And I promoted the site heavily. There was lots of excitement in the Oracle developer community.

New project by Steven! Let's check it out!

We started the site with a daily PL/SQL quiz (that's right - a daily quiz, which went on for four years. Lots of quizzes!), and each day started one second after midnight UTC (roughly what used to be called Greenwich Mean Time). So John and Dimitri and Finn (my co-founder of the PL/SQL Challenge) were waiting at their machines at midnight. It was 6 PM in Chicago. The second hand swept past 12 and the PL/SQL Challenge was born!

Hundreds of eager developers clicked the Start button to take the quiz - and the entire website froze. Like glacial ice. Sweat broke out on my forehead. I was frantic. Skype messages and calls flew furiously back and forth across the Atlantic.

What the heck was going on? We checked this, we checked that. We agonized. For hours.

And how could it be that, with all these Oracle experts on hand (especially the amazing Steven Feuerstein), we couldn't sort out the problem?

Well, the main problem, it turned out, was that John and Dimitri and Finn all trusted me not to be an idiot.

And at 4 AM, I was able to confirm that I was an idiot. Because I had issued a command to insert a row into a table, a row that was necessary for the site to work properly.

But I had not issued a commit.

I executed "COMMIT;" and the website unfroze. Bottlenecks disappeared. Quizzes could be played. And life did go on. The PL/SQL Challenge went on to attract thousands of developers, with between 500 and 1200 people playing each day for four years.
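For anyone who hasn't been bitten by this particular mistake, here is a minimal sketch of why one uncommitted INSERT can paralyze everybody else. The table and column names are invented for illustration (the real PL/SQL Challenge schema isn't shown here), and I'm assuming config_key is the table's primary key:

-- Session 1 (the administrator, just before launch):
INSERT INTO quiz_config (config_key, config_value)
VALUES ('ACTIVE_QUIZ_ID', '1');
-- ...and no COMMIT. The row exists only inside this open transaction.

-- Session 2 (any other session, such as a web request):
SELECT config_value
  FROM quiz_config
 WHERE config_key = 'ACTIVE_QUIZ_ID';
-- Returns no rows: read consistency hides another session's
-- uncommitted changes, so to the website the "necessary" row
-- simply does not exist.

-- And if the application tries to create the missing row itself,
-- it does not fail with an error - it just waits, blocked on
-- session 1's transaction:
INSERT INTO quiz_config (config_key, config_value)
VALUES ('ACTIVE_QUIZ_ID', '1');
-- ...hangs until session 1 commits or rolls back.

-- Session 1, hours later:
COMMIT;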

[We've since "settled down" to an array of 10+ weekly quizzes.]

And looking back on it, it's hard to believe that we couldn't diagnose the problem more quickly. Hard to believe that I could sit there insisting that I'd done everything I needed to do. And that there was no possibility of a data error.

Hard to believe that it happened, except that it did.

And it was a great reminder that when we are under lots of pressure, we don't always think as clearly as we could and should.

So what's a developer to do?

Make a checklist of all the assumptions you have made, and then go over those assumptions: True? Valid?

And if you've gone through that whole list without reaching a resolution, ask yourself: what other assumptions am I making that I don't even realize I am making?

Hey, who ever said that programming was easy?
