Tuesday, November 8, 2011

Salesforce Test Classes

I'm no expert on Salesforce, but I've been working with it for a year now, and I think I'm starting to get why things work the way they do.  One thing I figured out today was that it would sometimes be much more helpful if my test classes could fail with a descriptive message about what went wrong instead of failing on a generic assert.  The test classes I inherited from the consultants who wrote our original implementation do things like:
System.assert(result.isSuccess==true);
This is fine as far as it goes -- we do want to test for success -- but when the result fails, we really want to know why.  To that end, I came up with a new class, testException, that I can throw when I want a test to fail:
public class testException extends Exception {}
I can then make my tests fail with better information by using this exception.  For example, the System.assert example above could be rewritten as:
if ( !result.isSuccess)
{
    throw new testException('test failed: reason is '
                            + result.reason);
}
Now, when the test fails, it will be obvious from the error why it failed, and I can include information about what led up to the failure.  (One note: for some reason you have to declare your own exception class; you cannot throw new Exception() directly.)
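For context, here's a minimal sketch of the pattern inside a test method (the service call and the shape of its result are hypothetical stand-ins for whatever your code under test actually returns):
@isTest
private class DemoLicenseTest {
    static testMethod void testCreateDemoLicense() {
        // SomeService.doSomething() is a hypothetical call standing in for the
        // code under test; it returns an object with isSuccess and reason fields.
        SomeService.Result result = SomeService.doSomething();

        if (!result.isSuccess)
        {
            // fails the test with a descriptive message instead of a bare assert
            throw new testException('test failed: reason is ' + result.reason);
        }
    }
}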

Admittedly, this isn't rocket science, but I thought it made for a much better style for coding your tests.

UPDATE:
Did I mention that I'm still learning all this Salesforce stuff?  If it wasn't obvious before, this'll make it readily apparent.

So I figured something out after writing this article and playing with the tests some more.  The System.assert() method actually has an additional form that allows you to do much the same thing:
System.assert(result.isSuccess==true, 'operation failed: ' +
                                       result.reason);
So I'm not sure if there's really any reason to define your own exception class just for unit tests.  It might be useful if you wanted to catch the exception in some cases, but my test classes aren't doing that.

Tuesday, November 1, 2011

Attention tech support: we're not idiots!

I cut my teeth doing Tech Support.  My first real job out of high school was doing some low-level engineering and technical support.  I did this for a number of years, until I eventually got the opportunity to join the development team that was being put in place.  When I got the chance, I jumped at it.  About 6 or 7 years ago, that development team was downsized.  As a result, I joined a small team of four people whose goal was to maintain and keep alive the product that had come out of that office.  For the next 4 or so years, we maintained the product as best we could.  The other three guys each sort of took ownership of parts of the product; I pitched in where I could, but I also bore the brunt of the technical support requirements for that product.  Essentially, I became a one-man technical support department, and supported all of our customers with the exception of a few very large accounts who got one-on-one support from the other developers.

So I get the support mentality.  I understand the limitations that arise when multiple products interact, and the difficulties of trying to balance your obligation to support your product while avoiding troubleshooting someone else's -- or worse, having to teach your customers basic things like how to edit files.  I really do get it.  I've also struggled with seemingly impossible problems that occur with no idea how, no way to reproduce them, and very few ideas on how to troubleshoot.  I have been there.

So why is it that when I call technical support, they treat me like a moron?  The latest example?  Salesforce.com.  Now, don't get me wrong.  My experience with Salesforce.com over the past year has been largely trial-by-fire.  I took a training class that was enough to get my feet wet, but haven't taken the "real" developer class (yet).  Despite that, I think I'm doing pretty well and have worked out most of how the system works.  The point, I guess, is that I'm no expert, but I do mostly know what I'm doing.  So when I, out of the blue, started getting error reports from Salesforce last week that said "System.LimitException: the daily limit would be exceeded by the operation" and our Lead generation process ground to a halt, I was understandably concerned.

As an aside: that's a horrible error message.  While it does tell you what is happening, sort of, it would be SO simple to say which limit would be exceeded.  Instead, we're left without a clue where to look.

Salesforce.com calls itself a multitenant platform, meaning that many organizations are hosted on a single server (or, more likely, cluster).  I have no idea how exactly they've put the platform together, but that's basically it.  Because many organizations run on a single cluster (and, I suspect, to be able to fleece you for a bit more cash), they've implemented limits on various resources throughout the platform.  And they're generally not onerous limits.  For example, a single "request" can perform up to 150 DML statements (DML being roughly the Salesforce equivalent of SQL inserts, updates, and deletes).  If you need more than that, chances are you can refactor your code into a more efficient design that doesn't actually need that many.  By forcing you to write efficient code, they keep the platform running smoothly for everyone.  And that is cool.
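To illustrate the kind of refactoring they're nudging you toward, here's a sketch in Apex (the object, field, value, and variable names are just placeholders): instead of issuing one DML statement per record inside a loop, collect the records and commit them with a single statement.
// Inefficient: one DML statement per Lead burns through the per-request limit
for (Lead l : leadsToUpdate) {          // leadsToUpdate is an existing List<Lead>
    l.Status = 'Working';
    update l;                           // one DML statement per iteration
}

// Better: one DML statement total, no matter how many Leads there are
List<Lead> changed = new List<Lead>();
for (Lead l : leadsToUpdate) {
    l.Status = 'Working';
    changed.add(l);
}
update changed;                         // a single DML statement for the whole batch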

But as for daily limits, there are only a few that they document (most are, as above, per request).  Your organization is limited to 1,000 API requests per day per user.  We were nowhere near this limit.  There are limits on workflows, on e-mail messages, and on "site" usage.  But we aren't using those features.  So it's very unclear what daily limit we were exceeding.  Thus, I opened a support case, simply asking them to identify the limit for us.

We're a very small customer for Salesforce.com.  So it's not surprising that we were guaranteed a 2-day response time.  After three days, I got a call from Salesforce.com support.  They wanted to do a GoToMeeting session so I could show them the problem.  I was happy to do this, although by that time we were no longer getting the error from Salesforce, so I couldn't reproduce the problem to show them.

As a second aside: I'm not going to name names, but I think that the ability to speak the language without an accent that makes you unintelligible should be a requirement for this field.  I have nothing against other nationalities or languages.  But IMHO putting a super-thick accent on the phone to do your technical support is like hiring someone who can't smell to work your perfume counter.  All it does is frustrate your customers.

Unfortunately, the code that we wrote on the Salesforce.com platform generates demo licenses for our software.  Because there are symbols in the code like "LicenseController" and "createDemoLicense", the support person assumed that the limit we were exceeding was the number of Salesforce.com licenses our organization had.  I had to argue for 10 minutes to convince her that the "licenses" in this case had nothing to do with their licenses.  I think I finally convinced her by pointing out that Salesforce.com didn't have "daily" license limits.

So she says she is going to research the problem and get back to me, which is fine by me.  Today I got another call.  This time, she tried to tell me that the e-mails I got were not actual exceptions, just warnings that we were approaching the limits for our organization.  I somewhat lost my cool and yelled at her a bit, pointing out that the exception was a System.LimitException, which aborts the executing code and cannot be caught.  That is no warning.

Which brings me to where we're at now.  She is still investigating the issue, which is great.  And maybe we won't be able to find out what the limit was or why we were approaching it.  I'm OK with that, if that's truly the case.  I just wish that support people would stop treating me like I'm an idiot, because I'm not.  I can't imagine how people like my in-laws would deal with this sort of thing.  They wouldn't have any clue whether the support person was telling the truth or making it up as they went along.

That said, I'm not sure how to solve the problem.  I get the vague impression that we've brought this on ourselves, but the solution may be more than a single person such as myself can implement in his spare time. 

Friday, October 21, 2011

Linux vs. Windows... redux redux redux

A colleague sent me a link today to an article on a ZDNet blog discussing a particular failing of Linux (and implying, without really supporting the argument, that Windows somehow handles this better).  The author seems to be trying to make the following points:
  1. Keeping up with all of the latest-and-greatest developments in Linux takes a lot of time, arguably more time than he can spend on it.
  2. "Bleeding edge" is highly dependent on exact versions of packages under developement, and getting those versions wrong breaks everything.
  3. Linux distributions are moving targets and commands (or command lines) that work on one version may not work on successive versions.
  4. Updating the operating system can break custom-compiled software that you install on the system.  He claims it makes the system unbootable.  I am skeptical about this.
  5. His ISP claims they never update their CentOS machines, because it breaks them.
First, let me say that perhaps Mr. Gewirtz is earnest about what he did and what the effects were, but descriptions of things he supposedly did, like "recompiling the package manager", make no sense, so it's difficult to be certain which parts of his post are fact and which are exaggeration.  I've tried to give him the benefit of the doubt.

His first four points are absolutely true.  But they're not really "points" because they are obvious, and the solutions are equally obvious.  If you don't have time to keep up with all the latest developments in Linux, then don't!  I've arguably used Linux almost as long as anyone (since around 1992; my first kernel was in the 0.96pl series, and I installed my first Linux on a 386SX from floppies -- it was the MCC distribution, which predates Slackware!) and I certainly don't have time to keep up with all the Linux trivia.  So I don't.  That doesn't stop me from running a few Linux boxes and knowing what I need to know to run them.  99% of the arcane details that might be interesting about Linux are not actually necessary to use it.

Likewise, there have always been "bleeding edge" versions of everything on Linux, and if you want to run them, there's generally some pain involved.  So if you don't want to put in the effort, don't run the bleeding edge!  Wait a bit for it to get stabilized and tested and sorted out, and you'll be in a much better position to have it "just work" like you're hoping it will.

The complaint that commands stop working between distribution versions is sort of silly to me.  It's true, but it's true of everything.  Solaris 10 doesn't support a lot of the commands that worked on SunOS 3, for obvious reasons (although admittedly, Sun does a remarkable job of smoothing the transition, with the /usr/ucb tree of SunOS-style commands to complement the /usr/bin SVR4 versions).  Even Windows doesn't solve this -- how many complaints have you heard over the years about Microsoft changing the UI in Windows?

Updating the OS definitely has the potential to break custom software.  This is equally true of Windows, IMHO, although admittedly Linux is a faster-moving vehicle, so it probably happens more often there.  Also, coming from the open-source paradigm, it's easy for Linux aficionados to feel that simply recompiling the software against the upgraded OS is easy, since most things have source available.  I have a mail/web server that I originally built in 1998 that has been running RedHat 6 since it came online.  I custom-compiled the mail server, and the web server, and the SSL libraries and the PHP modules and the Perl modules, etc., etc., ad nauseam.  I literally cannot upgrade this server, because everything will break if I do.  I've lived with that for over 10 years.  I've hardened it as much as I can, firewalled it, don't let many people log into it, and it's been okay for that long.  The operating system has outlasted 2 PCs and 2 hard disks.

One day, I will have to build a new server to replace that one, and when I do, I will do it differently.  When I built this server, there was not really any such thing as Linux security updates.  If you wanted the latest SSL holes patched, you compiled your own SSL.  Today I'd never do this.  Every major distribution has a mechanism for distributing security (and other) updates, and if you stay within the distribution's own packages, updating is not going to break things.  If I had to rebuild my server today, I'd put Debian on it.  I'd apt-get install apache and something for mail (I am a long-time qmail user, but I recognize that there are alternatives that didn't exist in 1998 when I chose qmail).  And I'd painlessly take updates from the vendor, easy peasy.
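That plan amounts to very little typing -- something like this (the mail package below is just an example placeholder; as I said, I'd weigh the alternatives first):
# install services from the distribution's own packages
apt-get install apache2 postfix      # or whatever mail server you settle on

# routine security and bug-fix updates, staying within the same release
apt-get update && apt-get upgrade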

Finally, regarding updates for CentOS breaking the system, it's definitely unfair to paint all Linux distributions with the same brush because of something that happened on one.  I have been updating Debian and Ubuntu for years and years and while I've had some problems (trust me, trying to figure out why your apt-get dist-upgrade failed and what sort of messed up state it left you in is no picnic) it's gotten much better every time I've tried it and I've had no problems for several years.  I don't run CentOS or Fedora or Red Hat so I can't speak to them, but claiming you can't update Linux because CentOS sucks is like saying word processors are crap because you don't like Google Documents.

And I must say that if an ISP told me they never applied updates to their systems, I would find a new ISP.  The only exception would be if, as I suspect, they aren't avoiding updates because updating is dangerous or risky, but simply don't maintain the servers at all -- in which case, of course they don't update them.

Either way, it sounds to me like the author wants to use Windows rather than Linux, and I'm gracious enough to say that for a lot of things Windows is very capable.  But don't make the mistake he made and confuse the quality of an operating system with your personal measure of its ease of use.

Wednesday, October 19, 2011

Virtualization options

Although there are a plethora of virtualization options, there are really only two that I've used extensively.  VMware is the original PC virtualization platform and probably the one most people know about.  To be clear, I've used many versions of VMware and even purchased a commercial license to use as a developer.  But most recently, I needed to set up a Windows Server 2008 VM for testing, and the Virtual PC stuff that comes with Windows 7 doesn't do 64-bit guests, so I tried VMware.  After fighting with it for the better part of a day, I gave up.  I got cryptic messages about having to install drivers, and errors saying that files needed for the installation weren't available.  It just plain didn't work.

So I installed VirtualBox.  I have used VirtualBox for a few years now when I've needed a free virtualization option, and I've recommended it to a few friends and family members.  It's always worked well for me, although to be fair my needs have never been extreme.  I was quite surprised to find that it installed the Windows Server 2008 VM on the first try with no hassles whatsoever.

I read somewhere that VMware was supposed to be faster than VirtualBox, but based on the experience I've had today, VirtualBox works a million times better than VMware, and I'll take that over a bit of speed any day.

Tuesday, October 18, 2011

Content filtering for minors

I use DansGuardian as a content filter for our local network.  Much to my children's chagrin, they are not allowed to access sites rated above their pay grades, nor sites whose content, based on a set of weighted phraselists, is deemed too mature for them.  They are also blocked from accessing files by filetype (e.g. exe, zip, rar, bz2) and MIME type -- basically, they are not allowed to download executable files.
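The filetype blocking is driven by DansGuardian's banned-extension and banned-MIME-type lists; a few representative entries look like this (exact file locations and default contents vary a bit between versions and distributions):
# /etc/dansguardian/lists/bannedextensionlist
.exe
.zip
.rar
.bz2

# /etc/dansguardian/lists/bannedmimetypelist
application/x-msdownload
application/zip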

After a few of my (now adult) children had their computers toasted by malware and whatnot (10 or so years ago), and after one of them accidentally fell into pop-up porn hell, I set this system up to try to protect them from themselves.  Since then, I am happy to say, nobody has lost a computer to the bad stuff.  (The credit for this obviously goes to DansGuardian.)

I use Shorewall as my firewall solution, and configure it to (transparently) redirect all outgoing traffic on port 80 to DansGuardian (listening on port 8080):
REDIRECT lan 8080  tcp www  # redirect LAN-www to local 8080
DansGuardian relays the requests to a proxy (originally I used Squid, but I have also configured Apache's proxy module).  You should probably block access from the LAN to the proxy port itself, lest someone configure their computer to bypass your content filter.  I have not done this, because at times I do exactly that and point a computer directly at the proxy.  So far, nobody has figured this out (I do check occasionally), so I haven't worried about it.
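If you did want to close that hole, a single line in /etc/shorewall/rules along these lines should do it (this assumes Squid is running on the firewall box on its default port, 3128):
REJECT  lan  $FW  tcp  3128   # keep LAN clients from talking to the proxy directly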

For computers that should bypass the content filter, like my wife's, I define variables in /etc/shorewall/params listing the MAC addresses of those devices:
RIKKI_IPAD=~ed-0d-59-b7-c7-5d
RIKKI_IPHONE=~24-ab-81-fd-71-c4
Then, I define a variable that includes all of the systems that should bypass the filter:
MACS_NOT_FILTERED=$RIKKI_IPAD,$RIKKI_IPHONE,...
Then, finally, in /etc/shorewall/rules I specify that these should bypass the filter:
ACCEPT+  lan:$MACS_NOT_FILTERED net tcp www 
The ACCEPT+ target is like ACCEPT, but it also prevents further rules from matching, so by placing this rule above the REDIRECT rule, we ensure that traffic from $MACS_NOT_FILTERED never reaches the REDIRECT rule.

One final issue I've had: DansGuardian allows me to "whitelist" sites using the /etc/dansguardian/lists/exception{site,url}list files, but some of my Linux systems pull updates from any one of a number of mirror sites.  I don't necessarily know all the mirrors, and even if I could be bothered to find out, I wouldn't want to maintain that list of exceptions by hand.  So instead I used /etc/dansguardian/lists/exceptionregexpurllist to allow access to any mirror (in this case, the CentOS 6.0 repositories):
 ^.*centos/6.0/(os|extras|updates)/x86_64/.*$
Unfortunately, there isn't a very good way to let the kids request temporary access when they hit a blocked page.  DansGuardian has some functionality that lets a user be "warned" and then continue on to the site, but it has no way to issue "tokens" that expire after a period of time.  To work around this, I've started playing with a form that adds exceptions to the DansGuardian configuration (the form is only shown to the adults, by putting them in their own filtergroup).  But this is a very immature solution so far.
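Conceptually, such a form just appends a domain to one of the exception lists and reloads DansGuardian so it picks up the change.  A rough sketch of the idea (the file paths and the reload command are assumptions, and letting the web server write to this file raises its own security questions):
<?php
// add-exception.php -- reachable only from the adults' filtergroup
$site = trim($_POST['site']);

// very rough sanity check before touching the list
if (preg_match('/^[a-z0-9.-]+$/i', $site)) {
    file_put_contents('/etc/dansguardian/lists/exceptionsitelist',
                      $site . "\n", FILE_APPEND);
    // DansGuardian only re-reads its lists on a reload/restart
    exec('sudo /etc/init.d/dansguardian reload');
    echo 'Added exception for ' . htmlspecialchars($site);
} else {
    echo 'That does not look like a domain name.';
}
?>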

That said, I think DansGuardian is an excellent tool for networks with children, and I highly recommend it.

Saturday, October 15, 2011

Web Cams, woot

I have set up a webcam page, mostly for fun, although my wife likes it because she can see if I'm working or watching TV or whatever.  I thought it might be fun to go over the technology involved, or at least the pieces that I've chosen for myself over the years.

Motion is a motion-detection and recording camera application for Linux, and it makes a great basis for a webcam.  You can have it record activity on your camera and review the recordings later with a simple PHP script.  MJPG-Streamer is another great tool for Linux.  It's a bit "raw" in that it isn't available as a distribution package (that I know of), but it's a pretty slick, lightweight program for running a live streaming camera.
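For what it's worth, Motion needs very little configuration for this kind of setup.  A pared-down motion.conf might look something like this (the values are illustrative, and the option names are from the 3.x-era versions -- newer releases have renamed some of them):
# where recorded motion events land, for the PHP review script to pick up
target_dir /var/www/cam/recordings

videodevice /dev/video0
width 640
height 480
framerate 15

# save detected motion as movie files
ffmpeg_cap_new on

# built-in live stream, on the same port the firewall rules below forward to
webcam_port 8081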

On Windows, there's no shortage of webcam applications, but a simple free one that gets the job done is Yawcam (Yet Another Web CAM).  It's written in Java, but works reasonably well even on lower-powered systems (though definitely not as lightweight as mjpg-streamer).

I've tried all sorts of chat widgets for my webcam page, but none (for me) has ended up being better than CGI::IRC, a Perl-based IRC client that runs in a web page.  I use it to let people join my private IRC server, where I idle just in case anyone ever shows up.  They never do, but that's not the point. ;)

On my page, there are three cameras, each hosted on a separate computer.  A linux box (10.10.100.2) running motion records on the "main" camera with the widest angle.  Then, I have two laptops with integrated webcams that provide live streams from "side angles".  One of the laptops (10.10.100.19) runs mjpg-streamer, the other (10.10.100.201) runs Yawcam on Windows.

I've set it up so that the camera hostname (cam.akropolys.com) resolves to my firewall, both on internal DNS (as 10.10.100.1) and on external DNS.  I use Shorewall's DNAT rules to redirect external clients to the live camera streams:
DNAT net lan:10.10.100.2        tcp 8081     # webcam streaming
DNAT net lan:10.10.100.19:8082  tcp 8082     # webcam streaming
DNAT net lan:10.10.100.201:8081 tcp 8083     # webcam streaming
To allow internal clients to access the live streams, I use the rinetd utility to forward requests to the cameras:
0.0.0.0         8081    10.10.100.2     8081 # cam 1
0.0.0.0         8082    10.10.100.19    8082 # cam 2
0.0.0.0         8083    10.10.100.201   8081 # cam 3
Of course, I use my reverse-proxy trick to redirect requests to the actual website.  This works for both internal and external clients:
RewriteCond %{HTTP_HOST} ^cam\.akropolys\.com$ [NC]
RewriteRule /(.*) http://10.10.100.2/~troy/$1 [P]
I also installed an ErrorDocument handler for error 503.  This error is returned when Apache can't proxy requests to the camera page.  The error handler script checks the value of the $SERVER_NAME environment variable and, if it's the camera server, returns the camera-down page.  This doesn't help if the webpage requests (on port 80) can be fulfilled but the live camera streams are down.  At some point I'm thinking I can use JavaScript on the page itself to display an error image, but I haven't tried this yet.
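Wired up, the idea looks roughly like this (the file names here are made up; the script just has to be registered with Apache's ErrorDocument directive, e.g. ErrorDocument 503 /errors/503.php):
<?php
// 503.php -- decide which "sorry" page to show based on the virtual host
if ($_SERVER['SERVER_NAME'] == 'cam.akropolys.com') {
    // the reverse proxy couldn't reach the camera box: show the camera-down page
    readfile($_SERVER['DOCUMENT_ROOT'] . '/errors/camera-down.html');
} else {
    echo 'Service temporarily unavailable.';
}
?>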


Finally, I restrict access to the recordings to internal clients by checking the PHP $_SERVER['HTTP_X_FORWARDED_FOR'] variable and ensuring that the requesting client is on the 10.10.100.0/24 network.  This gives me a way to sort of secure parts of the page from prying eyes if I need to.
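The check itself is only a few lines.  Here's a sketch of the guard at the top of the recordings page (the file name is illustrative, and this relies on the reverse proxy being the only way in, since X-Forwarded-For is trivially spoofable otherwise):
<?php
// recordings.php -- only internal clients may browse the recorded clips.
// The reverse proxy on the firewall sets X-Forwarded-For to the real client IP.
$client = isset($_SERVER['HTTP_X_FORWARDED_FOR'])
        ? $_SERVER['HTTP_X_FORWARDED_FOR'] : '';

// crude /24 check: internal clients all come from 10.10.100.x
if (strpos($client, '10.10.100.') !== 0) {
    header('HTTP/1.1 403 Forbidden');
    exit('Recordings are only available from the local network.');
}

// ... otherwise fall through and list the recordings ...
?>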