Tuesday, May 17, 2016

SeeAllData = Fail

According to Auntie Pat Tern, it takes 21 days to break a bad habit, but Salesforce developers have had about that many API versions to break the bad habit of writing unit tests with "SeeAllData=true".

In most cases, this annotation parameter alone can render tests completely useless. Unit tests should prove that code functions according to specifications by asserting proper database manipulations--create, read, update, and delete operations--as well as Visualforce navigation.

I looked at unit tests in my org and found this example of a test that failed to achieve its purpose:
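It looked something like this, reconstructed here with illustrative object and field names (Sample__c, Sample_Type__c), not our actual schema:

```apex
// Reconstruction with illustrative names -- the problem is the first line.
@isTest(SeeAllData=true)
private class OrderSamplesTest {
    static testMethod void testOrderSamples() {
        // Relies on a real record that happens to exist in the org -- fragile!
        Contact c = [SELECT Id FROM Contact WHERE LastName = 'Tern' LIMIT 1];
        Sample__c s = new Sample__c(Contact__c = c.Id, Sample_Type__c = 'Standard');
        insert s;
        System.assertNotEquals(null, s.Id, 'Sample order should be created');
    }
}
```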

This test should assert that the data for ordering samples can be created with just a contact and sample type defined.  Unfortunately, the test relies on existing, live org data rather than test 'silo' data because of "SeeAllData=true" in the first line. There are easy ways for unit tests to create their own test data without relying on live org data.
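For comparison, a version of the same test that builds its own silo data might look like this (same illustrative object and field names):

```apex
// The test now creates everything it needs -- no live org data required.
@isTest
private class OrderSamplesTest {
    static testMethod void testOrderSamples() {
        // Build the test's own data silo instead of querying live records
        Contact c = new Contact(LastName = 'Tern');
        insert c;
        Sample__c s = new Sample__c(Contact__c = c.Id, Sample_Type__c = 'Standard');
        Test.startTest();
        insert s;
        Test.stopTest();
        System.assertNotEquals(null, s.Id, 'Sample order should be created');
    }
}
```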

We encountered the following problems because of using "SeeAllData=true":
  • It required us to maintain test data among our real data.
  • When the test data was changed during data cleanup (in one case, Pat Tern's Account was deleted), the tests failed even though functionality was unchanged.
  • Tests were not reliable between Sandboxes and Production orgs due to data differences rather than actual functionality.
  • Apex Hammer Tests may not have been automatically monitored in our org for each Salesforce release since Hammer Tests are blind to live org data.
In the rare case where a specific piece of data may be required for your code to behave, consider using custom metadata types instead of Salesforce objects. Trailhead can help you learn more about how they allow you to move metadata and records between orgs and test functionality without needing to see all data in the org.
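For instance, assuming a custom metadata type named Sample_Setting__mdt (an illustrative name), a unit test can read its records without seeing any org data:

```apex
// Custom metadata records are visible to tests without SeeAllData=true.
// Sample_Setting__mdt and Default_Type__c are illustrative names.
@isTest
private class SampleSettingTest {
    static testMethod void testReadsMetadata() {
        Sample_Setting__mdt setting = [SELECT Default_Type__c
                                       FROM Sample_Setting__mdt
                                       WHERE DeveloperName = 'Standard' LIMIT 1];
        System.assertNotEquals(null, setting, 'Setting record should be deployed with the org');
    }
}
```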

Saturday, May 7, 2016

Pay Down That Technical Debt

Auntie Pat Tern says we should pay with cash, or at least pay our credit cards off every month because "debt," she says, "is like fresh fish, it seems great at first but gets old fast and then it really stinks."

It's Hammer Time!
So I looked at my Salesforce org to assess the level of technical debt we had accrued and to plan how we would start paying down that debt.

Sometimes technical debt comprises the shortcuts or mistakes that no one has time to correct when a project needs to be completed. Technical debt also develops over time.  As businesses mature and change, their technical solutions can fall behind and create technical debt.

A Salesforce org can be especially prone to this since Salesforce offers a wealth of new features for all orgs three times each year and not all companies make an effort to rewrite their technical solutions based on these new features. Luckily, Salesforce offers some tools to help us assess some of the technical debt associated with our code.

First, I checked our Hammer Test Status.  The data silo gauge revealed that we still had old (and sadly a few new) unit tests that were using "seealldata=true", which we needed to update.  It also revealed a couple of tests that were failing and so needed some attention.

Next I checked the API version on our Apex Classes.  Any class that is 10 or more versions behind the current API version needed a review, both on the code side and on the process side.
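One quick way to produce that review list is an anonymous Apex query against the ApexClass setup object. The cutoff below assumes the current API version is 36, as it was at the time of writing; adjust it for your org:

```apex
// Flag classes 10 or more versions behind API v36
for (ApexClass c : [SELECT Name, ApiVersion
                    FROM ApexClass
                    WHERE ApiVersion <= 26
                    ORDER BY ApiVersion]) {
    System.debug(c.ApiVersion + ' : ' + c.Name);
}
```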

The third step for me was reviewing our org's documentation for outdated information on our code.

My org review also covered configuration, managed packages, and basic processes.

We may not be able to pay down all of our technical debt right away, but a monthly review and reminder of how it accumulates will help us develop better in the future.  And as Auntie Pat Tern says, "It's not just pay me now or pay me later, you know.  It's pay me now or pay me later with additional compounded interest!"

Sunday, April 10, 2016

Putting Governor Limits To The Test

My Auntie Pat Tern is pretty accepting of cousin Tim Toady's behavior. "He's a teenager, after all, there's no better time for him to test his limits," she explained. So I decided to ask my developers to test their limits in Salesforce.

One Master object with 13 Detail objects, some of which are Masters in other relationships as well.
I wanted them to see for themselves how governor limits benefit the overall performance of the code they write. And I wanted them to experiment with ways to push those limits by trying to break things.

They built Processes, wrote triggers, and configured some unwieldy objects that I would never want to see in production, all in an effort to push good performance to the very edge of being bad.

Some limits include object relationships.
A couple of these experiments proved that some of what they understood about limits was untrue. For example, when it comes to Master-Detail relationships on custom objects, the documentation describes a limit of 2^3. That does not mean 8 here; it indicates that an object can have two M-D relationships, and those relationships can be three levels deep.

Take the example Parent <-- Child <-- GrandChild <-- GreatGrandChild where all relationships are Master <-- Detail. Some of the limits on this relationship structure are as follows:

  • Parent cannot have a new Master (eg GrandParent) because of the limit on how deep the relationship levels can be. 
  • GreatGrandChild does not show up as available to be a Master in other relationships because we are limited to three levels deep.
  • Child cannot have a new Master because it already has two M <-- D relationships (even though only one of those points to a Master).
  • Child can have new Details, that is, new GrandChild-level objects can be created as Details for Child even though Child already has two M <-- D relationships.
  • Many Child-level objects can be created as Details for Parent (we stopped at over 50). 
  • A Child-level object cannot be used as a junction object between records of a single Parent-level object. M <-- D relationships cannot be immediately self-referencing like that.
  • GrandChild-level and GreatGrandChild-level objects can have the same Master object as their Master, eg. GrandChild can point to Child and Parent as Masters even when Child already points to Parent as its Master. We daisy-chained six objects this way before hitting limits on the depth of the relationships.
  • Child-level objects with two relationships pointing to Master objects cannot be Masters to new GrandChild-level objects. An object can have two Masters or it can have one Master and many Detail relationships or it can have no Master and many Detail relationships.

It was a fun exercise and demonstrated how limits benefit performance and how hard some of them can be to break. It gave the developers a chance to challenge their assumptions, be creative and gain a better understanding of the implications of limits when it comes to writing better code.

Secretly, Auntie Pat Tern believes that testing limits can help us appreciate why limits are important, but she wouldn't tell Tim Toady that.

Sunday, April 3, 2016

Painless Removal Of Boat Anchors

I asked my Auntie Pat Tern why she has a tattoo of a boat anchor on her forearm and she said, "Boat anchors symbolize hope, but I'm considering getting this one removed." So she didn't have it to enhance her resemblance to Popeye like cousin Tim Toady always says.

It may have been hope that also inspired the boat anchor I found in the code in my org. In technical circles, some people only think of boat anchors as that outside technology they got stuck with because their boss bought it without conducting a technical review. But if you aren't constantly conducting a technical review of your own code, you can get stuck with boat anchors there as well.

In my org, I found something like the following:
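The class and method names below are illustrative stand-ins for the real ones, but the shape of it was this:

```apex
public class SampleDiscountCalculator {
    public static Decimal getDiscount(Account a) {
        // Decimal discount = 0;
        // if (a.AnnualRevenue != null && a.AnnualRevenue > 1000000) {
        //     discount = 0.1;
        // }
        // return discount;
        return null;
    }
}
```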

All the code that had once handled business logic was commented out, to the point where the class literally did nothing but return a null value. I think this boat anchor represented the developer's hope that the original code might somehow be useful later. If that's the hope, the code can be stored outside of our production org.

To remove it, we just needed to correct any references and delete the class and its related unit test. Salesforce offers multiple ways to accomplish this using the IDE, the developer Workbench or the Migration Tool, all of which are much less painful than Auntie Pat Tern's tattoo removal.

With Workbench, for example, you can simply mark a class or trigger as deleted or as inactive to remove it from production using the following steps after downloading the class or trigger files:
  1. Edit the XML meta file for the class or trigger to change its status from <status>Active</status> to <status>Deleted</status> (for classes) or <status>Inactive</status> (for triggers)
  2. Put both the .xml meta file and the class or trigger file into a folder named "classes".
  3. Create a package.xml file to list the class or trigger to be deleted or made inactive.
  4. Place both the package.xml file and the classes folder into another folder inside a zip file.
  5. Specify this zip file after you select Migration>Deploy in Workbench.
A great way to generate a package.xml file to get you started is to use Setup>Create>Packages in Salesforce and add the classes or triggers that need to be deactivated or deleted. This package can then be specified for download in Workbench using Migration>Retrieve. The zip file will then need to be edited as described above before you deploy.
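For reference, a minimal package.xml for this kind of deployment might look like the following; the class name is a placeholder, and the version element should match your target API version:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>SampleDiscountCalculator</members>
        <name>ApexClass</name>
    </types>
    <version>36.0</version>
</Package>
```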

Like Auntie Pat Tern, be on the lookout for boat anchors and consider removing them when they pop up during your org's ongoing technical review.

Friday, March 11, 2016

Static V. Instance, or, How Can A Variable Be Unchanging?

I asked my Auntie Pat Tern why she loves to watch Jeopardy! and she said, "if I watch it enough, I will see answers to all the questions in the world." So I looked through the questions that my developers ask to see if they could be answered by Jeopardy!

One of the questions that comes up frequently is whether use of the 'static' keyword is an antipattern or a best practice. Of course, anything used incorrectly is an antipattern, so understanding static versus instance methods, variables, and initialization code is important.
Final Jeopardy! means there won't be infinite answers this time.

Taking Jeopardy! as our example, the classic game show that contestants win by providing the most correct responses, we can see that some things don't change from week to week. The show's name, the show's host, these are constants and so of course are static because "static" means unchanging. A static response can be provided for questions about the host and show name.

Every show has three contestants and to create a show, its producers need to know who the three contestants will be. So they conduct tryouts to choose and schedule competitors. The names of the competitors change from night to night, so the list of competitors is variable, but show producers have to know who is scheduled before the show is created, so it is a static variable. A static response can be provided to the question of who will be the competitors for any specified date.

When the show is filmed, the responses that one contestant provides as part of the action of the show are defined by the instance of the show and depend on the inputs the contestant receives from the show and the interactions the contestant has with other contestants on that particular show. So these responses are non-static and depend on the instance of the show.

As an example let's consider three possible questions about the upcoming Jeopardy! college championship round:
  1. Who's the host? We expect one answer: 'Alex Trebek'. 
  2. Who are the competitors? We expect one answer: the three college students, depending on who is chosen for a specified day. 
  3. What is the contestant's 'question' and score for the first input in the first category? The response to this question depends on actually seeing the college championship round in action, knowing what the inputs are and seeing which contestant acts first and the result of that particular action.  
So the first two are static and the last one non-static and specific to its instance.  

In Apex, you can see an example of static vs. instance in the Date class. This class offers both static and instance methods. Let's consider three possible questions we could ask today:
  1. What is the date today? We expect one specific answer for this question and it doesn't need any additional information for us to ask it: Date.today().
  2. Is it a leap year? We expect only one answer for this question, depending on a specified year, for example: Date.isLeapYear(2016); or Date.isLeapYear(2023);.
  3. What is the year? The response depends on knowing the date in question -- we need an instance of Date in order to figure out the year of that specific date, for example: Date yearLater = Date.today().addYears(7); Integer nextYear = yearLater.year();.
Again, the first two are static methods while the third, determining the year, is specific to its instance. You might wonder, for example, what happens in Apex when you add a year to a leap day, will Apex give us February 28 or March 1 of the next year (or null)? You could run the following code to create an instance specific to a leap day and test it out:
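For example, this anonymous Apex builds an instance from a leap day and inspects the result (run it yourself to see the answer):

```apex
// Anonymous Apex: start from a leap day and add a year
Date leapDay = Date.newInstance(2016, 2, 29);
System.debug('Is 2016 a leap year? ' + Date.isLeapYear(leapDay.year()));
Date yearLater = leapDay.addYears(1);
// Instance methods tell us where we landed
System.debug('One year after ' + leapDay + ' is ' +
             yearLater.year() + '-' + yearLater.month() + '-' + yearLater.day());
```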

The above code primarily uses instance methods, but using the 'static' keyword in Apex can be particularly useful when it comes to writing code called by Triggers. Since a single Trigger can result in a cascade of other Triggers firing, we may need to keep track of some information across all the business process automation associated with the cascade of Triggers. Having a separate class with a static method or variable for all of the Triggers allows us to share specific data across the code executing for multiple Triggers.

For example, a separate class that contains a static variable can track the number of times a particular automation has run, to help avoid an infinite recursion.
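A minimal sketch of that pattern, with illustrative class and variable names:

```apex
// Static state is shared across all trigger invocations in one transaction
public class TriggerRunCount {
    public static Integer accountUpdateRuns = 0;
}

// Inside the trigger handler, guard against re-entry:
// if (TriggerRunCount.accountUpdateRuns < 1) {
//     TriggerRunCount.accountUpdateRuns++;
//     // ... run the automation once ...
// }
```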

Infinite recursions need to be avoided in code, but Auntie Pat Tern may be correct that infinite episodes of Jeopardy! will contain all the answers to all the questions in the world.

Friday, March 4, 2016

Start Documenting Rather Than Accepting "No Comment"

My Auntie Pat Tern tried to convince my cousin Tim Toady that he should be using the 3/5 rule of essay writing he'd been taught -- three ideas, five paragraphs -- but, as usual, Tim insisted there is more than one way to do it (TIMTOWTDI). So I looked through the code in my Salesforce org to see if my developers were following the 3/5 rule for comments or if TIMTOWTDI had gotten into our code.  Unfortunately, I mostly found code with no comments at all.

Code without comments is like a research paper without a thesis statement to detail its purpose.  It's like doing something over and over without understanding why, because comments should always clarify why the code does what it does.  It's like burying treasure without leaving a proper map, because good comments point to valuable resources the code uses or makes available.

Code comments should follow the 3/5 rule of comments:  there are three places where comments should occur and five specific topics that need to be addressed.

The three places where comments should be found in Apex code are:
  1. Block comments at the beginning of classes.
  2. Block comments at the beginning of methods.
  3. Line-level comments for constants, loops, conditions, and anywhere clarification helps.
The five topics that should always be found in block comments are:
  1. Description of purpose and assumptions.
  2. Author/date for creation and modification.
  3. Parameters for methods that accept values, or none.
  4. Return values for methods that pass values, or none.
  5. References and dependencies.
Here's an example of code that follows the 3/5 rule of comments:
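The class below, with illustrative names, shows the shape:

```apex
/**
* @description Calculates sample order totals. Assumes orders are created
*              only through the standard UI. (Illustrative example.)
* @author A. Developer, created 2016-03-04, modified TBD
*/
public class SampleOrderTotals {
    // Maximum orders per batch; Integer caps us at 2,147,483,647 --
    // reconsider Long if volumes grow
    public static final Integer MAX_ORDERS = 200;

    /**
    * @description Sums the sample quantities across a list of orders.
    * @param orders The sample orders to total; must not be null.
    * @return The total quantity, or 0 for an empty list.
    */
    public static Integer totalQuantity(List<Sample__c> orders) {
        Integer total = 0;
        // Accumulate each order's quantity; TBD: roll the total up to Account
        for (Sample__c s : orders) {
            if (s.Quantity__c != null) {
                total += s.Quantity__c.intValue();
            }
        }
        return total;
    }
}
```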

In this example, block comments appear at the start of the class and at the start of the methods while inline comments appear within methods and for named constants. The comments use Javadoc tags like "@description" to make the important topics easy to locate.  These comments even include "TBD" to indicate code that is incomplete and point out the limitation imposed by the choice of Integer as a data type rather than Long. Comments should always include information that may be reconsidered during the next phase of development.

We ask that developers and admins both be responsible for code comments because the admin should know how code affects data integrity and user experience.  Without that level of cooperation and understanding, admins have been known to implement validation rules that cause code to fail, and developers have been known to implement code that causes user experience to decline.

Tim Toady is right, there is more than one way to do it with code.  That's why we ask for comments that explain why the code was written the way it was.  And we follow Auntie Pat Tern's 3/5 rule to make sure we have comments in three areas of code and covering 5 required topics.  Better comments lead to better collaboration and easier maintenance of the code and the org where it runs.

Wednesday, February 24, 2016

Using Trailhead To Resolve Unknown Unknowns

According to my Auntie Pat Tern, the only trouble with my cousin Tim Toady is that "like every teenager, he don't know what he don't know.  And that wouldn't be so bad if his teachers were better at knowing what he don't know."  So I took a look at the code in my org to see if I could pinpoint what it is that the developers don't know.

Luckily, the Trailhead team is great at foreseeing what it is that people don't know about Salesforce and has created a Trail for that, no matter what that is.  And while the developers might find the Apex trails on their own, they might not recognize that there is more about Salesforce they need to understand.  In fact, there is an entire Trail on the Salesforce Advantage, the core technology that differentiates Salesforce from other CRM systems and other development platforms.

Learn, or review, Salesforce Technology Basics with the new Trailhead module.
One developer expressed concerns about "the Governor's limits", which indicates to me that they need to better understand multitenancy and performance.  Another expressed concern about Salesforce firewalls and security breaches, which indicates they need to learn about Salesforce security standards.  A third developer suggested we build custom objects for storing employee contact data and build content management and customer service solutions from scratch, which meant they need to learn more about fast app development with Salesforce customizations and third-party apps.

Happily, Salesforce has a new Trailhead module to help my group know more about all these topics, Salesforce Technology Basics.  Of course my developers are a lot like my cousin Tim Toady, they know they are a smart bunch but they often don't realize that there are things they don't know.  It's the Dunning-Kruger Effect, the unknown unknowns.

For those who are deeply familiar with Salesforce, Trailhead can help you avoid the "curse of knowledge" tendency to believe that if you know it then most people must know it as well.  Trailhead modules are thorough, starting with the basics and moving to more challenging information.  And they are entertaining, so anyone with a smattering of knowledge will find familiar topics fun to review and new topics informative.

With Trailhead, you can avoid the problem Auntie Pat Tern described for cousin Tim Toady, the unknown unknowns as well as the curse of knowledge.  Just encourage your team to earn Trailhead badges, which you can review in their Salesforce Community Profile pages, to bring the team up to speed even when you, or they, think they know what they need to know already.