Wednesday, September 28, 2016

Eliminate Cargo Cult Programming and Build a Stronger Team

My Auntie Pat Tern is one of those people who refuses to let family members wear cargo pants when we go out to dinner.  She says she doesn't want to participate in the 'cargo cult'.  She inspired me to look at how Cargo Cult Programming has affected our Salesforce org.

One of the best things about programming on the Salesforce platform is the wealth of online resources to help you solve problems. It can also be one of the worst things about programming on the platform if you aren't careful about the resources you choose to follow.  Some code samples use outdated practices; others are snippets taken out of context that may not behave as expected in yours.

In one case, the Apex documentation team included System.assert statements in a code snippet.  Our new developer later copied the snippet into a utility class.  The developer found a great resource but didn't fully understand the difference between System.assert and System.debug.  The former builds in a fatal error, appropriate for test code but not for our essential platform automations. Sniffing out this cargo cult programming helped us know what additional training could help our developers.
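To illustrate the difference the developer missed, here is a hedged sketch (the variable and query are hypothetical, not from the original utility class):

```apex
// Hypothetical example -- illustrates the behavior, not the original class.
Integer orphanCount = [SELECT COUNT() FROM Contact WHERE AccountId = null];

// System.debug only writes to the debug log; execution continues either way.
System.debug('Orphaned contacts: ' + orphanCount);

// System.assert throws a fatal exception when its condition is false,
// halting the transaction -- appropriate in a test method, dangerous in a
// utility class called by production automations.
System.assert(orphanCount == 0, 'Orphaned contacts found: ' + orphanCount);
```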

A more frequently found cargo cult is the factory class to build basic objects. A factory class is still far better than using the very outdated SeeAllData in your test code.  But test data factory classes add unnecessary work for coders. Salesforce offers a newer and better technique with Test.loadData, which I wrote about in July.  It lets us shift the work of creating and maintaining sample data from the developer to the administrator.  

Other cargo cult programming that pops up in our Salesforce org relates to data sharing and security rules.  We teach our developers early on to understand the difference between code that runs only in Triggers vs code that may be run by end-users via UI customizations like pages, flows and actions. And we encourage our administrators to monitor org security in our code during code reviews and test executions.

Hunting out and eliminating cargo cult programming helps us build cohesion among the team that maintains our Salesforce platform and better understanding between administrators and developers who now work more closely together. This helps us spread the effort of org maintenance across a team and make use of clicks and code for our projects. End users benefit and our org is more reliable and easier to maintain as well.

Friday, September 9, 2016

So Many Resources, So Little Time

I found Auntie Pat Tern burying a hammer in the flower bed. When I asked why, she said it was Tim Toady's favorite tool and he needed to learn something new. When all you have is a hammer, everything looks like a nail, so she wanted to make sure he found new ways of doing things. I like to make sure my developers are constantly learning new skills as well, so they don't rely on "golden hammer" practices that may have become outdated.
Become a specialist and then learn even more.

Learning to develop on the Salesforce platform is like finding a magpie's nest full of shiny objects. Where do you start and how do you figure out which gems are most valuable? Here's the path I take:
  1. Attend Dreamforce. The keynote and product demos will help you know what you want to learn. The sessions will help you get started on that learning. It offers the chance to talk to people who are on the same learning journey as you are and share tips and interests. I am presenting two sessions on developing with Apex this year: 8 Essential Apex Tips for Admins and Apex Trigger Essentials for Admins.
  2. Watch more Dreamforce sessions online. After the event, Salesforce makes sessions available by video online. Sessions you wish you had attended, topics you didn't know you were interested in until after Dreamforce, all are free to watch online.
  3. Follow up your learning in the Success Community.  Find your local user group and developer group and follow the online conversations to learn how other people are using Salesforce and the challenges they are overcoming. Even if you don't have a local group, you can join groups online to be part of the discussions.
  4. Keep up the learning with Trailhead. Salesforce offers online learning opportunities through Trailhead. Don't be intimidated by the SuperBadges, they offer a well defined path to learning about a particular skill in Salesforce. And if you get stuck the community is there to help.
  5. Two places to turn for online help from the community when you get stuck on any of your coding projects are the Developer Community and Stack Exchange. Search the boards to see if someone has already asked about the question you have, and if you can't find a discussion, go ahead and post your question. Once you gain more skills, you should find that you are answering more questions than you are asking.
The resources are there to help you along. Learning to develop, or learning to develop even better, can be fun with all of the resources Salesforce offers. You don't have to rely on golden hammers when there are opportunities to learn new and better ways to work with Salesforce.

Sunday, August 7, 2016

CSV Data: Commonly Surfacing Vexations in Data

According to Auntie Pat Tern, "Trying the same thing over and over without getting the results you want is enough to make anyone crazy." Unfortunately, that's the excuse Tim Toady uses when he doesn't do his homework because he's tired of not getting the results he wants from the effort.

When it comes to creating CSV files for use as unit test data, what should be an easy process can make you a bit crazy if you wind up getting unfamiliar errors. The following steps and potential errors may help:

Step 1: Export some production data.
Step 2: Delete all but a reasonable selection of data.
Step 3: Remove system fields.
Step 4: Make sure the date fields are in the form YYYY-MM-DD.
Step 5: Make sure date/time fields have a "T" between the date and time, with no space, such as 2016-08-07T01:40:39.
Step 6: Make sure the commas that appear in text fields haven't thrown off the number of columns.
Step 7: Make sure there are no blank rows with only commas and no values.
Step 8: Renumber IDs starting with 1.
Step 9: Remove any invalid IDs for related records (an Account ID of 000000000000000AAA appears for all top-level accounts, for example, and should be removed).
Step 10: Upload the CSV file as a Static Resource for your code to reference.

If you are creating a set of child records for an object that is related to another object, you can sort the parent and child data by the IDs of the parent to make sure you get records that match in both data sets. Use search and replace to redo the parent IDs to match the new IDs in the CSV file of parent records. For example, if you have a CSV for Accounts with IDs numbered 1-200, the related contact records must use 1-200 for the Account ID as well.
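Following the steps above, a renumbered parent/child pair of CSV files might look something like this (the object, field and file names are illustrative):

```
--- testAccounts.csv ---
Id,Name
1,Acme Corp
2,Global Media

--- testContacts.csv ---
Id,LastName,AccountId
1,Tern,1
2,Toady,1
3,Example,2
```

Note that both files start their IDs at 1, and every AccountId in the child file points at an ID that exists in the parent file.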

Bad CSV files might result in the following errors:

Potential Error | Likely Solution
Invalid ID value on line 2: 000000000000000 | Remove invalid IDs
Too many DML rows: 10001 | Load fewer records
CSV Parse error: '8/20/1959' is not a valid value for the type xsd:date | Format dates as YYYY-MM-DD
Duplicate ID value on line 81: null | Remove empty rows from the CSV file
CSV Parse error: '2011-09-07 01:00:31' is not a valid value for the type xsd:dateTime | Format date/time fields with "T" rather than a space between date and time
Validation Errors While Saving Record(s) | Fix erroneous data or IDs not starting with 1
System.UnexpectedException: Salesforce System Error | Remove stray commas throwing off columns
Static Resource not found | Make sure code refers to the Static Resource by name

Start with a small number of records using the fewest fields for testing the code or configuration changes you need to test. That way, you won't wind up like Tim Toady, who falls back on bad habits when errors occur with his first attempts to follow best practices.

Sunday, July 31, 2016

What A Load Of Business Data

Auntie Pat Tern thinks Superman's x-ray vision is stupid because "how would he ever know where he's supposed to look and where he should not bother focusing?" That got me wondering about the data and tests in my Salesforce org.  Even clicks-not-code developers should have automated tests that validate configuration changes with existing and expected data. So what's the best way to know which data those automated tests should use?

Some Salesforce developers like to write automated tests with "SeeAllData=True", an outdated and bad practice. A test that can see all data is not the same as a test that is smart enough to see the data that needs testing. For example, a good test will test positive scenarios, negative scenarios and extremes. In other words:
  • What if the data is exactly what we planned for -- records with all the right data in all the right places?
  • What if the data falls outside of expected norms -- records with missing fields or invalid data, for example?
  • What if the data is coming in from a data import operation or an integration and many records need to be processed?
Admins can follow a few simple steps to create automated tests to run against their configuration changes, and their efforts can be used by developers for better unit tests as well. Simply create sample data representative of the expected inputs. Then use code like the following to load that data in a unit test:
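A minimal sketch of such a test, assuming Static Resources named testAccounts, testContacts and testOpportunities (use the names of your own CSV resources):

```apex
@isTest
private class BasicDataTest {
    @isTest
    static void loadBasicSampleData() {
        // Test.loadData inserts the CSV records, firing any triggers,
        // validation rules and processes along the way.
        List<SObject> accts = Test.loadData(Account.sObjectType, 'testAccounts');
        List<SObject> conts = Test.loadData(Contact.sObjectType, 'testContacts');
        List<SObject> opps  = Test.loadData(Opportunity.sObjectType, 'testOpportunities');

        // If the org's automations reject the sample data, the loads above
        // fail -- which is exactly what this test is meant to catch.
        System.assert(!accts.isEmpty() && !conts.isEmpty() && !opps.isEmpty(),
            'Expected sample Accounts, Contacts and Opportunities to load');
    }
}
```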

Note that you will want to use data that corresponds to the configuration changes that you are testing. In this example, configuration changes and business process automation around Account, Contact and Opportunity record creation will be tested. You can create a similar test class for other related objects, including custom objects.

This code can be run as a unit test by itself to validate configuration changes, or it can be called by other test code to set up data for more complex unit testing related to other code in the org.

Next week, I will look into some of the errors that might occur when you create CSV files of sample data and how to avoid those errors.

But for now, Auntie Pat Tern has a point, who wants Superman looking at just any old thing with his x-ray vision when he ought to be focusing on information that's most helpful.

Monday, July 11, 2016

Enough Is As Good As A Feast

Auntie Pat Tern loves Sir Thomas Malory and Mary Poppins, and quotes both when she reminds us "enough is as good as a feast", especially when Tim Toady's eyes are bigger than his stomach at the buffet. So I thought I would look at Salesforce storage limits to see if we have a feast in our Enterprise Edition org (the entry-level edition for many businesses and the basis for the nonprofit license grant, even for orgs with the Nonprofit Starter Pack pre-installed).

Salesforce currently allocates 20MB of data storage per user in every EE org. But wait, there's more. Every org starts with a minimum of 1GB of data storage. So whether you are a nonprofit with a grant of 10 free licenses or a small business with 50 licenses, you have a minimum of 1GB for data.

More than enough is too much.
But how many records is that? Salesforce allocates 2KB for most records. Articles require 4KB, Campaigns require a whopping 8KB (4 times the size of most other records). That means a basic Salesforce org (Enterprise Edition or better) will have data storage capacity for about 500,000 records (excluding Campaigns and Articles). If you have 100,000 Accounts and 400,000 Contacts for 1-50 users, you are going to need more storage.

If you are using Person Accounts, or if you create a unique Account for each Contact, 250,000 individuals would result in 500,000 records since each individual would require both an Account and Contact record.

Custom objects behave the same as typical standard objects, taking 2KB per record regardless of the number or type of fields associated with those records. Even if you use a lot of rich text fields with large image files, a custom object record still requires only 2KB of data storage. The trick with rich text fields is that they are actually stored as files and so impact file storage rather than data storage.

Consider your storage needs carefully when you create sandboxes. Partial Copy sandboxes currently provide 5GB of data storage, that's more than you have in your production org if you have 50 or fewer users! But they don't offer much in terms of file storage. You may need a Full Copy sandbox for the convenience of copying all of your data at once and accommodating larger amounts of file storage and files from rich text fields.

But for standard data needs, use these basic formulas to calculate your data needs in GB:

( # of records X 2KB ) X 1/1024 X 1/1024
( # of Campaigns X 8KB ) X 1/1024 X 1/1024
( # of Articles X 4KB ) X 1/1024 X 1/1024

In the first formula, we multiply the number of records you have times 2KB, the data storage typically needed for records to get the required number of kilobytes of storage. Then we multiply by 1/1024 to convert from kilobytes to megabytes and again to convert from megabytes to gigabytes.

In the second formula, we multiply the number of Campaign records times 8KB because they consume a lot of storage compared to typical standard and custom objects. Then we do the same calculations to convert from KB to GB as described above.

The third formula is for calculating the storage needs of Articles since their records require 4KB rather than the standard 2KB. Otherwise it works like the first two, with the result expressed in gigabytes.
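As a worked example, the first formula applied to the 500,000-record figure mentioned above can be run as anonymous Apex:

```apex
// Storage for 500,000 typical records, converted KB -> MB -> GB.
Decimal records = 500000;
Decimal gb = (records * 2) / 1024 / 1024;
System.debug(gb); // roughly 0.95 GB -- just under the 1GB minimum allocation
```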

Keep in mind that Person Accounts and 1x1 Contact to Account data models will create a Contact and an Account record, so every individual is represented by two records rather than just one.

Salesforce offers options for purchasing additional storage space. You can also upgrade to Performance, Unlimited or similar Editions that offer six times as much data storage per user compared to Enterprise Edition (orgs with fewer than 10 licenses may not see the benefits of the additional storage).

But, as Auntie Pat Tern would say, it is good to be grateful for what you have and know when you've got enough for your share. Salesforce makes it easy to calculate your needs.

Saturday, June 11, 2016

More Better Bots

You know Auntie Pat Tern loves Jeopardy!, the TV show with all the answers, but we have an old friend named Eliza who believes we can find all the answers we need by asking ourselves the right questions.  My developers have been asking some of the right questions since their recent trip to the TrailheaDX conference (a conference for Salesforce developers, similar to past conferences in 2007 and 2010).

First of all, the developers asked what we should do about all of our existing Visualforce pages now that users are craving Lightning Experience.  "What can we do to give them that look and feel without a wholesale rewrite of our UI?" they asked.  Their questions naturally led them to the answer that we should start simply with applying the Lightning Design System to what we have in Visualforce, as a first step.

Next, the developers asked what was going on with Salesforce and AI.  "What's the deal with bots?" they wondered.  While they mulled this over, I reached out to Eliza to see what she had to say:
> Hello. My name is Eliza.  How may I help you?
* Eliza, what's the difference between AI and machine learning?
> Ahh.  Please continue...
* Really, I have three issues: Artificial Intelligence, Machine Learning and Natural Language Processing.
> What does that suggest to you?
* Like Steve Wozniak said at TrailheaDX, Artificial Intelligence is nothing without Machine Learning.  Natural Language Processing benefits from the ability to learn better responses.
> I'm not sure I understand you fully.
* Well, GA Tech has done some cool stuff with bots.  One learned to answer frequently asked questions for students.
> Can you elaborate on that?
* For a class that prompted over 10,000 online questions per semester, the professor used Machine Learning and trained a bot using questions and answers from previous semesters.  The bot was set to respond to those familiar, frequently asked questions and to only respond to new questions if it calculated a 97 percent or higher confidence level for its answer.
> What does that suggest to you?
* It suggests that I can blend Natural Language Processing and Machine Learning to provide automated, interactive responses to FAQs.
> Say, do you have any psychological problems?
* Um, I do sometimes chat with bots.
Having a bot that can help customer service agents provide the best responses to Cases would help our service team reply quickly and consistently.  We also might want a bot to help provide strategies for the sales team, following the department's best practices. Bots, with the ability to learn from our org's data and best practices, can also help us guide new employees and reduce their frustration.

Auntie Pat Tern doesn't really like the term "Artificial Intelligence" because she believes that using code to process our human questions and requests is genuine intelligence.  And our old friend Eliza is curious to know how you feel about that.

Tuesday, May 17, 2016

SeeAllData = Fail

According to Auntie Pat Tern, it takes 21 days to break a bad habit, but Salesforce developers have had about that many API versions to break the bad habit of writing unit tests with "SeeAllData=true".

In most cases, this clause alone can render tests completely useless. Unit tests should prove that code functions according to specifications by asserting proper database manipulations--create, read, update, delete operations--as well as Visualforce navigation.

I looked at unit tests in my org and found this example of a test that failed to achieve its purpose:
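The original test isn't reproduced here, but a reconstruction of the pattern, with hypothetical object and field names, looks something like this:

```apex
// Reconstruction of the antipattern -- names are hypothetical.
@isTest(SeeAllData=true)   // <-- the problem: the test sees live org data
private class SampleOrderTest {
    static testMethod void testCreateSampleOrder() {
        // Relies on a real record that happens to exist in the org today.
        Contact c = [SELECT Id FROM Contact WHERE LastName = 'Tern' LIMIT 1];

        Sample_Order__c order = new Sample_Order__c(
            Contact__c = c.Id,
            Sample_Type__c = 'Standard');
        insert order;

        System.assertNotEquals(null, order.Id, 'Expected the order to save');
    }
}
```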

This test should be asserting that the data for ordering samples can be created with just a contact and sample type defined.  Unfortunately the test relies on existing, live, org data rather than test 'silo' data because of "SeeAllData=true" in the first line. There are easy methods for unit tests to create their own test data without relying on live org data.

We encountered the following problems because of using "SeeAllData=true":
  • It required us to maintain test data among our real data.
  • When the test data was changed during data cleanup (in one case, Pat Tern's Account was deleted), the tests failed even though functionality was unchanged.
  • Tests were not reliable between Sandboxes and Production orgs due to data differences rather than actual functionality.
  • Apex Hammer tests could not automatically verify our org's functionality before each Salesforce release, since Hammer tests are blind to live org data.
In the rare case where a specific piece of data may be required for your code to behave, consider using custom metadata types instead of Salesforce objects. Trailhead can help you learn more about how they allow you to move metadata and records between orgs and test functionality without needing to see all data in the org.

Saturday, May 7, 2016

Pay Down That Technical Debt

Auntie Pat Tern says we should pay with cash, or at least pay our credit cards off every month because "debt," she says, "is like fresh fish, it seems great at first but gets old fast and then it really stinks."

It's Hammer Time!
So I looked at my Salesforce org to assess the level of technical debt we had accrued and to plan how we would start paying down that debt.

Sometimes technical debt comprises the shortcuts or mistakes that no one has time to correct when a project needs to be completed. Technical debt also develops over time.  As businesses mature and change, their technical solutions can fall behind and create technical debt.

A Salesforce org can be especially prone to this since Salesforce offers a wealth of new features for all orgs three times each year and not all companies make an effort to rewrite their technical solutions based on these new features. Luckily, Salesforce offers some tools to help us assess some of the technical debt associated with our code.

First, I checked our Hammer Test Status.  The data silo gauge revealed that we still had old (and sadly a few new) unit tests that were using "seealldata=true", which we needed to update.  It also revealed a couple of tests that were failing and so needed some attention.

Next I checked the API version on our Apex Classes.  Any class that is 10 or more versions behind the current API version needed a review, both on the code side and on the process side.

The third step for me was reviewing our org's documentation for outdated information on our code.

My org review also included configuration, managed packages and basic processes, as well.

We may not be able to pay down all of our technical debt right away, but a monthly review and reminder of how it accumulates will help us develop better in the future.  And as Auntie Pat Tern says, "It's not just pay me now or pay me later, you know.  It's pay me now or pay me later with additional compounded interest!"

Sunday, April 10, 2016

Putting Governor Limits To The Test

My Auntie Pat Tern is pretty accepting of cousin Tim Toady's behavior. "He's a teenager, after all, there's no better time for him to test his limits," she explained. So I decided to ask my developers to test their limits in Salesforce.

One Master object with 13 Detail objects, 
some of which are Masters in other relationships as well.
I wanted them to see for themselves how governor limits benefit the overall performance of the code they write. And I wanted them to experiment with ways to push those limits by trying to break things.

They built Processes, wrote triggers, and configured some unwieldy objects that I would never want to see in production all in an effort to push good performance to the very edge of being bad.

Some limits include object relationships.
A couple of these experiments proved that what they understood about limits was untrue. In one example, when it comes to Master-Detail relationships on custom objects, the documentation describes a limit of 2^3. That does not mean eight relationships here. Instead, it indicates that an object can have two M-D relationships and those relationships can be three levels deep.

Take the example Parent <-- Child <-- GrandChild <-- GreatGrandChild where all relationships are Master <-- Detail. Some of the limits on this relationship structure are as follows:

  • Parent cannot have a new Master (eg GrandParent) because of the limit on how deep the relationship levels can be. 
  • GreatGrandChild does not show up as available to be a Master in other relationships because we are limited to three levels deep.
  • Child cannot have a new Master because it already has two M <-- D relationships (even though only one of those points to a Master).
  • Child can have new Details, that is, new GrandChild-level objects can be created as Details for Child even though Child already has two M <-- D relationships.
  • Many Child-level objects can be created as Details for Parent (we stopped at over 50). 
  • A Child-level object cannot be used as a junction object between records of a single Parent-level object. M <-- D relationships cannot be immediately self referencing like that.
  • GrandChild-level and GreatGrandChild-level objects can have the same Master object as their Master, eg. GrandChild can point to Child and Parent as Masters even when Child already points to Parent as its Master. We daisy-chained six objects this way before hitting limits on the depth of the relationships.
  • Child-level objects with two relationships pointing to Master objects cannot be Masters to new GrandChild-level objects. An object can have two Masters or it can have one Master and many Detail relationships or it can have no Master and many Detail relationships.

It was a fun exercise and demonstrated how limits benefit performance and how hard some of them can be to break. It gave the developers a chance to challenge their assumptions, be creative and gain a better understanding of the implications of limits when it comes to writing better code.

Secretly, Auntie Pat Tern believes that testing limits can help us appreciate why limits are important, but she wouldn't tell Tim Toady that.

Sunday, April 3, 2016

Painless Removal Of Boat Anchors

I asked my Auntie Pat Tern why she has a tattoo of a boat anchor on her forearm and she said, "Boat anchors symbolize hope, but I'm considering getting this one removed." So she didn't have it to enhance her resemblance to Popeye like cousin Tim Toady always says.

It may have been hope that also inspired the boat anchor I found in the code in my org. In technical circles, some people only think of boat anchors as that outside technology they got stuck with because their boss bought it without conducting a technical review. But if you aren't constantly conducting a technical review of your own code, you can get stuck with boat anchors there as well.

In my org, I found something like the following:
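A reconstruction of the pattern, with hypothetical class and field names, looks something like this:

```apex
// Reconstruction of the "boat anchor" -- names are hypothetical.
public class OrderDiscountHelper {
    public static Decimal calculateDiscount(Opportunity opp) {
        // Decimal discount = 0;
        // if (opp.Amount > 10000) {
        //     discount = opp.Amount * 0.05;
        // }
        // return discount;
        return null;   // all the real logic is commented out
    }
}
```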

All code that had once handled business logic was commented out, to the point where the class literally did nothing but return a null value. I think this boat anchor represented the developer's hope that the original code might somehow be useful later. If that's the hope, the code can be stored outside of our production org.

To remove it, we just needed to correct any references and delete the class and its related unit test. Salesforce offers multiple ways to accomplish this using the IDE, the developer Workbench or the Migration Tool, all of which are much less painful than Auntie Pat Tern's tattoo removal.

With Workbench, for example, you can simply mark a class or trigger as deleted or as inactive to remove it from production using the following steps after downloading the class or trigger files:
  1. Edit the XML meta file for the class or trigger to change its status from <status>Active</status> to <status>Deleted</status> (for classes) or <status>Inactive</status> (for triggers)
  2. Put both the .xml meta file and the class or trigger file into a folder named "classes".
  3. Create a package.xml file to list the class or trigger to be deleted or made inactive.
  4. Place both the package.xml file and the classes folder into another folder inside a zip file.
  5. Specify this zip file after you select Migration>Deploy in Workbench.
A great way to generate a package.xml file to get you started is to use Setup>Create>Packages in Salesforce and add the classes or triggers that need to be deactivated or deleted. This package can then be specified for download in Workbench using Migration>Retrieve. The zip file will need to be changed as described above to deploy. (For step-by-step instructions, click here.)
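For illustration, the two edited files for a class might look like this. The class name HelloWorld is hypothetical, and the API version shown is just an example:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- HelloWorld.cls-meta.xml, with status changed from Active -->
<ApexClass xmlns="">
    <apiVersion>36.0</apiVersion>
    <status>Deleted</status>
</ApexClass>
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- package.xml, listing the class to deploy with its new status -->
<Package xmlns="">
    <types>
        <members>HelloWorld</members>
        <name>ApexClass</name>
    </types>
    <version>36.0</version>
</Package>
```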

Like Auntie Pat Tern, be on the lookout for boat anchors and consider removing them when they pop up during your org's ongoing technical review.

Friday, March 11, 2016

Static V. Instance, or, How Can A Variable Be Unchanging?

I asked my Auntie Pat Tern why she loves to watch Jeopardy! and she said, "if I watch it enough, I will see answers to all the questions in the world." So I looked through the questions that my developers ask to see if they could be answered by Jeopardy!

One of the questions that comes up frequently is whether use of the 'static' keyword is an antipattern or a best practice. Of course, if it's used incorrectly, it is an antipattern, so understanding static versus instance methods, variables and initialization code is important.
Final Jeopardy! means there won't be
infinite answers this time.

Taking Jeopardy! as our example, the classic game show that contestants win by providing the most correct responses, we can see that some things don't change from week to week. The show's name, the show's host, these are constants and so of course are static because "static" means unchanging. A static response can be provided for questions about the host and show name.

Every show has three contestants and to create a show, its producers need to know who the three contestants will be. So they conduct tryouts to choose and schedule competitors. The names of the competitors change from night to night, so the list of competitors is variable, but show producers have to know who is scheduled before the show is created, so it is a static variable. A static response can be provided to the question of who will be the competitors for any specified date.

When the show is filmed, the responses that one contestant provides as part of the action of the show are defined by the instance of the show and depend on the inputs the contestant receives from the show and the interactions the contestant has with other contestants on that particular show. So these responses are non-static and depend on the instance of the show.

As an example let's consider three possible questions about the upcoming Jeopardy! college championship round:
  1. Who's the host? We expect one answer: 'Alex Trebek'. 
  2. Who are the competitors? We expect one answer: the three college students, depending on who is chosen for a specified day. 
  3. What is the contestant's 'question' and score for the first input in the first category? The response to this question depends on actually seeing the college championship round in action, knowing what the inputs are and seeing which contestant acts first and the result of that particular action.  
So the first two are static and the last one non-static and specific to its instance.  

In Apex, you can see an example of static vs. instance in the Date class. This class offers both static and instance methods. Let's consider three possible questions we could ask today:
  1. What is the date today? We expect one specific answer for this question, and it doesn't need any additional information for us to ask it, for example: Date.today();.
  2. Is it a leap year? We expect only one answer for this question, depending on a specified year, for example: Date.isLeapYear(2016); or Date.isLeapYear(2023);.
  3. What is the year? The response depends on knowing the date in question -- we need an instance of Date in order to figure out the year of that specific date, for example: Date yearLater = Date.today().addYears(1); Integer nextYear = yearLater.year();.
Again, the first two are static methods while the third, determining the year, is specific to its instance. You might wonder, for example, what happens in Apex when you add a year to a leap day, will Apex give us February 28 or March 1 of the next year (or null)? You could run the following code to create an instance specific to a leap day and test it out:
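A sketch of that experiment as anonymous Apex (run it yourself to see the answer):

```apex
// Create an instance of Date for a leap day and add a year to it.
Date leapDay = Date.newInstance(2016, 2, 29);
Date yearLater = leapDay.addYears(1);

// Does Apex give us February 28, March 1, or null? The debug log will tell.
System.debug('One year after ' + leapDay + ' is ' + yearLater);
```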

The above code primarily uses instance methods, but using the 'static' keyword in Apex can be particularly useful when it comes to writing code called by Triggers. Since a single Trigger can result in a cascade of other Triggers firing, we may need to keep track of some information across all the business process automation associated with the cascade of Triggers. Having a separate class with a static method or variable for all of the Triggers allows us to share specific data across the code executing for multiple Triggers.

For example, a separate class that contains a static variable can indicate the number of times a particular automation has run to help avoid an infinite recursion.
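A minimal sketch of that idea, with hypothetical names (static variables in Apex are shared across all code in a single transaction, then reset):

```apex
// Shared counter that any trigger handler in the transaction can check.
public class TriggerGuard {
    public static Integer runCount = 0;
}
```

A trigger handler could then guard its logic with something like `if (TriggerGuard.runCount == 0) { TriggerGuard.runCount++; /* run the automation once */ }` so a cascade of updates does not re-fire the same automation endlessly.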

Infinite recursions need to be avoided in code, but Auntie Pat Tern may be correct that infinite episodes of Jeopardy! will contain all the answers to all the questions in the world.

Friday, March 4, 2016

Start Documenting Rather Than Accepting "No Comment"

My Auntie Pat Tern tried to convince my cousin Tim Toady that he should be using the 3/5 rule of essay writing he'd been taught -- three ideas, five paragraphs -- but, as usual, Tim insisted there is more than one way to do it (TIMTOWTDI). So I looked through the code in my Salesforce org to see if my developers were following the 3/5 rule for comments or if TIMTOWTDI had gotten into our code.  Unfortunately, I mostly found code with no comments at all.

Code without comments is like a research paper without a thesis statement to detail its purpose.  It's like doing something over and over without understanding why, since comments should clarify why the code does what it does.  It's like burying treasure without leaving a proper map, since good comments point to valuable resources the code uses or makes available.

Code comments should follow the 3/5 rule of comments:  there are three places where comments should occur and five specific topics that need to be addressed.

The three places where comments should be found in Apex code are:
  1. Block comments at the beginning of classes.
  2. Block comments at the beginning of methods.
  3. Line-level comments for constants, loops, conditions, and wherever clarification is needed.
The five topics that should always be found in block comments are:
  1. Description of purpose and assumptions.
  2. Author/date for creation and modification.
  3. Parameters for methods that accept values, or none.
  4. Return values for methods that return values, or none.
  5. References and dependencies.
Here's an example of code that follows the 3/5 rule of comments:
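A sketch in that spirit, with illustrative names, dates, and values:

```apex
/**
 * @description Calculates loyalty points for qualifying orders.
 *              Assumes totals fit in an Integer; the choice of Integer
 *              rather than Long is a limitation to revisit next phase.
 * @author A. Developer, 2016-03-04 (created)
 */
public class LoyaltyPointsUtil {

    // Points awarded per qualifying order, per the current loyalty program
    private static final Integer POINTS_PER_ORDER = 10;

    /**
     * @description Totals points for a count of qualifying orders.
     * @param orderCount number of qualifying orders; null or negative counts as zero
     * @return total points earned; TBD: overflow handling for very large totals
     */
    public static Integer calculatePoints(Integer orderCount) {
        // Guard against missing or invalid input before doing the math
        if (orderCount == null || orderCount < 0) {
            return 0;
        }
        return orderCount * POINTS_PER_ORDER;
    }
}
```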

In this example, block comments appear at the start of the class and at the start of the methods while inline comments appear within methods and for named constants. The comments use Javadoc tags like "@description" to make the important topics easy to locate.  These comments even include "TBD" to indicate code that is incomplete and point out the limitation imposed by the choice of Integer as a data type rather than Long. Comments should always include information that may be reconsidered during the next phase of development.

We ask that developers and admins both be responsible for code comments because the admin should know how code affects data integrity and user experience.  Without that level of cooperation and understanding, admins have been known to implement validation rules that cause code to fail, and developers have been known to implement code that causes the user experience to decline.

Tim Toady is right: there is more than one way to do it with code.  That's why we ask for comments that explain why the code was written the way it was.  And we follow Auntie Pat Tern's 3/5 rule to make sure we have comments in three areas of code covering five required topics.  Better comments lead to better collaboration and easier maintenance of the code and the org where it runs.

Wednesday, February 24, 2016

Using Trailhead To Resolve Unknown Unknowns

According to my Auntie Pat Tern, the only trouble with my cousin Tim Toady is that "like every teenager, he don't know what he don't know.  And that wouldn't be so bad if his teachers were better at knowing what he don't know."  So I took a look at the code in my org to see if I could pinpoint what it is that the developers don't know.

Luckily, the Trailhead team is great at foreseeing what it is that people don't know about Salesforce and has created a Trail for that, no matter what that is.  And while the developers might find the Apex trails on their own, they might not recognize that there is more about Salesforce they need to understand.  In fact, there is an entire Trail on the Salesforce Advantage, the core technology that differentiates Salesforce from other CRM systems and development platforms.

Learn, or review, Salesforce Technology Basics with the new Trailhead module.
One developer expressed concerns about "the Governor's limits", which indicates to me that they need to better understand multitenancy and performance.  Another expressed concern about Salesforce firewalls and security breaches, which indicates they need to learn about Salesforce security standards.  A third developer suggested we build custom objects for storing employee contact data and build content management and customer service solutions from scratch, which meant they need to learn more about fast app development with Salesforce customizations and third-party apps.

Happily, Salesforce has a new Trailhead module to help my group know more about all these topics, Salesforce Technology Basics.  Of course my developers are a lot like my cousin Tim Toady: they are a smart bunch, but they often don't realize that there are things they don't know.  It's the Dunning-Kruger Effect, the unknown unknowns.

For those who are deeply familiar with Salesforce, Trailhead can help you avoid the "curse of knowledge" tendency to believe that if you know it then most people must know it as well.  Trailhead modules are thorough, starting with the basics and moving to more challenging information.  And they are entertaining, so anyone with a smattering of knowledge will find familiar topics fun to review and new topics informative.

With Trailhead, you can avoid the problem Auntie Pat Tern described for cousin Tim Toady, the unknown unknowns as well as the curse of knowledge.  Just encourage your team to earn Trailhead badges, which you can review in their Salesforce Community Profile pages, to bring the team up to speed even when you, or they, think they know what they need to know already.

Sunday, February 21, 2016

Code Smell Leads To Improved Interprocess Communications

My Auntie Pat Tern said she always knows when her teenage kids have guests over and her friend wondered how.  Is it the noise, the mess, the smell? "No," said Auntie Pat Tern, "I just ask them and they tell me."  So I thought I would look through the code in my org and pass this straight-forward idea for communication on to my developers.

When code review uncovers a lot of small problems, this can often bring larger problems to the surface. "Code Smell" refers to small problems that may reveal bigger concerns in code. I previously gave an example of code that uses a 'poltergeist' object to pass data from an external system into a custom Salesforce object before creating the data in the required Salesforce object.  The same code also uses 'hard coded' values that are not named constants.  And that "Code Smell" reveals the use of a database field for interprocess communications, setting a field value to 'success' when the record is created rather than relying on more direct means of communication.

Using the Apex Database class, and the save results it returns, provides a direct means of communication that would make Auntie Pat Tern proud.
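As a sketch (record and variable names are illustrative), Database.insert with allOrNone set to false returns SaveResult objects the calling code can inspect immediately, instead of polling a 'success' field later:

```apex
List<Sales_Order__c> newOrders = ordersFromExternalSystem; // hypothetical list

// allOrNone=false: individual failures don't roll back the whole batch
Database.SaveResult[] results = Database.insert(newOrders, false);

for (Database.SaveResult sr : results) {
    if (sr.isSuccess()) {
        // Success is reported directly -- no status field required
        System.debug('Created record ' + sr.getId());
    } else {
        // Failures come back on the same result object
        for (Database.Error err : sr.getErrors()) {
            System.debug('Insert failed: ' + err.getMessage());
        }
    }
}
```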

Sunday, February 14, 2016

Poltergeist Busters Get A Call

Auntie Pat Tern mentioned that she loves a good ghost story, so I looked through my code and told her about the Poltergeist object I found there.

Poltergeist objects exist simply for the purpose of passing data from one place to another and are essentially useless.  These objects don't serve a clear function relative to business or technology rules; instead, they just take up space.  Poltergeists are unlike Leads, which are required by business rules to hold data until it can be verified and converted to Contacts.  Leads aren't Poltergeists because they serve the business rules related to data verification.

The object I found in my code was no Lead, it was a troublesome Poltergeist.
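A sketch of the anti-pattern (object and field names are illustrative stand-ins for the original code):

```apex
// Poltergeist: a record created only to shuttle external data onward
Order__c ghost = new Order__c(
    External_Id__c = extId,     // values received from the external system
    Amount__c      = extAmount
);
insert ghost;

// The values are then copied into the object the org actually uses
Sales_Order__c salesOrder = new Sales_Order__c(
    External_Id__c = ghost.External_Id__c,
    Amount__c      = ghost.Amount__c
);
insert salesOrder;
```

The refactor simply builds the Sales_Order__c record from the external values directly and drops the Order__c record altogether.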
In the above code, the Order object serves no purpose other than passing data from an external system into Salesforce.  Since there is no verification process defined, there doesn't need to be a holding bin for data, and it could be passed directly into the Sales Order object that already existed in our Salesforce org. We can simply refactor the code to write directly to the third-party object rather than the Poltergeist.

Auntie Pat Tern said she's seen a lot of weird stuff, but the code in my org might be the scariest of all.

Thursday, February 11, 2016

When To Soft Code Or Not To Soft Code

My Auntie Pat Tern got mad at my no-good cousin when he told her he changes phone numbers every time he gets a new burner phone; she warned us to make sure we do business by the rules.  So I looked through my code and shared her suggestion with my developers.

It turns out they tried to correct a problem with badly implemented hard-coded values by soft-coding some values that represented software architecture decisions and similar business rules.

Previously, I showed you the example of a hard-coded username being assigned as record owner, which caused two problems in the org.  The first problem was that the username needed to belong to an API-only user rather than to an employee who might leave the company and have the username deactivated.  The second problem was that the username was hard-coded deep in the code rather than being represented as a constant at the beginning of the class.

The developer who tried to fix these problems decided to soft-code the username as follows:
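A sketch of that soft-coded approach, assuming a hypothetical hierarchy Custom Setting named Integration_Settings__c with a text field Owner_Username__c:

```apex
// Read the username from a Custom Setting instead of a constant in code
Integration_Settings__c config = Integration_Settings__c.getOrgDefaults();
String ownerUsername = config.Owner_Username__c;

// Look up the user and assign ownership of the incoming record
User integrationUser = [SELECT Id FROM User WHERE Username = :ownerUsername LIMIT 1];
incomingRecord.OwnerId = integrationUser.Id; // 'incomingRecord' is illustrative
```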

Using Custom Settings allows the username to be changed outside of code at any time, multiple times.  The above example does not solve the first problem we had, though. Unfortunately, soft-coding the username defeats our business rule requiring data coming from integrations to be owned by a user or Queue specific to that integration.  In other words, our business rule required the use of a constant rather than a soft-coded value.

We had to take that code back to the drawing board one more time because, as Auntie Pat Tern demonstrates, code should behave according to the rules of business, and enforce those rules automatically.

Monday, February 8, 2016

Magic Numbers and Hard Coded Values

My Auntie Pat Tern recently borrowed her friend's phone and noticed the recently called phone numbers didn't have names assigned, so she called everyone to remind them to enter contact names with their phone numbers.  I looked through some of the code in my org and decided to share her advice with my developers.  I found code that made use of Magic Numbers and hard-coded values, both of which make the code difficult to maintain.

Imagine Auntie Pat Tern's frustration when she wanted to find a particular number and couldn't because it wasn't listed under any of the names she expected to see.  And the contact list was empty.  She couldn't find the number she wanted and she couldn't tell what any number was for without a contact list.  In programming, the variable and constant declarations are like the contact list and allow us to give understandable names to numbers we use in the code. Numbers that appear without names and descriptions are known as "Magic Numbers" because they appear and seem to work by magic, such as the following example:
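A hedged illustration, with made-up values and names, showing the same logic before and after naming:

```apex
public class RenewalPricing {
    // After: named constants tie each value to the business rule it automates
    private static final Decimal RENEWAL_DISCOUNT_RATE = 0.43;
    private static final Integer FOLLOW_UP_DAYS = 90;

    public static Decimal discountedPrice(Decimal basePrice) {
        // Before: 'return basePrice * 0.43;' -- a Magic Number
        return basePrice * RENEWAL_DISCOUNT_RATE;
    }

    public static Date followUpDate() {
        // Before: 'return Date.today().addDays(90);' -- another Magic Number
        return Date.today().addDays(FOLLOW_UP_DAYS);
    }
}
```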

Imagine trying to maintain code like this: would you know where to change a value, or how often that value is repeated in the code for the same use?  Does 0.43 need to be changed to 0.435 every time it occurs in this class, or just in a single line of code?  These values should be named according to the business logic being automated and should be declared as constants for more readable and maintainable code.

Magic Numbers aren't the only hard-coded values I found in the code. Deep within the bowels of one utility class I found a user name hard-coded as the owner of all records generated by our integration with another database.
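A sketch of what that looked like (the username and object names are stand-ins, not the real ones):

```apex
// Buried in a utility method: a real person's username, hard-coded
User owner = [SELECT Id FROM User
              WHERE Username = 'former.employee@example.com' LIMIT 1];
for (Sales_Order__c rec : importedRecords) { // 'importedRecords' is illustrative
    rec.OwnerId = owner.Id;
}
update importedRecords;
```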

In the example above, you can see the hard-coded username, but you can't see that it belonged to a real user, someone who had left the company and so was deactivated.  A better approach would be to assign an owner that is an API-only user, a bot user specific to this integration rather than a real person, and to define that username as a constant.

Static variables declared at the beginning of a class provide an ideal location for the constants associated with business logic.  There they sit near the class description, the comment that describes the business case being automated by the class, and make maintenance a breeze.

Naming values and putting them where others expect to find them will help you avoid problems like what happened with my Auntie Pat Tern.

Monday, February 1, 2016

Avoiding the Negative for Clear Code

My Auntie Pat Tern recently reminded me to avoid double negatives and use positive language as often as possible. So I looked through some of the code in my org and decided to share her advice with my developers. When we write conditional statements using NOT (or "!"), we are using negative syntax which is more difficult to understand than positive syntax. Here's an example written in negative syntax, then rewritten in positive syntax:
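A sketch of the pattern, with illustrative names:

```apex
String customerName = incomingName; // hypothetical value to check

// Negative syntax: the reader has to untangle the NOTs
if (!(customerName == null) && !(customerName == '')) {
    processOrder(customerName);
}

// Positive syntax: the same condition, easier to read
if (customerName != null && customerName != '') {
    processOrder(customerName);
}
```

And with a basic knowledge of String methods, `String.isNotBlank(customerName)` collapses the whole check into a single positive call.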

The conditions evaluate the same, but I find the negative syntax more difficult to read during code review and maintenance. Note that it can be improved upon further with a basic knowledge of String methods.

The first time I encountered super negative syntax, I thought the developers who used it were making a game of being abstruse.  But knowing that the two programmers were friends led me to discover that this was in fact a case of "cargo cult programming" where they were sharing and reusing a bad pattern because they hadn't taken the time to pick apart what the code needed to accomplish. Once they better understood what the code was doing, they learned to write it more clearly.