Salesforce and Amazon SQS


A couple of us are working on a new project that involves Salesforce CRM. We recently ran into a few technical challenges around keeping a subset of the data stored in the cloud in sync with our internal application servers. These include:

  • Limited number of API calls we can make per 24-hour period. Salesforce charges a per-user license fee, and each license grants a certain number of API calls. If the project scales as we expect it to, we could easily exceed our allotment.
  • Keeping development data in sync with our internal development server and test data with the test server.

We came up with two initial solutions:

  1. Have our internal systems poll the cloud for changes.
  2. Have the cloud send a message to our systems informing us of a change.

The first approach requires a balancing act: how up-to-date the information needs to be on our side vs. how many API calls we can afford. If we poll every second, we consume 86,400 calls per day – more than we will probably have allotted when we launch. We also can’t spend 100% of our API calls on polling: once we detect that something needs to sync, we need additional calls to download the changed objects, and we periodically need calls to send data to the cloud as well.
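The arithmetic behind that budget is simple enough to sketch (the intervals below are just examples):

```java
public class PollingBudget {
	// API calls consumed per day when polling once every intervalSeconds.
	static long callsPerDay(long intervalSeconds) {
		return 24L * 60 * 60 / intervalSeconds;
	}

	public static void main(String[] args) {
		System.out.println(callsPerDay(1));   // every second: 86400 calls per day
		System.out.println(callsPerDay(30));  // every 30 seconds: 2880 calls per day
	}
}
```

Even backing off to one poll every 30 seconds still burns thousands of calls a day on polling alone.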

The second approach seems to be the better one, as outbound messages don’t count against daily API limits. We also anticipate that the synced object types would only ever incur a few hundred changes per day, far fewer than the thousands of polling calls we would have to make. The problem then becomes how to implement the sync in a way that works in our production ‘org’, our ‘sandbox’, and the ‘developer accounts’ that we developers are using. The way our web stack is structured, however, third parties can only reach our production web environment. We could come up with our own way to queue messages in production intended for the other environments, but then we would have to be even more concerned about security, and we would likely be duplicating something that the marketplace already provides.

It turns out someone does: Amazon.

I’ve been familiar with Amazon’s cloud computing offerings for some time now, but had never had the chance to use them with previous employers. Amazon has a service known as SQS, or Simple Queue Service, which “offers a reliable, highly scalable, hosted queue for storing messages as they travel between computers.”

With SQS you can:

  • Send up to 64KB of text per message
  • Persist messages for up to 14 days
  • Create unlimited queues (i.e., one queue for each of our environments)
  • And a lot more

Furthermore, SQS is cheap: $0.01 per 10,000 requests (a send and a receive count as separate requests), and about $0.10 per GB of transfer. Far cheaper than buying additional Salesforce licenses.
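At those rates, a back-of-the-envelope check shows just how negligible the cost is (the monthly volume is an assumption for illustration):

```java
public class SqsCost {
	// $0.01 per 10,000 requests; a send and a receive count as two requests.
	static double requestCostDollars(long messages) {
		return messages * 2 * 0.01 / 10000.0;
	}

	public static void main(String[] args) {
		// A generous 1,000 changes per day over a 30-day month:
		System.out.println(requestCostDollars(1000L * 30)); // 0.06
	}
}
```

Six cents a month, before transfer charges on messages that are at most 64KB each.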

Salesforce has its own language known as Apex, which runs on their servers and has a syntax very similar to Java’s. SQS messages are fairly simple to construct, with both Query and SOAP-based APIs available. The one complexity is signing a request: each SQS request carries an HMAC signature, computed with a secret key you establish with Amazon, that prevents requests from being forged.
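Before looking at the Apex, it may help to see the signature scheme itself in plain Java (which Apex closely resembles): HMAC-SHA1 over a string to sign, with the raw digest Base64-encoded. This is a minimal sketch; the string-to-sign layout, account number, queue path, and key below are made up for illustration.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class SqsSigner {
	// HMAC-SHA1 the string to sign with the secret key, then Base64-encode it.
	static String sign(String stringToSign, String secretKey) {
		try {
			Mac mac = Mac.getInstance("HmacSHA1");
			mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
			byte[] digest = mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));
			return Base64.getEncoder().encodeToString(digest);
		} catch (Exception e) {
			throw new RuntimeException(e);
		}
	}

	public static void main(String[] args) {
		// Illustrative only: a verb, host, queue path, and query string.
		String toSign = "GET\nqueue.amazonaws.com\n/123456789012/dev-queue\nAction=SendMessage";
		System.out.println(sign(toSign, "not-a-real-secret"));
	}
}
```

The signature then goes into the request as one more query parameter, URL-encoded.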

The SQS implementation is quite simple. An Apex trigger exists on each object that we need to sync. That trigger enqueues a message containing the record type, the ID of the changed record, and a timestamp. The message goes to the SQS queue that corresponds to the environment (dev, test, prod, etc.). A scheduled task on our end polls SQS every few seconds for changes.
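The receiving half of that pipeline can be sketched as follows. The real message source would be an SQS ReceiveMessage call; here it is abstracted behind a Supplier so the drain loop can be shown without network access, and all names and message contents are illustrative rather than from the actual project.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class ChangePoller {
	private final Supplier<String> source;   // returns null when the queue is empty
	private final Consumer<String> handler;  // e.g. fetch the changed record by ID via the API

	public ChangePoller(Supplier<String> source, Consumer<String> handler) {
		this.source = source;
		this.handler = handler;
	}

	// One polling pass: drain everything currently visible in the queue.
	public int pollOnce() {
		int handled = 0;
		for (String msg = source.get(); msg != null; msg = source.get()) {
			handler.accept(msg);
			handled++;
		}
		return handled;
	}

	public static void main(String[] args) {
		// Fake queue standing in for SQS: record type, ID, timestamp per message.
		Queue<String> fake = new ArrayDeque<>(Arrays.asList(
			"Account:001:2010-06-01T12:00:00Z",
			"Contact:003:2010-06-01T12:00:05Z"));
		ChangePoller poller = new ChangePoller(fake::poll, msg -> System.out.println("sync " + msg));
		System.out.println(poller.pollOnce() + " messages handled");
	}
}
```

In production a pass like this would run from a ScheduledExecutorService every few seconds, which costs SQS requests rather than Salesforce API calls.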

How do you sign a message in Apex that conforms to the SQS specs? Apex does have some good built-in libraries, including a Crypto class whose documentation even has an AWS example (though for a different service with a much simpler authentication scheme). Here is the solution I came up with:

<pre>public class AmazonSqsSender {

	//SQS wants an ISO 8601 timestamp in GMT
	private String getCurrentDate() {
		return Datetime.now().formatGmt('yyyy-MM-dd\'T\'HH:mm:ss\'Z\'');
	}

	public void sendMessage(String message) {
		//AmazonAws__c is a custom setting object that stores our keys, an Amazon Host, and a queue name
		//You can just put your keys, host and queue below as strings
		AmazonAws__c aws = AmazonAws__c.getOrgDefaults();

		String accessKey = aws.accessKey__c;
		String secretKey = aws.secretKey__c;
		String host = aws.host__c;
		String queue = aws.queue__c;

		//The query parameters for a SendMessage call
		Map<String,String> params = new Map<String,String>();
		params.put('AWSAccessKeyId', encode(accessKey));
		params.put('Action', 'SendMessage');
		params.put('MessageBody', encode(message));
		params.put('Timestamp', encode(getCurrentDate()));
		params.put('SignatureMethod', 'HmacSHA1');
		params.put('SignatureVersion', '2');
		params.put('Version', '2009-02-01');

		//The string to sign has to be sorted by keys
		List<String> sortedKeys = new List<String>(params.keySet());
		sortedKeys.sort();

		String toSign = 'GET\n' + host + '\n' + queue + '\n';
		Integer p = 0;
		for (String key : sortedKeys) {
			String value = params.get(key);
			if (p > 0) {
				toSign += '&';
			}
			p++;
			toSign += key + '=' + value;
		}
		params.put('Signature', getMac(toSign, secretKey));

		String url = 'https://' + host + queue + '?';
		p = 0;
		for (String key : params.keySet()) {
			if (p > 0) {
				url += '&';
			}
			p++;
			url += key + '=' + params.get(key);
		}

		HttpRequest req = new HttpRequest();
		req.setEndpoint(url);
		req.setMethod('GET');
		Http http = new Http();
		try {
			//System.debug('Signed string: ' + toSign);
			//System.debug('Url: ' + url);
			HttpResponse res = http.send(req);
			//System.debug('Status: ' + res.getStatus());
			//System.debug('Code  : ' + res.getStatusCode());
			//System.debug('Body  : ' + res.getBody());
		} catch (System.CalloutException e) {
			System.debug('ERROR: ' + e);
		}
	}

	//Amazon wants + and * to be escaped, but not ~
	private String encode(String message) {
		return EncodingUtil.urlEncode(message, 'UTF-8').replace('+', '%20').replace('*', '%2A').replace('%7E', '~');
	}

	private String getMac(String requestString, String secretKey) {
		String algorithmName = 'hmacSHA1';
		Blob input = Blob.valueOf(requestString);
		Blob key = Blob.valueOf(secretKey);
		Blob signing = Crypto.generateMac(algorithmName, input, key);
		return EncodingUtil.urlEncode(EncodingUtil.base64Encode(signing), 'UTF-8');
	}

	public static void sendTest() {
		AmazonSqsSender t = new AmazonSqsSender();
		t.sendMessage('Hello from Salesforce ' + Math.random());
	}
}</pre>
Using the System Log, it is possible to call AmazonSqsSender.sendTest() to send a random message. Some Java code running on my workstation confirmed that the messages were arriving.

For the time being, we are going to poll Salesforce directly to keep the overall complexity down, but at least we know that Amazon SQS is an option if we need it.



2 Responses to “Salesforce and Amazon SQS”

  1. Salesforce and SQS « Says:

     […] Posted at my development team’s blog. Filed under: Amazon AWS, Salesforce […]

  2. Paweł Says:

    Thanks a ton for this snippet. Saved me some time.
