
Implementing Modular Web Design: Part 2 – How?

March 29, 2011

Modular Web Design by Nathan Curtis and Web Anatomy: Interaction Design Frameworks that Work by Robert Hoekman Jr. and Jared Spool are great resources for UX developers. They focus on the theory, philosophy and process of designing modularly. Unfortunately, these books don’t go into any detail about how to actually implement, in code, a modular web design system in a web app. Some excerpts in Modular Web Design hint at what other companies have done, but there are no concrete steps to guide the process of implementation.

In this post I hope to outline a global strategy that is programming-language agnostic, easy for the end-user (in this case other developers) to implement, and hopefully elegant in its abstract design.

As a UX Engineer I approach code architecture the same way I approach a wireframe design. I start with what I want the end use case to look like and work my way back through the details. In this case my end users are my fellow developers and the goal for them is a super-easy implementation of reusable components that borders on the magical. Think jQuery’s “write less. do more.” motto on steroids.

I want developers to be able to write one line of code anywhere in the view tier of the web app and have that magically transformed into a fully functional component complete with CSS and JS wirings when the page is actually rendered by the server.

The end goal is one-line code snippets that transform into HTML, CSS, and JS at runtime.

To achieve this, you need something in the request cycle that looks for these one-liners and triggers the component engine that renders the HTML, CSS, and JS. Ideally this is some kind of string parser connected to your view composition engine, preferably pretty late in the composition cycle, so the components are among the last things rendered by the server before the user sees the page.
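To make this concrete, here is a minimal sketch in Java of what that late-stage parser could look like. The {{component:...}} token syntax and the ComponentEngine interface are hypothetical placeholders, not a fixed design; the real snippet syntax is the subject of a later post.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of the late-stage output parser. The token syntax and
// the ComponentEngine interface are placeholders, not a fixed design.
public class ComponentFilter {

	// Matches tokens like {{component:tabs}} in the rendered view
	private static final Pattern SNIPPET =
		Pattern.compile("\\{\\{component:([\\w-]+)\\}\\}");

	private final ComponentEngine engine;

	public ComponentFilter(ComponentEngine engine) {
		this.engine = engine;
	}

	// Runs as one of the last steps before the response is written
	public String filter(String renderedView) {
		Matcher m = SNIPPET.matcher(renderedView);
		StringBuffer out = new StringBuffer();
		while (m.find()) {
			// Swap the snippet for the component's HTML in place
			m.appendReplacement(out,
				Matcher.quoteReplacement(engine.renderHtml(m.group(1))));
		}
		m.appendTail(out);
		return out.toString();
	}
}

interface ComponentEngine {
	String renderHtml(String componentName);
}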

The high-level request flow looks something like this: view composed with a snippet in place → output parser detects the snippet → component engine generates the markup and asset paths → parser splices the results into the response.

Once the ability to detect snippets and return components in their place has been enabled, you can begin to break down the input and output processes further.

At some point in the input process, the snippet that the output parser is scanning for will need to be broken down into arguments that can be passed to the component engine. The syntax of the snippet will be discussed in a later post, but for now let's assume the snippet is transformed into something the component engine can understand. We may also want to add rules at the input stage to check for things like component dependencies, or to ensure that there aren't too many of one type of component on the page (more than one site-wide navigation component is probably not a good idea).
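As a rough illustration, the input stage might look something like the following in Java. The ComponentRequest shape and the siteNav rule are assumptions made for the example:

import java.util.HashMap;
import java.util.Map;

// Sketch of the input stage: turn a detected snippet into arguments the
// component engine understands, enforcing page-level rules first.
public class SnippetInput {

	// How many of each component type this page has requested so far
	private final Map<String, Integer> perPageCounts = new HashMap<String, Integer>();

	public ComponentRequest toRequest(String name, Map<String, String> args) {
		Integer count = perPageCounts.get(name);
		count = (count == null) ? 1 : count + 1;
		perPageCounts.put(name, count);

		// Example rule: at most one site-wide navigation component per page
		if (name.equals("siteNav") && count > 1) {
			throw new IllegalStateException("Only one siteNav component per page");
		}
		return new ComponentRequest(name, args);
	}
}

class ComponentRequest {
	final String name;
	final Map<String, String> args;

	ComponentRequest(String name, Map<String, String> args) {
		this.name = name;
		this.args = args;
	}
}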

The component engine receives the appropriate arguments, works its magic, and returns HTML and paths to static assets. The HTML will need to be inserted in the same location the snippet occupied. The static assets will require some more advanced routing.

Best practice is to reference CSS files in the head and any static JS files immediately before the closing body tag, so something at the output stage will need to manage the creation and injection of the necessary <link> and <script> tags.
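Here is one way that output-stage manager could be sketched in Java. The string splicing stands in for whatever hooks your view engine actually provides, and backing the collections with sets gives you the duplicate prevention mentioned below for free:

import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of the output stage: collect each component's asset paths,
// then inject <link> tags into the head and <script> tags just before
// the closing body tag.
public class AssetInjector {

	// Sets preserve insertion order and silently drop duplicate paths
	private final Set<String> cssPaths = new LinkedHashSet<String>();
	private final Set<String> jsPaths = new LinkedHashSet<String>();

	public void addCss(String path) { cssPaths.add(path); }
	public void addJs(String path) { jsPaths.add(path); }

	public String inject(String page) {
		StringBuilder links = new StringBuilder();
		for (String css : cssPaths) {
			links.append("<link rel=\"stylesheet\" href=\"").append(css).append("\"/>\n");
		}
		StringBuilder scripts = new StringBuilder();
		for (String js : jsPaths) {
			scripts.append("<script src=\"").append(js).append("\"></script>\n");
		}
		return page.replace("</head>", links + "</head>")
		           .replace("</body>", scripts + "</body>");
	}
}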

That’s pretty much it for the high-level overview. Obviously there’s more complexity that can be added, like preventing the addition of duplicate JS and CSS files or combining those static assets into one file to lower the number of HTTP requests, but that’s my vision for a basic implementation. A one-line snippet becomes HTML, CSS, and JS all wired together. It’s modular, reusable, and wicked fast for implementation.

In my next post I’ll discuss the syntax of the invocation snippets. Specifically, strategies to prevent naming collisions, how to efficiently pass configuration parameters, and more. What are your thoughts on this high-level implementation? Would you do something different? Have you implemented something similar? If so, post a comment below.

Salesforce and Amazon SQS

March 8, 2011

A couple of us are working on a new project that involves Salesforce CRM. We recently came upon a few technical challenges related to our need to keep a subset of the data stored in the cloud in sync with our internal application servers. These include:

  • A limited number of API calls we can make per 24-hour period. Salesforce charges a per-user license fee, and each license grants a certain number of API calls. If our project scales as we expect it to, we could easily exceed that allotment.
  • Keeping development data in sync with our internal development server and test data with the test server.

We came up with two initial solutions:

  1. Have our internal systems poll the cloud for changes.
  2. Have the cloud send a message to our systems informing us of a change.

The first approach requires a balancing act: how up to date the information needs to be on our side versus how many API calls we can afford. If we poll every second, we would consume 86,400 calls per day (60 × 60 × 24) – more than we will probably have allotted when we launch. We also can’t consume 100% of our API calls on polls: once we detect that something needs to sync, we need additional calls to download the changed objects, and we periodically need to send data to the cloud as well.

The second approach seems to be the better one, as outbound messages don’t count against the daily API limits. Also, we anticipate the synced object types would only ever incur a few hundred changes per day, far fewer than the thousands of polling calls we would have to make. The problem then becomes how to implement the sync in a way that works in our production ‘org’, our ‘sandbox’, and the ‘developer accounts’ that we developers are using. The way our web stack is structured, however, third parties can only reach our production web environment. We could come up with our own way to queue messages in production intended for the other levels, but we would need to be even more concerned about security, and we would likely be duplicating something that the marketplace already provides.

It turns out someone does: Amazon.

I’ve been familiar with Amazon’s cloud computing offerings for some time now, but have never been able to use them with previous employers. Amazon has a service known as SQS, or Simple Queue Service, that “offers a reliable, highly scalable, hosted queue for storing messages as they travel between computers.”

With SQS you can:

  • Send up to 64KB of text per message
  • Persist messages for up to 14 days
  • Create unlimited queues (for instance, one queue for each of our environments)
  • And a lot more

Furthermore, SQS is cheap: $0.01 per 10,000 requests (send and receive are considered separate), and about $0.10 per GB of transfer. That’s far cheaper than buying additional Salesforce licenses.

Salesforce has its own language known as Apex, which runs on their servers and has a syntax very similar to Java’s. SQS messages are fairly simple, with Query- and SOAP-based APIs available. The one complexity is signing a message: SQS messages include an HMAC signature, generated with a private key you establish with Amazon, that prevents messages from being forged.

The SQS implementation is quite simple. An Apex trigger exists on each object that we need to sync. That trigger enqueues a message containing the record type, the ID of the changed record, and a timestamp. This message goes to an SQS queue that corresponds to the environment (dev, test, prod, etc.). A scheduled task on our end polls SQS every few seconds for changes.

How do you sign a message in Apex that conforms to the SQS specs? Apex does have some good built-in libraries, including a Crypto class that even has an AWS example in its documentation (though for a different service using a much simpler authentication scheme). Here is the solution I came up with:

public class AmazonSqsSender
{

	private String getCurrentDate() {
		return DateTime.now().formatGmt('yyyy-MM-dd\'T\'HH:mm:ss.SSS\'Z\'');
	}

	public void sendMessage(String message) {
		//AmazonAws__c is a custom setting object that stores our keys, an Amazon Host, and a queue name
		//You can just put your keys, host and queue below as strings
		AmazonAws__c aws = AmazonAws__c.getOrgDefaults();

		String accessKey = aws.accessKey__c;
		String secretKey = aws.secretKey__c;
		String host = aws.host__c;
		String queue = aws.queue__c;

		Map<String,String> params = new Map<String,String>();

		params.put('AWSAccessKeyId',encode(accessKey));
		params.put('Action','SendMessage');
		params.put('MessageBody',encode(message));
		params.put('Timestamp',encode(getCurrentDate()));
		params.put('SignatureMethod','HmacSHA1');
		params.put('SignatureVersion','2');
		params.put('Version','2009-02-01');

		//The string to sign has to be sorted by keys
		List<String> sortedKeys = new List<String>();
		sortedKeys.addAll(params.keySet());
		sortedKeys.sort();

		String toSign = 'GET\n' + host +'\n'+queue+'\n';
		Integer p = 0;
		for (String key : sortedKeys) {
			String value = params.get(key);
			if (p > 0) {
				toSign += '&';
			}
			p++;
			toSign += key+'='+value;
		}
		params.put('Signature',getMac(toSign,secretKey));

		String url = 'https://'+ host+queue+'?';
		p = 0;
		for (String key : params.keySet()) {
			if (p > 0) {
				url += '&';
			}
			p++;
			url += key+'='+params.get(key);
		}

		HttpRequest req = new HttpRequest();
		req.setEndPoint(url);
		req.setMethod('GET');
		Http http = new Http();
		try {
			//System.debug('Signed string: ' + toSign);
			//System.debug('Url: ' + url);
			HttpResponse res = http.send(req);
			//System.debug('Status: ' + res.getStatus());
			//System.debug('Code  : ' + res.getStatusCode());
			//System.debug('Body  : ' + res.getBody());
		}
		catch (System.CalloutException e) {
			System.debug('ERROR: ' + e);
		}

	}
	//Amazon wants + and * to be escaped, but not ~
	private String encode(String message){
		return EncodingUtil.urlEncode(message,'UTF-8').replace('+', '%20').replace('*', '%2A').replace('%7E','~');
	}

	private String getMac(String RequestString, String secretkey) {
		String algorithmName = 'hmacSHA1';
		Blob input = Blob.valueOf(RequestString);
		Blob key = Blob.valueOf(secretkey);
		Blob signing = Crypto.generateMac(algorithmName, input, key);
		return EncodingUtil.urlEncode(EncodingUtil.base64Encode(signing), 'UTF-8');
	}

	public static void sendTest() {
		AmazonSqsSender t = new AmazonSqsSender();
		t.sendMessage('Hello from Salesforce ' + Math.random());
	}
}

Using the System Log, you can call AmazonSqsSender.sendTest() to send a random message. Java code running on my workstation confirmed the messages were coming through.
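For reference, that workstation-side consumer can be sketched with the AWS SDK for Java along these lines. The queue URL and credentials are placeholders, and the five-second sleep mirrors the scheduled task described above:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

// Sketch of a polling consumer; the queue URL and keys are placeholders
public class SqsConsumer {
	public static void main(String[] args) throws InterruptedException {
		AmazonSQSClient sqs = new AmazonSQSClient(
			new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
		String queueUrl = "https://queue.amazonaws.com/123456789012/dev-sync";

		while (true) {
			for (Message m : sqs.receiveMessage(new ReceiveMessageRequest(queueUrl)).getMessages()) {
				System.out.println("Received: " + m.getBody());
				// Delete after processing so the message isn't redelivered
				sqs.deleteMessage(new DeleteMessageRequest(queueUrl, m.getReceiptHandle()));
			}
			Thread.sleep(5000); // poll every few seconds
		}
	}
}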

For the time being, we are going to poll Salesforce directly to keep the overall complexity down, but at least we know that Amazon SQS is an option if we need it.