Windows 7 Slates: Touch-First UIs

When it comes to building apps with “Touch-First” user interfaces for Windows 7 Slates, there are a few principles you need to follow.

Instead of talking about gestures, swiping, pinching or receiving multi-touch Windows messages, I’m going to stick to the basics in this article.  I’m not an artist or UX guru, but I have been designing user interfaces for mobile and embedded devices with small screens since the late ’90s.

Runtime

If you remember the last time I talked about Windows 7 Slate development, I mentioned that plain-old .NET WinForms are actually a great choice.  Since every copy of Windows 7 includes .NET 3.5 SP1 as part of the image, you might consider targeting that runtime or C++ for friction-free deployment and the best performance, provided they give you the functionality you’re looking for.  Performance-tuned, XAML-based apps are also an option as long as you’re not targeting Tablets with low-end CPUs and poor-performing, integrated graphics.  The same advice goes for HTML5 as long as you’re running IE9.

Immersive

Creating an immersive app that takes over the entire screen is step one.  This means that when you design your WinForms in Visual Studio, set them to be Maximized and get rid of the Control Box, Minimize/Maximize buttons, and the Border.
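
Here’s a minimal sketch of those settings in code, assuming a WinForms project with a form named MainForm (the name is illustrative); you can also flip the same properties in the Visual Studio Properties window:

using System.Windows.Forms;

public partial class MainForm : Form
{
    public MainForm()
    {
        InitializeComponent();

        // Take over the entire screen for an immersive experience
        this.FormBorderStyle = FormBorderStyle.None;    // no Border
        this.WindowState = FormWindowState.Maximized;   // fill the screen
        this.ControlBox = false;                        // no Control Box
        this.MinimizeBox = false;                       // no Minimize button
        this.MaximizeBox = false;                       // no Maximize button
    }
}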

In the Splash Screen below, you can see that it takes up the entire screen of the Tablet:

UI Element Size

Since some people have been known to have fingers as large as 80 pixels wide, you can no longer get by with the default sizes of UI elements when you drag them from the toolbox.  You need to increase the size of UI elements to 40+ pixels wide/high, as appropriate, to give users a large hit target for their fingers.
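
Here’s a hedged sketch of what that means in code; the control names are illustrative, and you can just as easily set Size in the designer:

using System.Drawing;

// Give fingers large hit targets instead of the toolbox defaults
loginButton.Size = new Size(200, 60);
userComboBox.Size = new Size(400, 48);
passwordTextBox.Size = new Size(400, 48);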

In the Login Screen below, you can see a giant power button icon in the upper-right side as well as a large User selection combo box, Password text box and Login button:

UI Element Spacing

Besides making UI elements larger, you also need to place them farther apart.  I know this flies in the face of your desire to cram as many things onto a screen as possible.  Each screen should provide just a single function or idea, so keep it simple, uncluttered, and elegant.  The term “fat finger” exists for a reason.  To prevent accidentally tapping the wrong button, space all UI elements at least 20 pixels apart.
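
A FlowLayoutPanel with a generous Margin on each control is one low-effort way to hold that spacing rule; this sketch assumes illustrative control names:

using System.Drawing;
using System.Windows.Forms;

FlowLayoutPanel panel = new FlowLayoutPanel();
panel.Dock = DockStyle.Fill;
panel.Padding = new Padding(20);

Button reserveButton = new Button();
reserveButton.Size = new Size(200, 60);
reserveButton.Margin = new Padding(20);   // keeps 20+ pixels from any neighbor

panel.Controls.Add(reserveButton);
this.Controls.Add(panel);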

In the Reservations Screen below, you can see a date picker, combo boxes, a button, and check boxes that are easy to touch and give each other breathing room:

Go Big!

Get accustomed to making everything bigger because the presentation paradigm for a Tablet is fundamentally different.  You don’t have the precision of a mouse to click on small hit targets.  You should also pump up the size of things you don’t touch, like the font of on-screen text, so it’s clearly visible to the user from any angle.  Large, beautiful typography and iconography are a good thing.
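
Pumping up typography is a one-liner per control; here’s a sketch with illustrative names and sizes:

using System.Drawing;

// Big, readable type for things you don't touch
screenTitleLabel.Font = new Font("Segoe UI", 36F, FontStyle.Bold);
scheduleGrid.Font = new Font("Segoe UI", 16F);
scheduleGrid.RowTemplate.Height = 60;   // big grid cells are easier to read and tap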

In the Schedule Screen below, you can see a large grid with big cells and large-font text.  The screen title is big, and so is the arrow icon used to close the screen and return to the previous one.

One More Thing

In the touch-first world of mobile devices, you can never assume the use of a keyboard.  That’s why it’s important not to throw lots of empty text boxes at users to fill in.  It’s cumbersome and slow, and a user may choose not to use your app.  Give users finger-friendly choices via UI elements like radio buttons, combo boxes, and check boxes where possible.
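
As a sketch (names and values are illustrative), a locked-down ComboBox gives users a finger-friendly choice where a TextBox would have forced typing:

using System.Windows.Forms;

// DropDownList means users pick instead of type
partySizeComboBox.DropDownStyle = ComboBoxStyle.DropDownList;
partySizeComboBox.Items.AddRange(new object[] { 1, 2, 3, 4, 5, 6, 7, 8 });
partySizeComboBox.SelectedIndex = 0;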

Now go start building those Touch-First UIs for today’s Windows 7 Slates!

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Consumerization of IT Collides with MEAP: Windows > Cloud

In my Consumerization of IT Collides with MEAP article last week, I described how to connect a Windows 7 device to Microsoft’s On-Premises servers.

Whether you’re talking about a Windows 7 tablet or laptop, I showed that you can follow the Gartner MEAP Critical Capabilities to integrate with our stack in a consistent manner.  Remember, the ability to support multiple mobile apps across multiple mobile platforms, using the same software stack, is a key tenet of MEAP.  It’s all about avoiding point solutions.

If you need a refresher on the Gartner MEAP Critical Capabilities, check out: http://robtiffany.com/meap/consumerization-of-it-collides-with-meap-windows-on-premises

In this week’s scenario, I’ll use the picture below to illustrate how Mobile versions of Windows 7 in the form of slates, laptops, and tablets utilize some or all of Gartner’s Critical Capabilities to connect to Microsoft’s Cloud infrastructure:

[image: Windows 7 clients connecting to Microsoft’s Cloud infrastructure]

As you can see from the picture above:

  1. For the Management Tools Critical Capability, Windows 7 uses Windows Intune for Cloud-based device management and software distribution.
  2. For both the Client and Server Integrated Development Environment (IDE) and Multichannel Tool Critical Capability, Windows 7 uses Visual Studio. The Windows Azure SDK plugs into Visual Studio and provides developers with everything they need to build Cloud applications.  It even includes a Cloud emulator to simulate all aspects of Windows Azure on their development computer.
  3. For the cross-platform Application Client Runtime Critical Capability, Windows 7 uses .NET (Silverlight/WPF/WinForms) for thick clients. For thin clients, it uses Internet Explorer 9 to provide HTML5 + CSS3 + ECMAScript5 capabilities. Offline storage is important to keep potentially disconnected mobile clients working and this is facilitated by SQL Server Compact + Isolated Storage for thick clients and Web Storage for thin clients.
  4. For the Security Critical Capability, Windows 7 provides security for data at rest via BitLocker, data in transit via SSL, & Authorization/Authentication via the Windows Azure AppFabric Access Control Service (ACS).
  5. For the Enterprise Application Integration Tools Critical Capability, Windows 7 can reach out to servers directly via Web Services or indirectly through the Cloud via the Windows Azure AppFabric Service Bus to connect to other enterprise packages.
  6. The Multichannel Server Critical Capability to support any open protocol is handled automatically by Windows Azure. Cross-Platform wire protocols riding on top of HTTP are exposed by Windows Communication Foundation (WCF) and include SOAP, REST, and AtomPub. Cross-Platform data serialization is also provided by WCF, including XML, JSON, and OData. Cross-Platform data synchronization is provided by the Sync Framework. These Multichannel capabilities support thick clients making web service calls as well as thin web clients making Ajax calls. Distributed caching to dramatically boost the performance of any client is provided by Windows Azure AppFabric Caching.
  7. As you might imagine, the Hosting Critical Capability is knocked out of the park with Windows Azure.  Beyond providing the most complete solution of any Cloud provider, Windows Azure Connect provides an IPSec-protected connection with your On-Premises network and SQL Azure Data Sync can be used to move data between SQL Server and SQL Azure.  This gives you the Hybrid Cloud solution you might be looking for.
  8. For the Packaged Mobile Apps or Components Critical Capability, Windows 7 runs cross-platform mobile apps including Office/Lync/IE/Outlook/Bing.

As you can see from this and last week’s article, Windows 7 meets all of Gartner’s Critical Capabilities whether it’s connecting to Microsoft’s On-Premises or Cloud servers and infrastructure.  The great takeaway from the picture above is that Windows 7 only needs to know how to integrate its apps with WCF in exactly the same way as it does in the On-Premises scenario.  Windows developers can focus on Windows without having to concern themselves with the various options provided by Windows Azure.  Cloud developers just need to provide a WCF interface to the mobile clients.

When an employee walks in the door with a wireless Windows 7 Slate device, you can rest assured that you can make them productive via Windows Azure without sacrificing any of the Gartner Critical Capabilities.

Next week, I’ll cover how Windows Phone connects to an On-Premises Microsoft infrastructure.

Rob

Consumerization of IT Collides with MEAP: Windows > On-Premises

The Consumerization of IT is an unstoppable force where employees are bringing every kind of mobile device imaginable into the office expecting to be productive.

Over the course of the next 20 articles, I’ll describe how IT professionals can use the principles of Gartner MEAP to connect any type of mobile device to Microsoft’s On-Premises and Cloud servers.

Gartner specifies the following Critical Capabilities that must be addressed in order for a given product or stack of products to be considered a Mobile Enterprise Application Platform (MEAP):

  • Integrated Development Environment

    A dedicated environment or plug-in for composing back-end server and client-side logic, including UI and UX.

  • Application Client Runtime

    The client runtime logic for the application, either in native format or packaged within a container.

  • Enterprise Application Integration Tools

    Tools for integrating the mobile server with back-end systems, both bespoke and purchased apps or application suites.

  • Packaged Mobile Apps or Components

    Self-standing mobile applications or components.

  • Multichannel Tools or Servers

    Tools that allow for “write once, run anywhere” thick or rich mobile clients, cross compilers or environments or platforms that allow business logic to be supported across thin, thick, and rich mobile architectures.

  • Management Tools

    Tools for provisioning, supporting, debugging, updating or decommissioning mobile applications.

  • Security 

    Tools for ensuring the security and privacy of enterprise data on board the device, while transiting through wired or wireless networks, through peripherals, and with backend systems and integration packages.

  • Hosting

    The ability to host all development, provisioning, management functions, and optionally corporate data.

In this first scenario, I’ll use the picture below to illustrate how Mobile versions of Windows 7 in the form of slates, laptops, and tablets utilize some or all of Gartner’s Critical Capabilities to connect to an On-Premises Microsoft infrastructure:

[image: Windows 7 clients connecting to an On-Premises Microsoft infrastructure]

As you can see from the picture above:

  1. For the Management Tools Critical Capability, Windows 7 uses System Center Configuration Manager (SCCM) 2007 for on-premises device management and software distribution.
  2. For both the Client and Server Integrated Development Environment (IDE) and Multichannel Tool Critical Capability, Windows 7 uses Visual Studio.
  3. For the cross-platform Application Client Runtime Critical Capability, Windows 7 uses .NET (Silverlight/WPF/WinForms) for thick clients.  For thin clients, it uses Internet Explorer 9 to provide HTML5 + CSS3 + ECMAScript5 capabilities.  Offline storage is important to keep potentially disconnected mobile clients working and this is facilitated by SQL Server Compact + Isolated Storage for thick clients and Web Storage for thin clients.
  4. For the Security Critical Capability, Windows 7 provides security for data at rest via BitLocker, data in transit via SSL+VPN, data in the database via RSA/AES, & Authorization/Authentication via Active Directory.
  5. For the Enterprise Application Integration Tools Critical Capability, Windows 7 can reach out to servers directly via Web Services or indirectly via SQL Server or BizTalk using SSIS/Adapters/Sync to connect to other enterprise packages.
  6. The Multichannel Server Critical Capability to support any open protocol directly, via Reverse Proxy, or VPN is facilitated by ISA/TMG/UAG/IIS.  Cross-Platform wire protocols riding on top of HTTP are exposed by Windows Communication Foundation (WCF) and include SOAP, REST, and AtomPub. Cross-Platform data serialization is also provided by WCF, including XML, JSON, and OData. Cross-Platform data synchronization is provided by the Sync Framework.  These Multichannel capabilities support thick clients making web service calls as well as thin web clients making Ajax calls.  Distributed caching to dramatically boost the performance of any client is provided by Windows Server AppFabric Caching.
  7. While the Hosting Critical Capability may not be as relevant in an on-premises scenario, Windows Azure Connect provides an IPSec-protected connection to the Cloud, and SQL Azure Data Sync can be used to move data between SQL Server and SQL Azure.
  8. For the Packaged Mobile Apps or Components Critical Capability, Windows 7 runs cross-platform mobile apps including Office/Lync/IE/Outlook/Bing.

It should come as no surprise that Windows 7 has a compelling and complete MEAP story to address the issues surrounding the Consumerization of IT (CoIT) when an employee walks in the door with a wireless Windows 7 Slate device.

Next week, I’ll cover how Windows 7 connects to the Cloud.

Rob

Windows 7 Slates: Rapid App Deployment

[image: Windows 7 Tablet]

The tablet craze is really in full swing and new Windows 7 devices are being released almost every week.

Consumers and business users alike are snapping up these tablets, and they want to be productive with apps right away.  Yes, they’re treating these wireless devices just like they’re Smartphones or iPads.  As well they should!  This means that aside from Microsoft Office, they’re expecting to quickly download apps from the Internet or various marketplaces without a lot of fuss.  That means no DVDs.

One way to ensure fast, seamless downloads of your apps is to not take dependencies on large runtimes, installation media, or plugins that may not already be installed on Windows 7 by default.  Silverlight isn’t a big deal because the plugin is very small and should download and install quickly along with your app.  If you’re delivering an app that requires all the power of .NET though, you might consider using version 3.5 SP1 since it’s part of the Windows 7 OS image.

Don’t flame me for not recommending .NET 4.0; I use it all the time.  That being said, if the requirements of your WinForms app are met by the features and base class libraries found in .NET 3.5 SP1, then don’t unnecessarily take a dependency on 4.0.  Your app will download, install, and run without any extra steps.  This is not unlike how most Windows Mobile enterprise customers over the last decade targeted the version of the .NET Compact Framework found in ROM instead of taking advantage of the extra features found in version 3.5.  It frustrated me for a while, but I finally got it.
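
If it helps, here’s a minimal app.config sketch for a WinForms project that stays on the in-box runtime.  Since .NET 3.5 SP1 executes on the 2.0 CLR, declaring the 2.0 runtime keeps your app from asking Windows 7 for anything that isn’t already in the image:

<?xml version="1.0"?>
<configuration>
  <startup>
    <!-- .NET 3.5 SP1 runs on the 2.0 CLR that ships in the Windows 7 image -->
    <supportedRuntime version="v2.0.50727"/>
  </startup>
</configuration>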

The Consumerization of IT must drive new behaviors by corporations.  Speed and simplicity of app deployment to employees is one of those behaviors.

Keep Coding,

Rob

The ThinkPad x120e is a Pocket Rocket!

[image: ThinkPad x120e]

I just picked up a Lenovo ThinkPad x120e for around $500 last week and have been using it as my only computer for the last 7 days.

It features the new AMD Fusion E-350 dual-core processor with integrated Radeon HD graphics and Windows 7 Pro x64.  This ultraportable has 4 GB of RAM, an 11.6″ screen, a 7200 RPM hard drive, a built-in webcam, 3 USB 2.0 ports, and a full-sized HDMI port.

[image: ThinkPad x120e]

While it may appear to compete with high-end Netbooks, there’s actually no comparison. The AMD Fusion processor blows the latest Intel Atoms out of the water.  The snappy performance leads you to believe you’ve got a Core i3 under the hood, but the 5+ hours of battery life reminds you that this is something different.  This laptop has the fit and finish you would expect from a $3,000 ThinkPad.  It’s super-sturdy and comes with an unparalleled scalloped/chiclet keyboard that feels like you’re typing on a Selectric typewriter.  This thing is absolutely tiny and only weighs around 3 lbs with the 6-cell battery, so I don’t feel it in my backpack when I’m running through airports.

I’ve been using Word, PowerPoint, Outlook, IE9 (with 8 tabs open), Tweetdeck, Visio, Live Writer, Chrome, and Lync without issue.  In fact, they were all very responsive.  HD videos were great.  Since I also write software, I installed Visual Studio 2010, plus the Windows Phone 7 tools and emulator.  This is where things ran out of gas.  I totally get that Visual Studio is a giant piece of software, but the x120e just wasn’t up to the task when it came to fast compiles and a responsive IDE.  Running Windows Phone 7 apps in the emulator was also a very sluggish experience.  Yes, all the tools worked as expected, but you might lose patience.

To be fair, this tiny laptop was not designed to be a developer workstation but I thought I’d see how far I could push it.  Actually, it might be possible to make the x120e fast enough for giant tools like Visual Studio and Eclipse by bumping up the DDR3 RAM to 8 GB and swapping out the spinning hard disk with a fast SSD.  I’ll probably give it a try.

To sum up, the ThinkPad x120e is a pocket rocket of a business productivity machine, and I highly recommend it.

-Rob

Windows Phone 7 Line of Business App Dev :: Uploading Data back to Azure

[image: Submarine]

Looking back over the last 6 months of this series of articles, you’ve created wireless-efficient WCF REST + JSON Web Services in Azure to download data from SQL Azure tables to Windows Phone.

You’ve maintained in-memory collections of objects in your own local NoSQL object cache.  You’ve used LINQ to query those collections and bind results to various Silverlight UI elements.  You’ve even serialized those collections to Isolated Storage using memory-efficient JSON.  So what’s left to do?
Oh yeah, I guess you might want to know how to upload an object full of data back to a WCF Web Service in Azure.  In order to keep this article simple and to-the-point, I’m going to work with a basic Submarine object and show you how to fill it with data and upload it from a Windows Phone or Slate to a WCF REST + JSON Web Service.  Let’s take a look at this object:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.Serialization;

namespace Models
{
    [DataContract()]
    public class Submarine
    {
        [DataMember()]
        public int Id { get; set; }

        [DataMember()]
        public string Name { get; set; }
    }
}
It includes just an integer called Id and a string called Name.  As in previous articles, it’s decorated with a [DataContract()] and two [DataMember()]s to allow .NET serialization to do its thing.  The next thing we need to do is create and populate this Submarine object with data, serialize it as JSON, and send it on its way using WebClient.
Below are the method and its callback that accomplish this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Windows;
using Microsoft.Phone.Controls;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

private void AddSubmarine()
{
    Uri uri = new Uri("http://127.0.0.1:81/SubService.svc/AddSubmarine");
    Models.Submarine submarine = new Models.Submarine() { Id = 3, Name = "Seawolf" };

    // Serialize the Submarine object to a JSON string
    DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(Models.Submarine));
    MemoryStream mem = new MemoryStream();
    ser.WriteObject(mem, submarine);
    string data = Encoding.UTF8.GetString(mem.ToArray(), 0, (int)mem.Length);

    // POST the JSON to the WCF REST service
    WebClient webClient = new WebClient();
    webClient.UploadStringCompleted += new UploadStringCompletedEventHandler(webClient_UploadStringCompleted);
    webClient.Headers["Content-type"] = "application/json";
    webClient.Encoding = Encoding.UTF8;
    webClient.UploadStringAsync(uri, "POST", data);
}

void webClient_UploadStringCompleted(object sender, UploadStringCompletedEventArgs e)
{
    var x = e.Result;
}
As you can see above, I point the URI at a WCF Service called SubService.svc/AddSubmarine.  How RESTful.  Next, I create an instance of the Submarine object, give it an Id of 3 and the Name Seawolf.  I then use the same DataContractJsonSerializer I’ve been using in all the other articles to serialize the Submarine object to a JSON representation.  Using the MemoryStream, I write the JSON to a stream and then artfully turn it into a string.  Last but not least, I instantiate a new WebClient object, create an event handler for a callback, and upload the stringified Submarine object to the WCF Service.
So where did I upload the Submarine object to?
It takes two to Mango, so let’s take a look.  For starters, it goes without saying that every WCF Service starts with an Interface.  This one is called ISubService.cs:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

namespace DataSync
{
    [ServiceContract]
    public interface ISubService
    {
        [OperationContract]
        [WebInvoke(UriTemplate = "/AddSubmarine",
                   BodyStyle = WebMessageBodyStyle.Bare,
                   RequestFormat = WebMessageFormat.Json,
                   ResponseFormat = WebMessageFormat.Json,
                   Method = "POST")]
        bool AddSubmarine(Models.Submarine sub);
    }
}
Unlike previous articles where I had you download data with WebGet, this time I’m using WebInvoke to denote that a PUT, POST, or DELETE HTTP Verb is being used with our REST service.  The UriTemplate gives you the RESTful /AddSubmarine, and I added the Method = “POST” for good measure.  Keep in mind that you’ll need the exact same Submarine class on the server that you had on your Windows Phone to make all this work.
Let’s see what we get when we implement this interface:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;
using System.Configuration;
using System.Xml.Serialization;
using System.IO;

namespace DataSync
{
    public class SubService : ISubService
    {
        public SubService()
        {
        }

        public bool AddSubmarine(Models.Submarine submarine)
        {
            try
            {
                if (submarine != null)
                {
                    // Do something with your deserialized .NET Submarine object
                    // ... = submarine.Id
                    // ... = submarine.Name
                    return true;
                }
                else
                {
                    return false;
                }
            }
            catch
            {
                return false;
            }
        }
    }
}
Here we end up with SubService.svc and the simple AddSubmarine method where you pass in a Submarine object as a parameter.  What you do with this object, I’ll leave to you.  Some might be tempted to INSERT it into SQL Azure.  I’d prefer that you drop it into an Azure Queue and have a Worker Role do the INSERTing later so you can stay loosely coupled; there’s a sketch of that after the config below.  Just in case you need a refresher on a REST-based Web.config file, here’s one below:
<?xml version="1.0"?>
<configuration>
  <!-- To collect diagnostic traces, uncomment the section below.
       To persist the traces to storage, update the DiagnosticsConnectionString setting with your storage credentials.
       To avoid performance degradation, remember to disable tracing on production deployments.
  <system.diagnostics>
    <sharedListeners>
      <add name="AzureLocalStorage" type="DataSync.AzureLocalStorageTraceListener, DataSync"/>
    </sharedListeners>
    <sources>
      <source name="System.ServiceModel" switchValue="Verbose, ActivityTracing">
        <listeners>
          <add name="AzureLocalStorage"/>
        </listeners>
      </source>
      <source name="System.ServiceModel.MessageLogging" switchValue="Verbose">
        <listeners>
          <add name="AzureLocalStorage"/>
        </listeners>
      </source>
    </sources>
  </system.diagnostics> -->
  <system.diagnostics>
    <trace>
      <listeners>
        <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             name="AzureDiagnostics">
          <filter type="" />
        </add>
      </listeners>
    </trace>
  </system.diagnostics>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
  <!-- Add Connection Strings -->
  <connectionStrings>
  </connectionStrings>
  <system.serviceModel>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
          <serviceMetadata httpGetEnabled="true"/>
          <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="false"/>
        </behavior>
      </serviceBehaviors>
      <!-- Add REST Endpoint Behavior -->
      <endpointBehaviors>
        <behavior name="REST">
          <webHttp />
        </behavior>
      </endpointBehaviors>
    </behaviors>
    <!-- Add Service with webHttpBinding -->
    <services>
      <service name="DataSync.SubService">
        <endpoint address="" behaviorConfiguration="REST" binding="webHttpBinding"
                  contract="DataSync.ISubService" />
      </service>
    </services>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
    <!-- <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> -->
  </system.serviceModel>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true"/>
  </system.webServer>
</configuration>
This Web.config gives you the webHttpBinding you’re looking for to build a REST service.  I even left you a spot to add your own database or Azure storage connection strings.
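
Since I mentioned staying loosely coupled, here’s a hedged sketch of what queuing the uploaded Submarine might look like inside AddSubmarine, using the StorageClient library from the Azure SDK.  The “DataConnectionString” setting and the “submarines” queue name are illustrative assumptions:

using System.Configuration;
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public bool AddSubmarine(Models.Submarine submarine)
{
    if (submarine == null)
    {
        return false;
    }
    try
    {
        // Re-serialize the Submarine to JSON for the queue message
        DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(Models.Submarine));
        MemoryStream mem = new MemoryStream();
        ser.WriteObject(mem, submarine);
        string json = Encoding.UTF8.GetString(mem.ToArray(), 0, (int)mem.Length);

        // Drop it in a queue so a Worker Role can do the INSERTing later
        CloudStorageAccount account = CloudStorageAccount.Parse(
            ConfigurationManager.ConnectionStrings["DataConnectionString"].ConnectionString);
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("submarines");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(json));
        return true;
    }
    catch
    {
        return false;
    }
}
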
This article wraps up the Windows Phone 7 Line of Business App Dev series that I’ve been delivering to you since last September.  Who knew I would make fun of OData or have you create your own NoSQL database to run on your phone along the way?  I think I actually wrote the first article in this series from a hotel room in Nantes, France.
But have no fear, this isn’t the end.
In preparation for Tech Ed 2011 North America coming up on May 16th in Atlanta, I’ve been building the next-gen, super-fast, super-scalable Azure architecture designed for mobile devices roaming on wireless data networks.  I’ve spent the last decade building the world’s largest and most scalable mobile infrastructures for Microsoft’s wonderful global customers.  Now it’s time to make the jump from supporting enterprise-level scalability to the much bigger consumer-level scalability.
Yes, I’m talking millions of devices.
No, you won’t have to recreate Facebook’s servers, NoSQL, Memcache, or Hadoop infrastructure to make it happen.  I’m going to show you how to make it happen with Azure, so I’m looking forward to seeing everyone in Atlanta in two weeks.
Keep coding,
Rob

Microsoft by the Numbers

[image: Microsoft]

With Microsoft Windows 7 selling more than 600,000 copies per day, it’s interesting to look at all our numbers and see how they stack up against the competition:

150,000,000
Number of Windows 7 licenses sold, making Windows 7 by far the fastest-growing operating system in history. [source]


7.1 million
Projected iPad sales for 2010. [source]

58 million
Projected netbook sales in 2010. [source]

355 million
Projected PC sales in 2010. [source]


<10
Percentage of US netbooks running Windows in 2008. [source]

96
Percentage of US netbooks running Windows in 2009. [source]


0
Number of paying customers running on Windows Azure in November 2009.

10,000
Number of paying customers running on Windows Azure in June 2010. [source]

700,000
Number of students, teachers and staff using Microsoft’s cloud productivity tools in Kentucky public schools, the largest cloud deployment in the US. [source]


16 million
Total subscribers to largest 25 US daily newspapers. [source]

14 Million
Total number of Netflix subscribers. [source]

23 million
Total number of Xbox Live subscribers. [source]


9,000,000
Number of customer downloads of the Office 2010 beta prior to launch, the largest Microsoft beta program in history. [source]


21.4 million
Number of new Bing search users in one year. [Comscore report – requires subscription]


24%
Linux Server market share in 2005. [source]

33%
Predicted Linux Server market share for 2007 (made in 2005). [source]

21.2%
Actual Linux Server market share, Q4 2009. [source]


8.8 million
Global iPhone sales in Q1 2010. [source]

21.5 million
Nokia smartphone sales in Q1 2010. [source]

55 million
Total smartphone sales globally in Q1 2010. [source]

439 million
Projected global smartphone sales in 2014. [source]


9
Number of years it took Salesforce.com to reach 1 million paid user milestone. [source]

6
Number of years it took Microsoft Dynamics CRM to reach 1 million paid user milestone. [source]

100%
Percent chance that Salesforce.com CEO will mention Microsoft in a speech, panel, interview, or blog post.


173 million
Global Gmail users. [source]

284 million
Global Yahoo! Mail users. [source]

360 million
Global Windows Live Hotmail users. [source]

299 million
Active Windows Live Messenger Accounts worldwide. [Comscore MyMetrix, WW, March 2010 – requires subscription]

1
Rank of Windows Live Messenger globally compared to all other instant messaging services. [Comscore MyMetrix, WW, March 2010 – requires subscription]


$8.2 Billion
Apple Net income for fiscal year ending Sep 2009. [source]

$6.5 Billion
Google Net income for fiscal year ending Dec 2009. [source]

$14.5 Billion
Microsoft Net Income for fiscal year ending June 2009. [source]

$23.0 billion
Total Microsoft revenue, FY2000. [source]

$58.4 billion
Total Microsoft revenue, FY2009. [source]

Good stuff!

-Rob

Mobile Merge Replication Performance and Scalability Cheat Sheet

[image: SQL Server Compact]

If your Mobile Enterprise Application Platform (MEAP) is using SQL Server Merge Replication to provide the mobile middleware and reliable wireless wire protocol for SQL Server Compact (SSCE) running on Windows Mobile and Windows Embedded Handheld devices, plus Windows XP/Vista/7 tablets, laptops, desktops, slates, and netbooks, below is a guide to help you build the fastest, most scalable system possible:

Active Directory

  • Since your clients will be passing in their Domain\username + password credentials when they sync, both IIS and SQL Server will make auth requests of the Domain Controller. Ensure that you have at least a primary and backup Domain Controller, that the NTDS.dit disk drives are big enough to handle the creation of a large number of new AD DS objects (mobile users and groups), and that your servers have enough RAM to cache all those objects in memory.

Database Schema

  • Ensure your schema is sufficiently de-normalized so that you never have to perform more than a 4-way JOIN across tables. This affects server-side JOIN filters as well as SSCE performance.
  • To ensure uniqueness across all nodes participating in the sync infrastructure, use GUIDs for your primary keys so that SQL Server doesn’t have to deal with the overhead of managing Identity ranges. Make sure to mark your GUIDs as ROWGUIDCOL for that table so that Merge won’t try to add an additional Uniqueidentifier column to the table.  Don’t create clustered indexes when using GUIDs as primary keys because they will suffer horrible fragmentation that will rapidly degrade performance.  Just use a normal index.
  • Create clustered indexes for your primary keys when using Identity columns, Datetime, or other natural keys.  Ensure that every column in every table that participates in a WHERE clause is indexed.

Distributor

  • If your network connection is fast and reliable like Wi-Fi or Ethernet, your SSCE client has more than 32 MB of free RAM, and SQL Server isn’t experiencing any deadlocks due to contention with ETL operations or too many concurrent Merge Agents, create a new Merge Agent Profile based on the High Volume Server-to-Server Profile so that SQL Server will perform more work per round-trip and speed up your synchronizations.
  • If you’re using a 2G/3G Wireless Wide Area Network connection, create a Merge Agent Profile based on the Default Profile so that SQL Server will perform less work and use fewer threads per round-trip during synchronization than with the High Volume Server-to-Server Profile.  This reduces server locking contention and makes your synchronizations more likely to succeed.
  • In order to prevent SQL Server from performing Metadata Cleanup every time a Subscriber synchronizes, set the -MetadataRetentionCleanup parameter to 0.
  • As SQL Server has to scale up to handle a higher number of concurrent users in the future, locking contention will increase due to more Merge Agents trying to perform work at the same time.  When this happens, adjust the parameters of the Default Profile so that both -SrcThreads and -DestThreads are equal to 1.

Publication

  • When defining the Articles you’re going to sync, only check the minimum tables and columns needed by the Subscriber to successfully perform its work.
  • For Lookup/Reference tables that aren’t modified by the Subscriber, mark those as Download-only to prevent change-tracking metadata from being sent to the Subscriber.
  • Despite the fact that column-level tracking sends less data over the air, stick with row-level tracking so SQL Server won’t have to do as much work to track the changes.
  • Use the default conflict resolver where the “Server wins” unless you absolutely need a different manner of picking a winner during a conflict.
  • Use Static Filters to reduce the amount of server data going out to all Subscribers.
  • Make limited use of Parameterized Filters which are designed to reduce and further specify the subset of data going out to a particular Subscriber based on a HOST_NAME() which creates data partitions.  This powerful feature slows performance and reduces scalability with each additional filter, so it must be used sparingly.
  • Keep filter queries simple and don’t use IN clauses, sub-selects or any kind of circular logic.
  • Strive to always create “well-partitioned” Articles where all changes that are uploaded/downloaded are mapped to only the single partition ID for best performance and scalability.
    • When using Parameterized Filters, always create non-overlapping data partitions where each row from a filtered table only goes to a single Subscriber instead of more than one which will avoid the use of certain Merge metadata tables.
    • Each Article in this scenario can only be published to a single Publication.
    • A Subscriber cannot insert rows that do not belong to its partition ID.
    • A Subscriber cannot update columns that are involved in filtering.
    • In a join filter hierarchy, a regular article cannot be the parent of a “well-partitioned” article.
    • The join filter in which a well-partitioned article is the child must have the join_unique_key set to a value of 1 which relates to the Unique key check box of the Add Join dialog.  This means there’s a one-to-one or one-to-many relationship with the foreign key.
    • Each “well-partitioned” Article can have only one subset or join filter. The article can have a subset filter and be the parent of a join filter, but cannot have a subset filter and be the child of a join filter.
  • Never extend a filter out to more than 4 joined tables.
  • Do not filter tables that are primarily lookup/reference tables, small tables, and tables with data that does not change.
  • Schedule the Snapshot Agent to run once per day to create an unfiltered schema Snapshot.
  • Set your Subscriptions to expire as soon as possible to keep the amount of change-tracking metadata SQL Server has to manage to an absolute minimum. Normally, set the value to 2 to accommodate 3-day weekends since 24 hours are automatically added to the time to account for multiple time zones. If server-side change tracking isn’t needed and Subscribers are pulling down a new database every day and aren’t uploading data, then set the expiration value to 1.
  • Set Allow parameterized filters equal to True.
  • Set Validate Subscribers equal to HOST_NAME().
  • Set Precompute partitions equal to True to allow SQL Server to optimize synchronization by computing in advance which data rows belong in which partitions.
  • Set Optimize synchronization equal to False if Precompute partitions is equal to True.  Otherwise set it to True to optimize filtered Subscriptions by storing more metadata at the Publisher.
  • Set Limit concurrent processes equal to True.
  • Set Maximum concurrent processes equal to the number of SQL Server processor cores.  If excessive locking contention occurs, reduce the number of concurrent processes until the problem is fixed.
  • Set Replicate schema changes equal to True.
  • Check Automatically define a partition and generate a snapshot if needed when a new Subscriber tries to synchronize. This will reduce Initialization times since SQL Server creates and applies snapshots using the fast BCP utility instead of a series of slower SELECT and INSERT statements.
  • Add data partitions based on unique HOST_NAMEs and schedule the Snapshot Agent to create those filtered Snapshots nightly or on the weekend so they’ll be built using the fast BCP utility and waiting for new Subscribers to download in the morning.
  • Ensure that SQL Server has 1 processor core and 2 GB of RAM for every 100 concurrent Subscribers utilizing bi-directional sync. Add 1 core and 2 GB of RAM to the server for every additional 100 concurrent Subscribers you want to add to the system.  Never add more Subscribers and/or IIS servers without also adding new cores and RAM to the Publisher.
  • Turn off Hyperthreading in the BIOS of the SQL Server as it has been known to degrade SQL Server performance.
  • Do not add your own user-defined triggers to tables on a Published database since Merge places 3 triggers on each table already.
  • Add one or more Filegroups to your database to contain multiple, secondary database files spread out across many physical disks.
  • Limit use of large object types such as text, ntext, image, varchar(max), nvarchar(max) or varbinary(max) as they require a significant memory allocation and will negatively impact performance.
  • Set SQL Server’s minimum and maximum memory usage to within 2 GB of total system memory so it doesn’t have to allocate more memory on-demand.
  • Always use SQL Server 2008 R2 and Windows Server 2008 R2 since they work better together because they take advantage of the next generation networking stack which dramatically increases network throughput. They can also scale up as high as 256 cores.
  • Due to how Merge Replication tracks changes with triggers, Merge Agents, and tracking tables, it will create locking contention with DML/ETL operations.  This contention degrades server performance which negatively impacts sync times with devices.  This contention should be mitigated by performing large INSERT/UPDATE/DELETE DML/ETL operations during a nightly maintenance window when Subscribers aren’t synchronizing.
  • Since Published databases result in slower DML/ETL operations, perform changes in bulk by using XML Stored Procedures to boost performance.
  • To improve the performance of pre-computed partitions when DML/ETL operations result in lots of data changes, ensure that changes to a Parent table in a join filter are made before corresponding changes in the child tables.  This means that when DML/ETL operations are pushing new data into SQL Server, they must add master data to the parent filter table first, and then add detail data to all the related child tables second, in order for that data to be pre-computed and optimized for sync.
  • Create filter partitions based on things that don’t change every day.  Partitions that are added and deleted from SQL Server and Subscribers that move from one partition to another is very disruptive to the performance of Merge Replication.
  • Always perform initializations and re-initializations over Wi-Fi or Ethernet when the device is docked because this is the slowest operation where the entire database must be downloaded instead of just deltas.  To determine a rough estimate for initialization time, divide the size of the resulting SSCE .sdf file by the bandwidth speed available to the device.  A file copy over the expected network will also yield estimates for minimum sync times.  These times don’t include the work SQL Server and IIS must perform to provide the data or data INSERT times on SSCE.
  • If your SQL Server Publisher hits a saturation point with too many concurrent mobile Subscribers, you can scale it out by creating a Server/Push Republishing hierarchy. Put the primary SQL Server Publisher at the top of the pyramid and have two or more SQL Servers subscribe to it. These can be unfiltered Subscriptions where all SQL Servers get the same data, or the Subscribers can filter their data feeds by region, for example. Then have the Subscribing SQL Servers Publish their Subscription for consumption by mobile SSCE clients.
  • Create just a single Publication.

Internet Information Services

  • Use the x64 version of the SQL Server Compact 3.5 SP2 Server Tools with Windows Server 2008 R2 running inside IIS 7.5.
  • Use a single Server Agent in a single Virtual Directory.
  • Ensure the IIS Virtual Directory where the Server Agent resides is on a fast solid-state drive that’s separate from the disk where Windows Server is installed to better support file I/O.
  • Use a low-end server with 2 processor cores and 2 GB of RAM to support 400 concurrent Subscribers queued at the same time.
  • Set the MAX_THREADS_PER_POOL Server Agent registry key equal to 2 to match the IIS processor cores and RAM. Do not set this value to a higher number than the number of cores.
  • Set the MAX_PENDING_REQUEST Server Agent registry key equal to 400 which means the Server Agent will queue up to 400 concurrent Subscribers waiting for one of the 2 worker threads to become available to sync with SQL Server.
  • Set the IIS Connection Limits property to 400 to prevent an unlimited number of connections reaching the Server Agent.
  • Add a new load-balanced IIS server for every additional 400 concurrent Subscribers you want to add to the system.

Subscriber

  • Use the appropriate x64, x86 or ARM version of SQL Server Compact 3.5 SP2 to take advantage of the PostSyncCleanup property of the SqlCeReplication object that can reduce the time it takes to perform an initial synchronization. Set the PostSyncCleanup property equal to 3 where neither UpdateStats nor CleanByRetention are performed.
  • Increase the Max Buffer Size connection string parameter to 1024 on a phone and 4096 on a PC to boost both replication and SQL query processing performance. If you have more RAM available, set those values even higher until you reach the law of diminishing returns.
  • Keep your SSCE database compact and fast by setting the Autoshrink Threshold connection string parameter to 10 so it starts reclaiming empty data pages once the database has become 10% fragmented.
  • Replication performance testing must be performed using actual PDAs to observe how available RAM, storage space and CPU speed affect moving data into the device’s memory area and how quickly this data is inserted into the SSCE database tables.  Since the SSCE database doubles in size during replication, the device must have enough storage available or the operation will fail.  Having plenty of available RAM is important so that SSCE can utilize its memory buffer to complete a Merge Replication operation more quickly.  With plenty of available RAM and storage, a fast CPU will make all operations faster.
  • The PDA must have at least an extra 32 MB of available free RAM that can be used by the .NET Compact Framework (NETCF) application.  If additional applications are running on the device at the same time, even more RAM is needed.  If a NETCF application has insufficient RAM, it will discard its compiled code and run in interpreted mode which will slow the application down.  If the NETCF app is still under memory pressure after discarding compiled code, Windows Mobile will first tell the application to return free memory to the operating system and then will terminate the app if needed.
  • Set the CompressionLevel property of the SqlCeReplication object to 0 for fast connections and increment it from 1 to 6 on slower connections like GPRS to increase speed and reduce bandwidth consumption.
  • Tune the ConnectionRetryTimeout, ConnectTimeout, ReceiveTimeout and SendTimeout properties of the SqlCeReplication object based on expected bandwidth speeds (see the C# sketch following this list):

    Property                 High Bandwidth   Medium Bandwidth   Low Bandwidth
    ConnectionRetryTimeout   30               60                 120
    ConnectTimeout           3000             6000               12000
    ReceiveTimeout           1000             3000               6000
    SendTimeout              1000             3000               6000
  • You can decrease potentially slow SSCE file I/O by adjusting the Flush Interval connection string parameter to write committed transactions to disk less often than the default of every 10 seconds.  Test longer intervals between flushes like 20 or 30 seconds. Keep in mind that these transactions can be lost if the disk or system fails before flushing occurs so be careful.
  • When replicating data that has been captured in the field by the device, perform Upload-only syncs to shorten the duration.
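
Here’s a hedged C# sketch that pulls several of the knobs above together on the SqlCeReplication object.  The URL, publication names, and connection string values are illustrative assumptions, and the timeouts use the Low Bandwidth column from the table above:

using System.Data.SqlServerCe;

SqlCeReplication repl = new SqlCeReplication();
repl.InternetUrl = "https://sync.example.com/sync/sqlcesa35.dll";   // IIS Server Agent
repl.Publisher = "SQLSERVER01";
repl.PublisherDatabase = "FieldServiceDB";
repl.Publication = "FieldServicePub";
repl.Subscriber = "Device001";   // flows into HOST_NAME() for Parameterized Filters
repl.SubscriberConnectionString =
    "Data Source=\\FieldService.sdf;Max Buffer Size=1024;Autoshrink Threshold=10;Flush Interval=20;";
repl.CompressionLevel = 6;           // 0 on fast links; 1-6 on slow links like GPRS
repl.ConnectionRetryTimeout = 120;   // Low Bandwidth values from the table above
repl.ConnectTimeout = 12000;
repl.ReceiveTimeout = 6000;
repl.SendTimeout = 6000;
repl.PostSyncCleanup = 3;            // skip both UpdateStats and CleanByRetention
repl.ExchangeType = ExchangeType.Upload;   // Upload-only sync for field-captured data

try
{
    repl.Synchronize();
}
finally
{
    repl.Dispose();
}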

Storage

  • Use a Fibre Channel SAN with 15k RPM or solid-state disks for best I/O performance.
  • Databases should reside on a RAID 10, unshared LUN comprised of at least 6 disks.
  • Database logs should reside on a RAID 10, unshared LUN comprised of at least 6 disks.
  • Tempdb should reside on a RAID 10, unshared LUN comprised of at least 6 disks.
  • The Tempdb log should reside on a RAID 10, unshared LUN comprised of at least 6 disks.
  • The Snapshot share should reside on a RAID 10, unshared LUN comprised of at least 6 disks.  This disk array should be large enough to accommodate a growing number of filtered Snapshots. Snapshot folders for Subscribers that no longer use the system must be manually deleted.
  • Merge Replication metadata tables should reside on a RAID 10, unshared LUN comprised of at least 6 disks.
  • Increase your Host Bus Adapter (HBA) queue depths to 64.
  • Your Publication database should be broken up into half as many files as the SQL Server has processor cores. Each file must be the same size.
  • Tempdb should be pre-sized with an auto-growth increment of 10%. It should be broken up into the same number of files as the SQL Server has processor cores. Each file must be the same size.

High Availability

  • Load-balance the IIS servers to scale them out. Enable Server Affinity (stickiness) since the Replication Session Control Blocks that transmit data between the Server Agent and SSCE are stateful. Test to ensure that your load-balancer is actually sending equal amounts of Subscriber sync traffic to each IIS server.  Some load-balancers can erroneously send all traffic to a single IIS server if not properly configured.
  • Implement Windows Clustering so that SQL Server can failover to a second node.
  • Use SQL Server Database Mirroring so that your Published database will fail over to a standby server.
  • Make a second SQL Server into an unfiltered Subscriber to your Publisher so that it can take over Merge Replication duties for mobile clients as a Republisher if the primary SQL Server fails. SSCE clients would just have to reinitialize their Subscriptions to begin synchronizing with the new Republisher.

Ongoing Maintenance

  • Use the Replication Monitor to have a real-time view of the synchronization performance of all your Subscribers.
  • Use the web-based SQL Server Compact Server Agent Statistics and Diagnostics tools to monitor the health and activity of the Server Agent running on IIS.
  • Create a SQL Job to execute the sp_MSmakegeneration stored procedure after large DML operations. Regular execution after INSERTING/UPDATING/DELETING data from either DML/ETL operations or after receiving lots of changes from Subscribers will maintain subsequent sync performance. Executing this stored procedure from the server-side is preferable to having it executed as a result of a Subscriber sync which would block all other Subscribers.
  • During your nightly maintenance window, rebuild the indexes and update the statistics of the following Merge Replication metadata tables:
    • MSmerge_contents
    • MSmerge_tombstone
    • MSmerge_genhistory
    • MSmerge_current_partition_mappings
    • MSmerge_past_partition_mappings
    • MSmerge_generation_partition_mappings
  • If you notice performance degradation during the day due to a large number of Subscribers making large changes to the database, you can update the statistics (with fullscan) of the Merge Replication metadata tables more frequently throughout the day to force stored proc recompiles to get a better query plan:
    • UPDATE STATISTICS MSmerge_generation_partition_mappings WITH FULLSCAN
    • UPDATE STATISTICS MSmerge_genhistory WITH FULLSCAN
  • Rebuild/defrag indexes on your database tables and Merge Replication metadata tables throughout the day to reduce locking contention and maintain performance.
  • Use the Missing Indexes feature of SQL Server to tell you which indexes you could add that would give your system a performance boost. Do not add recommended indexes to Merge system tables.
  • Use the Database Engine Tuning Advisor to give you comprehensive performance tuning recommendations that cover every aspect of SQL Server.
  • Monitor the performance of the following counters:
    • Processor Object: % Processor Time: This counter represents the percentage of processor utilization. A value over 80% is a CPU bottleneck.
    • System Object: Processor Queue Length: This counter represents the number of threads that are delayed in the processor Ready Queue and waiting to be scheduled for execution. A value over 2 is a bottleneck and shows that there is more work available than the processor can handle. Remember to divide the value by the number of processor cores on your server.
    • Memory Object: Available Mbytes: This counter represents the amount of physical memory available for allocation to a process or for system use. Values below 10% of total system RAM indicate that you need to add additional RAM to your server.
    • PhysicalDisk Object: % Disk Time: This counter represents the percentage of time that the selected disk is busy responding to read or write requests. A value greater than 50% is an I/O bottleneck.
    • PhysicalDisk Object: Average Disk Queue Length: This counter represents the average number of read/write requests that are queued on a given physical disk. If your disk queue length is greater than 2, you’ve got an I/O bottleneck with too many read/write operations waiting to be performed.
    • PhysicalDisk Object: Average Disk Seconds/Read and Disk Seconds/Write: These counters represent the average time in seconds of a read or write of data to and from a disk. A value of less than 10 ms is what you’re shooting for in terms of best performance. You can get by with subpar values between 10 – 20 ms but anything above that is considered slow. Times above 50 ms represent a very serious I/O bottleneck.
    • PhysicalDisk Object: Average Disk Reads/Second and Disk Writes/Second: These counters represent the rate of read and write operations against a given disk. You need to ensure that these values stay below 85% of a disk’s capacity by adding disks or reducing the load from SQL Server. Disk access times will increase exponentially when you get beyond 85% capacity.
  • A limited number of database schema changes can be made and synchronized down to SSCE Subscribers without any code changes which makes it easier to update your system as it evolves over time.
  • Use a Merge Replication Test Harness to stress test the entire system.  The ability to simulate hundreds or thousands of concurrent synchronizing Subscribers allows you to monitor performance and the load on the system.  This is helpful in properly configuring and tuning SQL Server, IIS, and the Windows Mobile devices.  It will tell you where you’re having problems and it will let you predict how much server hardware you will need to support growing numbers of Subscribers over time.  It’s also very important to simulate worst-case scenarios that you never expect to happen.

I hope this information sufficiently empowers you to take on the largest MEAP solutions that involve SQL Server Merge Replication and SQL Server Compact.  If you need a deeper dive, go check out my book on Enterprise Data Synchronization http://www.amazon.com/Enterprise-Synchronization-Microsoft-Compact-Replication/dp/0979891213/ref=sr_1_1?ie=UTF8&s=books&qid=1271964573&sr=1-1 over at Amazon.  Now go build a fast and scalable solution for your company or your customers.

Best Regards,

Rob

P.S.  If your solution doesn’t require all the advanced features found in Merge Replication, I highly recommend you use Remote Data Access (RDA).  This is a much simpler sync technology that’s extremely fast, scalable, and easier to manage.
