Looking to the Future of Cross-Platform Mobile Data Sync with SQL Server

Zumero

If you’ve used Merge Replication to sync SQL Server Compact data on mobile devices with SQL Server in the past and you’re looking for a cross-platform solution to take you into the future, take a look at Zumero.

As many of you know, I spent most of the first decade of the 21st century building massive-scale mobile enterprise solutions for the world’s largest companies. The backbone of many of those architectures was the Merge Replication technology built into SQL Server, which allowed mobile devices running the embedded SQL Server Compact database to sync data and take it offline for use with mobile apps. It was a great solution that took care of all the bi-directional, mobile-to-server data movement, conflict resolution and filtering without writing any code, allowing development teams to focus on their apps. Unfortunately, the journey for this Microsoft technology reached the end of the road with SQL Server 2012 and SQL Server Compact 3.5 SP2; it’s no longer supported in more recent versions of those products.

For those of you who need to keep synchronizing mobile data with SQL Server, the folks over at Zumero have a solution you should investigate. They smartly took a dependency on SQLite as the mobile database since it already runs on iOS, Android and Windows. The Zumero server runs on Internet Information Services (IIS) to broker communications between devices and your SQL Server database. This architecture should look familiar to those of you who have built mobile Merge Replication infrastructures in the past.

Zumero Architecture

They’ve even gone so far as to provide a migration document that will help you move from SQL Server Compact and Merge Replication to the Zumero offering. If this solution matches the scenario you’re targeting, I encourage you to perform your own due diligence and see if Zumero meets your needs.

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com, follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Wrap a Web API around Your Enterprise & Take Microsoft SQL Server 2014 Data Offline w/ NoSQL via Universal Apps for Windows

Windows NoSQL

At TechEd New Zealand, I presented a session on how to integrate a company’s backend systems into SQL Server 2014 and deliver that data out to mobile devices via Web APIs, supporting the operations of occasionally-connected apps through NoSQL tables.

Enterprise mobility is a top priority for Chief Information Officers who must empower employees and reach customers by moving data from backend systems out to apps on mobile devices.  This data must flow over inefficient wireless data networks, be consumable by any mobile device, and scale to support millions of users while delivering exceptional performance.  Since wireless coverage is inconsistent, apps must store this data offline so users can be productive in the absence of connectivity.

In this video, I’ll teach you how to mash up disparate backend systems into high-speed, SQL Server 2014 in-memory staging tables.  I boost the speed even further through the use of natively-compiled stored procedures and then link them to fast and scalable REST + JSON APIs using the ASP.NET Web API while employing techniques such as in-memory caching.  On the device, I’ll show you how your apps can work with offline data via in-memory NoSQL tables that use LINQ to support the same CRUD operations as relational databases.  You’ll walk away from this session with the ability to deliver flexible server solutions that work on-premises or in Azure and device solutions that work with Windows Phones, tablets or laptops.


Supercharge your Mobile Line of Business Solutions with SQLite and Offline Data Sync via Universal Apps for Windows

Windows Phone SQLite

At TechEd New Zealand, I presented a session on Microsoft’s next generation data sync technology from Azure Mobile Services that uses SQLite to support the operations of occasionally-connected apps on mobile devices.

Most mobile apps require the ability to store data locally to deal with the realities of a disconnected world where ubiquitous wireless networks are non-existent.  While many consumer apps get by with saving light amounts of information in small files like XML or JSON, the data requirements of mobile line-of-business apps for the enterprise are significantly greater.  With the Microsoft Open Technologies Portable Class Library for SQLite, .NET developers can build structured data storage into their Universal Windows Apps.  In this video I guide you through creating local databases and tables and show you how to work with offline data via CRUD operations on both Windows and Windows Phone.  I also demonstrate the new data sync capability in Microsoft Azure Mobile Services, which supports conflict resolution and uses SQLite for local data storage and change tracking.  There’s no faster way to build robust mobile apps to meet your most demanding enterprise needs.


How to Create In-Memory Database Tables in SQL Server 2014

SQL Server 2014

Getting data off disk drives and into RAM is the biggest game changer for relational databases in decades and SQL Server 2014 brings it to the masses.

RAM is cheap and it’s finally time to reap the benefits of 64-bit computing.

SQL Server In-Memory OLTP, also known as Hekaton, is here and it’s ready to transform your business.  Unlike other recent entries in the in-memory database space, SQL Server 2014 integrates this new technology directly into the database engine instead of shipping it as a separate add-on.  Additionally, existing SQL Server DBAs and developers will feel right at home building memory-optimized databases with the same SQL Server Management Studio they’ve used for years.  Not having to retrain your staff is pretty cool.

Benefits of using SQL Server 2014 In-Memory OLTP include:

  • In-memory execution for low-latency data retrieval vs. disk-bound I/O
  • Elimination of contention (locks, latches, spinlocks) from concurrent data Inserts and Updates due to optimistic concurrency control (row versioning without using TempDB)
  • Disk I/O reduction or elimination depending on the selected data durability (more on this later)
  • 5x – 25x performance improvement and the equivalent throughput of 5 – 10 scaled-out database servers

Create a Memory-Optimized Database

  • Create a normal database in SQL Server Management Studio

Create Database

  • Add a Memory Optimized Data filegroup to ensure data durability

Memory Optimized Filegroup

  • Add a FILESTREAM Data file type with Unlimited Autogrowth/Maxsize

Filestream Data
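If you’d rather script these steps than click through the SSMS dialogs, the equivalent T-SQL looks roughly like this. This is a sketch, not the exact output of the dialogs; the database name and file paths are hypothetical:

```sql
-- A script sketch of the three SSMS steps above (hypothetical name and paths)
CREATE DATABASE TechEDNZ2014
ON PRIMARY
    (NAME = N'TechEDNZ2014_data', FILENAME = N'C:\Data\TechEDNZ2014.mdf'),
-- The Memory Optimized Data filegroup uses FILESTREAM containers under the
-- covers; autogrowth/maxsize defaults to unlimited for the container
FILEGROUP TechEDNZ2014_mod CONTAINS MEMORY_OPTIMIZED_DATA
    (NAME = N'TechEDNZ2014_mod', FILENAME = N'C:\Data\TechEDNZ2014_mod')
LOG ON
    (NAME = N'TechEDNZ2014_log', FILENAME = N'C:\Data\TechEDNZ2014.ldf')
GO
```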

Create Memory-Optimized Tables

  • Right-click on the Tables folder of the database you just created and select New | Memory Optimized Table… to get a starter SQL script
  • Create and execute CREATE TABLE scripts to create one or more tables where MEMORY_OPTIMIZED=ON (example in a sec)
  • Set DURABILITY=SCHEMA_ONLY for staging tables to prevent transaction logging and checkpoint I/O (this means only the schema but no actual data will be saved to disk in the event of a server restart)
  • Set DURABILITY=SCHEMA_AND_DATA for standard tables (this saves the schema and in-memory data to disk in the background with the option to delay durability for better performance by not immediately flushing transaction log writes)

Here’s an example of a SQL script to create a memory-optimized Customer table with an Id, FirstName and LastName column:

USE TechEDNZ2014
GO
CREATE TABLE [dbo].[Customer] (
    [Id] uniqueidentifier NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000) DEFAULT (NEWID()), 
    [FirstName] nvarchar(50),
    [LastName] nvarchar(50)
) 
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO
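To round out the two DURABILITY options from the list above, here’s a hypothetical SCHEMA_ONLY staging variant of the same table, along with the delayed durability setting mentioned earlier. The CustomerStaging name is my own, not from the session:

```sql
USE TechEDNZ2014
GO
-- Hypothetical staging table: the schema survives a restart but the data
-- does not, so its rows generate no transaction log or checkpoint I/O
CREATE TABLE [dbo].[CustomerStaging] (
    [Id] uniqueidentifier NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
    [FirstName] nvarchar(50),
    [LastName] nvarchar(50)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
-- Optional: let transactions trade a small durability window for speed by
-- not flushing log writes immediately (applies to SCHEMA_AND_DATA tables)
ALTER DATABASE TechEDNZ2014 SET DELAYED_DURABILITY = ALLOWED
GO
```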

Create Natively-Compiled Stored Procedures

Just when you thought performance couldn’t get any better, SQL Server 2014 rewrites the book on stored procedures.  Your T-SQL code now compiles to native DLLs (by way of C code), which minimizes code execution time to further boost performance and scalability.  Furthermore, natively-compiled procedures significantly reduce CPU usage on your SQL Server box because far fewer instructions are needed to execute the same work.

Here’s an example of a SQL script to create a natively-compiled stored procedure to retrieve data from the memory-optimized Customer table you just created:

USE TechEDNZ2014
GO
create procedure [dbo].[CustomerSelect]
with native_compilation, schemabinding, execute as owner
as 
begin atomic with
(
    transaction isolation level = snapshot, 
    language = N'English'
)
    SELECT [Id], [FirstName], [LastName] FROM [dbo].[Customer];
end
GO
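And here’s a hypothetical companion procedure, sketched in the same style, that writes a row into the memory-optimized Customer table. The CustomerInsert name is my own:

```sql
USE TechEDNZ2014
GO
-- Hypothetical natively-compiled insert procedure for the Customer table
create procedure [dbo].[CustomerInsert]
    @Id uniqueidentifier,
    @FirstName nvarchar(50),
    @LastName nvarchar(50)
with native_compilation, schemabinding, execute as owner
as
begin atomic with
(
    transaction isolation level = snapshot,
    language = N'English'
)
    -- Schemabinding requires two-part object names
    INSERT INTO [dbo].[Customer] ([Id], [FirstName], [LastName])
    VALUES (@Id, @FirstName, @LastName);
end
GO
```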

I’m hoping by now you’re feeling the need for speed.

I’ve heard plenty of reports from companies that upgraded from previous versions of SQL Server to SQL Server 2014 and instantly doubled their performance.  That’s before converting any disk-based tables to in-memory tables, which is pretty incredible and well worth the upgrade on its own.  Just knowing that you can jump from a 2x performance increase to anywhere from 5x to 25x is mind-boggling.

Most of you know me as a mobile strategist, architect and developer.  Being a mobile guy doesn’t mean I don’t think about the server.  In fact, across all the large-scale enterprise mobile solutions I’ve designed for Fortune 500 companies, I figure I spend more than 70% of my time ensuring that servers are fast and can scale.  With SQL Server 2014 at the heart of most enterprise systems, just imagine how delighted all your mobile users will be when their apps become dramatically more responsive.


Enterprise Mobility on STL Tech Talk CodeCast

STLTechTalk

I was thrilled to join Gus Emery (@n_f_e) and JJ Hammond (@jjhammondmusic) for a lively discussion of the past and future of Microsoft enterprise mobility on CodeCast Episode 12 of @STLTechTalk.

These guys are doing great work in the developer community! Go check out their site!


//build/ : Wrap a Mobile API around your Enterprise and take Data Offline with NoSQL on Windows Phones and Tablets

BuildSession

For those of you who couldn’t make it to San Francisco, here’s my session on Wrapping a Mobile API around your Enterprise and taking Data Offline with NoSQL on Windows Phones and Tablets from //build/.

Enterprise mobility is a top priority for Chief Information Officers who must empower employees and reach customers by moving data from backend systems out to apps on mobile devices. This data must flow over inefficient wireless data networks, be consumable by any mobile device, and scale to support millions of users while delivering exceptional performance. Since wireless coverage is inconsistent, apps must store this data offline so users can be productive in the absence of connectivity.

In this video you’ll learn how to build fast and scalable REST + JSON APIs using the ASP.NET Web API while employing techniques such as data sharding and in-memory caching. On the device, you’ll learn how your apps can work with offline data via in-memory NoSQL tables that use LINQ to support the same CRUD operations as relational databases. You’ll walk away from this session with the ability to deliver flexible server solutions that work on-premises or in Azure and device solutions that work with Windows Phones and Tablets.

Download the two Visual Studio projects and associated source code from GitHub:
https://github.com/robtiffany/build-2014-mobile-api


Keeping Windows 8 Tablets in Sync with SQL Server 2012

Windows 8 Book Front

I’m pleased to announce that my newest book, “Keeping Windows 8 Tablets in Sync with SQL Server 2012,” is now available for sale.

Spending a decade travelling the globe to help the world’s largest companies design and build mobile solutions has taught me a few things.  Large organizations are not interested in constantly running on the new-technology hamster wheel.  They prefer to leverage existing investments, skills, and technologies rather than always chasing the next big thing.  Don’t believe me?  Take mobile and the cloud, for example:

  • In 2003 I was building Pocket PC solutions for large companies that wirelessly connected apps on those devices to SAP.  I assumed mobile was going mainstream that year.  I was wrong.  I was early.  Mobile apps wouldn’t explode until the end of the decade with the iPhone 3G.
  • In 2004, my partner Darren Flatt and I launched the first cloud-based mobile device management (MDM) company to facilitate software distribution and policy enforcement on early smartphones and handhelds.  Early again.  MDM didn’t get big until the end of the decade.
  • At PDC in 2008, my company launched our cloud offering called Azure.  We skipped directly to the developer Nirvana called Platform as a Service (PaaS).  I spent a few years doing nothing but speaking and writing about Windows Phones communicating with Web Roles.  Turns out companies wanted to take smaller steps to the cloud by uploading their existing servers as VMs.

Being early over and over again taught me how the real world of business operates outside of Redmond and Silicon Valley.  Businesses need to make money doing what they do best.  Where appropriate, they will use technology to help them improve their processes and give them a competitive advantage.  So let’s cut to the chase and talk about why I wrote my new book:

  • Tablets and Smartphones are taking over the world of business and outselling laptops and desktops.  This is a well-known fact and not speculation on my part.
  • There are 1.3 billion Windows laptops, tablets, and desktops being used all over the world.  Windows 7 is in first place with Windows XP in second.
  • Companies run their businesses on Microsoft Office combined with tens of millions of Win32 apps they created internally over the last 2 decades.  Intranet-based web apps also became a huge force starting in the late 90s.
  • Tools like Visual Basic, Access, PowerBuilder, Java, and Delphi made it easy to rapidly build those Win32 line of business apps in the 90s and helped ensure the success of Windows in the enterprise.
  • Many of those developers moved to VB and C# in the 2000s to build .NET Windows Forms (WinForms) apps that leveraged their existing Visual Basic skills from the 90s.
  • Some businesses built Service Oriented Architecture (SOA) infrastructures of Web Services based on SOAP and XML over the last decade in order to connect mobile devices to their servers.  Most businesses did not, and instead opted for out-of-the-box solutions that didn’t require them to write a lot of code so they could get to market faster.
  • While the “white collar” enterprise recently started building business apps for the iPhone and iPad, the “blue collar” enterprise has been building WinForms apps for rugged Windows Mobile devices using the .NET Compact Framework and a mobile database called SQL Server Compact for over a decade.
  • Most businesses run servers in their own data centers.  Many of them are using virtualization technologies like Hyper-V and VMware to help them create a private cloud.
  • Of the businesses that have dipped their collective toes in the public cloud for internal apps, most of them are following the Infrastructure as a Service (IaaS) model where they upload their own servers in a VM.  Just look at the success of Amazon and the interest in Azure Infrastructure Services.

So the goal of my new book is to help businesses transition to the tablet era in a way that respects their existing investments, skills, technologies, enterprise security requirements, and appetite for risk.

Windows 8 Book Front

Since I’ve been involved in countless mobile projects where companies used the Microsoft data sync technologies already baked into SQL Server and SQL Server Compact, I decided to illustrate how to virtualize this sync infrastructure with Hyper-V.  With an eye towards existing trends that are widely embraced, this gives businesses the flexibility to use this proven technology in a private, public, or hybrid cloud.  Companies authenticate their employees against the same Active Directory they’ve used for over a decade.  I’m deadly serious about security, and you’ll be glad to know the technology in this book handles it at every tier of your solution with Domain credentials plus encrypted data-at-rest and data-in-transit.  You also have the option of synchronizing mobile data with any edition of SQL Server 2005, 2008 or 2012 using Microsoft sync technologies that take care of all the data movement plumbing.  Your development team avoids writing thousands of lines of code to create web services, sync logic, change tracking, error handling, and retry logic.  With Microsoft lowering risk to your project by taking care of the server backend, security, and data sync technologies, your team can focus on building the best possible Windows 8 tablet app for the enterprise.

Speaking of tablet app development, it’s important to show you a path that doesn’t force you to learn all-new tools, programming languages, frameworks, or paradigms.  As a developer, you get to keep using Visual Studio along with the Desktop WinForms skills you’ve mastered over the last decade.  Better still, you can accomplish everything using the free version of Visual Studio 2012.  While you might be thinking Windows 8 tablet solutions must be created via Windows Store apps, this is not the case.  Instead, I show you how to apply Modern UI principles to Desktop WinForms apps that are full-screen and touch-first.  Concepts like content over chrome, use of typography, and UI elements with large hit targets are all covered in detail.  I also respect your investment in Windows 7 laptops and tablets by ensuring your touch apps are backwards compatible and keyboard + mouse/trackpad friendly.

Windows 8 Book Back

If you’re looking to build a new Windows 8 tablet app using what you have and what you know, this book is for you.  If you’re looking to port an existing Windows XP or Windows Mobile WinForms app to a Windows 8 tablet, this book empowers you with the skills to make your porting effort a successful one.

The takeaway is you don’t have to scrap your existing investments to participate in the tablet revolution.  I purposely made the book low-cost, hands-on, short, and to-the-point so you can rapidly build mobile solutions for Windows 8 tablets instead of wasting your time with theory.  Click here to take “Keeping Windows 8 Tablets in Sync with SQL Server 2012” for a spin so you can start building mobile apps for the world’s first and only enterprise-class tablet today.

Stay in Sync!

-Rob



Windows Azure Infrastructure Services are Live

I’m pleased to announce that Windows Azure fully supports Infrastructure as a Service (IaaS).

This new service now makes it possible for companies to move their existing servers and applications into the cloud.  We understand that customers don’t want to rip and replace their current infrastructure to benefit from the cloud; they want the strengths of their on-premises investments and the flexibility of the cloud.  It’s not only about Infrastructure as a Service (IaaS) or Platform as a Service (PaaS), it’s about Infrastructure Services and Platform Services and hybrid scenarios.  The cloud should be an enabler for innovation and Windows Azure can now be an extension of your organization’s IT fabric.

The Windows Azure Virtual Machines and Windows Azure Virtual Network are now available to help you meet your changing business needs by providing an on-demand, scalable infrastructure.  Not only can these VMs support up to 8 CPU cores, but we’ve added higher memory instances that include up to 56 GB of RAM.  These infrastructure services allow you to extend your data centers and business-critical workloads into the cloud while leveraging your existing skills and investments.

Today we are also announcing a commitment to match Amazon Web Services prices for commodity services such as compute, storage and bandwidth.  This starts with reducing our GA prices on Virtual Machines and Cloud Services by 21-33%.  Windows Azure is now your most price-competitive cloud option.  At the same time, Microsoft provides you a financially backed 99.95% monthly SLA when you deploy multiple instances of Virtual Machines.

Not only are prebuilt Linux images such as Ubuntu, CentOS, and SUSE Linux Enterprise Server available through the Windows Azure Virtual Machine Image Gallery, but so are Windows Server 2012, SQL Server 2012, BizTalk Server 2013 and SharePoint Server 2013.  We also provide server support for Dynamics GP 2013, Dynamics NAV 2013, Forefront Identity Manager 2010 R2 SP1, Project Server 2013, System Center 2012 SP1, and Team Foundation Server 2012.

On a personal note, I’m happy to see this breathe new life into all the mobile data sync solutions that have been deployed in data centers all over the world.  You’ll now be able to take advantage of Windows Azure VMs so all your mobile devices running SQL Server Compact can synchronize business data with SQL Server in the cloud.

-Rob



Building Microsoft MEAP: Scaling Out SQL Server 2012

Super Scale

In this third article on building Microsoft MEAP, I’ll show you how to shard your SQL Server 2012 database using replication to create a shared-nothing data architecture that supports Internet-scale mobile solutions.

In the previous article, I discussed Gartner’s Enterprise Application Integration (EAI) tools critical capability for building a Mobile Enterprise Application Platform (MEAP) using Microsoft technologies.  Tying into multiple backend packages and data sources is an essential CIO requirement for moving enterprise data out to mobile devices, and SQL Server Integration Services (SSIS) performs this task beautifully.

Like virtually every enterprise and Internet-based application or website, the database is the heart of the system.  This is also true for MEAP systems.  Don’t be fooled by MEAP vendors that use clever marketing terms to cover up this fact.  The Mobile Middleware often associated with MEAP systems is a database and some kind of web/app server or HTTP listener.  Staging tables in the database are needed to cache data moving between devices and backend systems.  Since mobile devices run on the Internet via mobile data networks, web/app servers are needed to transmit data over HTTPS.  The problem with most on-premises and even some cloud-based MEAP solutions is that they can’t deliver Internet scalability.  I’m talking tens of thousands, hundreds of thousands, or even millions of devices.

When it comes to boosting performance and supporting more concurrent clients, you think of scaling up with beefier hardware and scaling out with more servers. Unfortunately, most systems I’ve observed around the world limit their scaling to load-balanced web/app servers pointed at a single database.  I know, hard to believe.

SQL Server + 2 IIS Servers

Obviously, this only takes you so far before you run out of gas.  Infrastructure Architects need to aim higher and learn from the scalability best practices of the world’s largest search engines, social networks, and e-commerce sites.  Guess what, if your favorite social network has a billion users, a single database and a bunch of load-balanced web/app servers are going to melt down in the first few nanoseconds of operation.  Instead, multiple layers of scale-out architectures are employed to support a large percentage of the global population, and the databases are no exception.  Since databases are usually responsible for around 90% of all system bottlenecks, I would venture to say scaling out your database is one of the most important things you can do.

Data replication technologies are used by the world’s largest sites to horizontally partition relational and NoSQL databases.  Google coined the term “sharding” to refer to the shared-nothing database partitioning used by its Bigtable architecture.  Common sense tells you that even the world’s most powerful database running on the world’s biggest server will eventually hit a saturation point.  Replicating either complete or partial copies of your big database to tens of thousands of commodity servers that don’t share a common disk is the key to scaling out.  Sorry about the big SAN you just spent a few million dollars on.

The other nuance you see with these large, replicated systems is breaking up the reads and writes into different groups of servers.  Following the 80/20 rule, most clients on the Internet are SELECTing data while a much smaller group are uploading INSERTs, UPDATEs, and DELETEs.  It therefore follows that the bulk of the replicated database shards are read-only and load-balanced just like web servers.  These servers have a lower amount of replication overhead because data updates only come to them in one direction from the top of the hierarchy.  A second, smaller group of load-balanced database servers handles all the writes coming in from the Internet clients.  They track and merge those changes into the multi-master databases at the top of the hierarchy.

Hopefully, all this makes sense to you.  To put it simply, instead of a billion people trying to hit a single database, smaller fractions of a billion people are hitting multiple copies of that same database.  Additionally, these database shards are smaller and therefore faster because disk I/O is reduced as the ratio between memory and data on disk improves.  Of course, it can get more granular than that.  Instead of replicating complete copies of your database, you could replicate each table down to its own individual database server to scale out even further.  If that’s not enough, you can replicate different ranges of table rows down to their own individual database servers so that a single table spans multiple servers.  In all cases, your app servers would maintain the intelligence to know which sharded database servers to communicate with in order to get the right answer for a mobile user.  It also means that you need to design your databases differently to avoid table JOINs across replicated server nodes.

Super Scale

While you’re probably well-versed in scaling out your stateless web/app servers using load-balancers, there’s one other ingredient that’s commonly used across the world’s largest systems.  Distributed caching.  While scaling out replicated database shards and web/app servers make it possible to support a large chunk of the planet, it’s caching that takes this performance and scalability to the next level.  It’s great that the answer to your query is found on a nearby, replicated database shard.  But what if your answer was already in RAM and you didn’t have to query the database in the first place?  A tier of distributed caching servers holding terabytes of data in RAM is what helps your Facebook page load fast when you and a few hundred million of your best friends are all trying to access the site at the same time.

Now that you have a bite-sized primer on how the world’s largest systems scale and perform, you’re probably wondering if you can do the same thing to scale your MEAP solution using the Microsoft servers and technologies that you already own.

The answer is Yes!

While I understand that you might not need to support millions of users with your solution, the important thing to take away from this article is that with Microsoft MEAP you can.  As you can guess from the title of this article, I’m going to focus on scaling out SQL Server since this might be more unfamiliar to you than scaling out Internet Information Services (IIS) using the Network Load Balancing (NLB) feature of Windows Server.  Also, if you’re wondering about that cool distributed caching thing, we have that too in the form of the Windows Server AppFabric Cache.  Both our load-balancing and distributed cache technologies are included features of Windows Server.

Let’s jump into scaling SQL Server since it’s the heart of Microsoft MEAP.  From a scale-up perspective, here are a few things you can do with SQL Server Standard Edition:

  • It supports up to 64 GB of RAM.  Hey, RAM is cheap.  Buy all 64 GB and allow your database to run in memory as much as possible to avoid disk I/O.
  • It supports up to 16 cores.  While you might use this many cores (or more) on the SQL Server at the top of your hierarchy, you won’t need this many for your shard servers.  Faster clock speeds and the largest possible L3 shared cache you can find are key.
  • It’s time to make the move from spinning disks to solid state drives (SSD).  Upgrading to SATA SSDs can give you up to a 100x performance boost over their rotating counterparts.  Do your research when looking at different vendors.
  • Make sure the replicated data is moving as fast as possible between your servers by using 10 gigabit Ethernet network cards and switches.  Make sure to keep your network drivers updated as your vendor releases new ones.
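As a quick sketch of that first bullet, the memory ceiling can be set with sp_configure.  The 61,440 MB figure is an assumed value that sits just under the 64 GB Standard Edition limit while leaving headroom for the operating system:

```sql
-- Hypothetical scale-up tweak: cap SQL Server's memory near the 64 GB
-- Standard Edition limit, leaving a few GB for the OS
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)', 61440;
RECONFIGURE;
GO
```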

Let’s scale out!

The first thing to consider when scaling out any database is that the schema must be designed to support breaking the database apart.  In other words, think about how you might turn your relational database with referential integrity, foreign keys, and other constraints into something that looks more like a NoSQL database.  Denormalize to eliminate performance-killing JOINs.  Keep in mind that if you can’t build a database this way, or you have an existing database that you can’t change, you’ll need to use Transactional Peer-to-Peer Replication to make complete copies of your database to scale out.  Normally, I would prefer that you only use P2P or AlwaysOn to maintain a separate replica hierarchy in another data center.

Those of you who know me are aware that I’ve been lucky enough to build many of the largest mobile systems in the world utilizing SQL Server Merge Replication.  I’ve also done my best to teach others what I’ve learned along the way.  This time around, I’m going to show you how to use this same replication technology to create a variety of database shards.  As you might imagine, I’ll use the ContosoFruit database of data aggregated via SSIS from the backend data sources from the last article.

ContosoFruit

Replication in SQL Server uses the concepts of Publishers to describe which database is making its data available and Subscribers to describe which databases are consuming replicas.  As you might imagine, the ContosoFruit database will be the Publisher.  That being said, since I will be creating 3 shards, one for each table, I won’t be publishing the entire database as a single entity.  You’ll have 3 Publications instead.  In order to create a new Publication based on this database, I need you to expand the Replication folder in the Object Explorer.

Customer Publication

Right click on the Local Publications folder and select New Publication to launch the New Publication Wizard.  Click Next.

New Publication Wizard

  • If your Distributor isn’t already configured, you’ll be taken to the Distributor dialog where you will select the first radio button to allow your SQL Server to act as its own Distributor.  Click Next.
  • If your SQL Server Agent isn’t already configured, you’ll be taken to the SQL Server Agent Start dialog where you should select the Yes radio button to start the Agent service automatically.  Click Next.
  • If you don’t yet have a folder to hold the initial database snapshots, you’ll be taken to the Snapshot Folder dialog.  Before entering a path in the Snapshot folder text box, create a folder on your local PC called Snapshot.  Share that folder as a network share that’s available to Everyone with Read/Write permissions.  Now go back to the Snapshot folder text box and enter \\MachineName\Snapshot and then click Next.
  • Click Next to move to the Publication Database dialog.
  • Select the ContosoFruit database and click Next.
  • In the Publication Type dialog, select Merge Publication and click Next.
  • In the Subscriber Types dialog, select SQL Server 2008 or later and click Next.
  • In the Articles dialog, expand the Tables tree view and check the Customers check box.  This means that the Customers table and all its columns will be replicated.  Check the Highlighted table is download-only check box.  This ensures that only changes made to the Publisher at the top of the hierarchy will be replicated down to read-only Subscribers.  Click Next.
  • In the Article Issues dialog you’re informed that a Uniqueidentifier column will be added to your table.  Merge replication uses this to track each row for changes. If your Primary Key is already a Uniqueidentifier, the system will use it instead of adding a new one.  Uniqueness is an important part of any data sync system. Click Next.
  • In the Filter Table Row dialog, I don’t want you to create any filters because I want this Customers shard to contain the complete table.  Click Next.
  • In the Snapshot Agent dialog, check both check boxes.  Click the Change button and in the New Job Schedule dialog, change the Recurs every: text box to 1 day(s) instead of 14.  Click OK and then click Next.
  • In the Agent Security dialog, click the Security Settings button.  In the Snapshot Agent Security dialog, select the second radio button to run it under the SQL Server Agent service account.  You won’t do this in production but we’re doing it now for expediency’s sake.  Click OK and then click Next.
  • In the Wizard Actions dialog, check the first check box and click Next.
  • In the Complete the Wizard dialog, enter CustomerShard and click Finish.
  • In the Creating Publication dialog, if everything succeeds, click Close.
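If you’d rather script the publication than click through the wizard, the steps above correspond roughly to the following T-SQL.  The parameter values here are illustrative; the script the wizard itself can generate for you is the authoritative version.

```sql
-- Rough T-SQL equivalent of the CustomerShard wizard steps (illustrative).
USE ContosoFruit;

-- Enable the database for merge publishing.
EXEC sp_replicationdboption
     @dbname  = N'ContosoFruit',
     @optname = N'merge publish',
     @value   = N'true';

-- Create the Merge Publication.
EXEC sp_addmergepublication
     @publication = N'CustomerShard',
     @sync_mode   = N'native',
     @retention   = 14;

-- Snapshot Agent job recurring daily (frequency_type 4 = daily).
EXEC sp_addpublication_snapshot
     @publication        = N'CustomerShard',
     @frequency_type     = 4,
     @frequency_interval = 1;

-- Publish the Customers table as a download-only article:
-- @subscriber_upload_options 2 = changes not allowed at the Subscriber.
EXEC sp_addmergearticle
     @publication   = N'CustomerShard',
     @article       = N'Customers',
     @source_owner  = N'dbo',
     @source_object = N'Customers',
     @type          = N'table',
     @subscriber_upload_options = 2;
```

Scripting pays off once you’re creating many near-identical shard Publications like the ones that follow.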

Sorry about all those tedious steps.  Keep in mind that the next two shard Publications will be easier to create so let’s get to it.

Product Publication

Like before, expand the Replication folder in the Object Explorer, right click on the Local Publications folder, select New Publication to launch the New Publication Wizard and click Next.

  • In the Publication Database dialog, select the ContosoFruit database and click Next.
  • In the Publication Type dialog, select Merge Publication and click Next.
  • In the Subscriber Types dialog, select SQL Server 2008 or later and click Next.
  • In the Articles dialog, expand the Tables tree view and check the Products check box.  Check the Highlighted table is download-only check box.  Click Next.
  • In the Article Issues dialog you’re informed that a Uniqueidentifier column will be added to your table.  Click Next.
  • In the Filter Table Row dialog, I don’t want you to create any filters because I want this Products shard to contain the complete table.  Click Next.
  • In the Snapshot Agent dialog, check both check boxes.  Click the Change button and in the New Job Schedule dialog, change the Recurs every: text box to 1 day(s) instead of 14.  Click OK and then click Next.
  • In the Agent Security dialog, click the Security Settings button.  In the Snapshot Agent Security dialog, select the second radio button to run it under the SQL Server Agent service account.  Click OK and then click Next.
  • In the Wizard Actions dialog, check the first check box and click Next.
  • In the Complete the Wizard dialog, enter ProductShard and click Finish.
  • In the Creating Publication dialog, if everything succeeds, click Close.

Only one more shard to go.  This one will be different because it’s designed to support write operations coming in from mobile devices.

Order Publication

Expand the Replication folder in the Object Explorer, right click on the Local Publications folder, select New Publication to launch the New Publication Wizard and click Next.

  • In the Publication Database dialog, select the ContosoFruit database and click Next.
  • In the Publication Type dialog, select Merge Publication and click Next.
  • In the Subscriber Types dialog, select SQL Server 2008 or later and click Next.
  • In the Articles dialog, expand the Tables tree view and check the Orders check box.  Click Next.
  • In the Article Issues dialog you’re informed that a Uniqueidentifier column will be added to your table.  Click Next.
  • In the Filter Table Row dialog, click the Add button and select Add Filter.  In the Add Filter dialog, go to the Filter statement text box and add 1 = 0 to the end of the WHERE clause.  The filter should look like the following when you’re done:

Add Filter

  • Using 1 = 0 as the table filter causes Replication to work in an upload-only manner.  When the Orders table is synchronized, only the empty shell of the table will be created in the Subscriber database.  Any new data added to it will be uploaded to ContosoFruit and then removed from the Subscriber database.  Click OK and then click Next.
  • In the Snapshot Agent dialog, check both check boxes.  Click the Change button and in the New Job Schedule dialog, change the Recurs every: text box to 1 day(s) instead of 14.  Click OK and then click Next.
  • In the Agent Security dialog, click the Security Settings button.  In the Snapshot Agent Security dialog, select the second radio button to run it under the SQL Server Agent service account.  Click OK and then click Next.
  • In the Wizard Actions dialog, check the first check box and click Next.
  • In the Complete the Wizard dialog, enter OrderShard and click Finish.
  • In the Creating Publication dialog, if everything succeeds, click Close.
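If you script this Publication rather than using the wizard, the interesting part is passing the 1 = 0 filter as the article’s subset filter clause (as before, the values are illustrative and the wizard-generated script is authoritative):

```sql
-- Illustrative: publish Orders with a row filter that matches no rows,
-- so nothing is downloaded to Subscribers and their inserts are
-- uploaded to the Publisher and then purged locally.
EXEC sp_addmergearticle
     @publication   = N'OrderShard',
     @article       = N'Orders',
     @source_owner  = N'dbo',
     @source_object = N'Orders',
     @type          = N'table',
     @subset_filterclause = N'1 = 0';
```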

You now have 3 Publications and you can have as many Subscription databases as you need to scale out.  In this example, I will just have you create 1 Subscription to match each Publication.

Customer Subscription

Expand the Replication folder in the Object Explorer, right click on the Local Subscriptions folder, select New Subscription to launch the New Subscription Wizard and click Next.

  • In the Publication dialog, select CustomerShard and click Next.
  • In the Merge Agent Location dialog, select the first radio button to run all agents at the Distributor.  Click Next.
  • In the Subscribers dialog, check the check box for the local SQL Server that you’re using.  Click the Subscription Database combo box and select New Database.  In the New Database dialog, enter Customer1 in the Database name text box and click OK.  Click Next.
  • In the Merge Agent Security dialog, click the ellipsis on the far right.  In the dialog, select the second radio button to run under the SQL Server Agent service account.  You won’t use this security option in production.  Click OK and click Next.
  • In the Synchronization Schedule dialog, click the Agent Schedule combo box and select Run Continuously.  This is the obvious choice since you’re creating a real-time, OLTP database.  Click Next.
  • In the Initialize Subscriptions dialog, stick with the default value of initializing immediately and click Next.
  • In the Subscription Type dialog, click the Subscription Type combo box and select Client.  This prevents new data from being added at the Subscriber and uploaded back to the Publisher.  Click Next.
  • In the Wizard Actions dialog, check the first check box and click Next.
  • In the Complete the Wizard dialog, click Finish.  If everything succeeds, click Close.
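The scripted equivalent of this push Subscription looks roughly like the following.  The Subscriber server name is a placeholder, and as with the Publications, the wizard-generated script is the authoritative version:

```sql
-- Illustrative: create a client (Local) push subscription at the Publisher.
EXEC sp_addmergesubscription
     @publication       = N'CustomerShard',
     @subscriber        = N'MachineName',   -- your Subscriber server name
     @subscriber_db     = N'Customer1',
     @subscription_type = N'Push',
     @subscriber_type   = N'Local';         -- Local = client subscription

-- Merge Agent job running continuously (frequency_type 64).
EXEC sp_addmergepushsubscription_agent
     @publication    = N'CustomerShard',
     @subscriber     = N'MachineName',
     @subscriber_db  = N'Customer1',
     @frequency_type = 64;
```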

To verify that all your settings are correct and that everything is working, open your new Customer1 database, right click on the Customers table and Select Top 1000 Rows. The table should be filled with the same list of customers that you find in the ContosoFruit database.  The next test is to add a new row in the ContosoFruit Customers table, wait for several seconds, and then refresh the Customer1 Customers table.  The new row should appear and you now have your first read-only database shard based on SQL Server.
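If you prefer queries to the SSMS context menus, the same smoke test looks roughly like this, assuming the Customers table has just the Id and Name columns and you supply an unused Id value:

```sql
-- At the Publisher: add a test customer (the Id value is illustrative).
INSERT INTO ContosoFruit.dbo.Customers (Id, Name)
VALUES (999, N'Sync Test');

-- A few seconds later, at the Subscriber: the new row should be present.
SELECT * FROM Customer1.dbo.Customers WHERE Id = 999;
```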

Product Subscription

Expand the Replication folder in the Object Explorer, right click on the Local Subscriptions folder, select New Subscription to launch the New Subscription Wizard and click Next.

  • In the Publication dialog, select ProductShard and click Next.
  • In the Merge Agent Location dialog, select the first radio button to run all agents at the Distributor.  Click Next.
  • In the Subscribers dialog, check the check box for the local SQL Server that you’re using.  Click the Subscription Database combo box and select New Database.  In the New Database dialog, enter Product1 in the Database name text box and click OK.  Click Next.
  • In the Merge Agent Security dialog, click the ellipsis on the far right.  In the dialog, select the second radio button to run under the SQL Server Agent service account.  Click OK and click Next.
  • In the Synchronization Schedule dialog, click the Agent Schedule combo box and select Run Continuously.  Click Next.
  • In the Initialize Subscriptions dialog, stick with the default value of initializing immediately and click Next.
  • In the Subscription Type dialog, click the Subscription Type combo box and select Client.   Click Next.
  • In the Wizard Actions dialog, check the first check box and click Next.
  • In the Complete the Wizard dialog, click Finish.  If everything succeeds, click Close.

To verify that all your settings are correct and that everything is working, open your new Product1 database, right click on the Products table and Select Top 1000 Rows.  The table should be filled with the same list of products that you find in the ContosoFruit database.  The next test is to add a new row in the ContosoFruit Products table, wait for several seconds, and then refresh the Product1 Products table.  The new row should appear and you now have your second read-only database shard based on SQL Server.

Order Subscription

Expand the Replication folder in the Object Explorer, right click on the Local Subscriptions folder, select New Subscription to launch the New Subscription Wizard and click Next.

  • In the Publication dialog, select OrderShard and click Next.
  • In the Merge Agent Location dialog, select the first radio button to run all agents at the Distributor.  Click Next.
  • In the Subscribers dialog, check the check box for the local SQL Server that you’re using.  Click the Subscription Database combo box and select New Database.  In the New Database dialog, enter Order1 in the Database name text box and click OK.  Click Next.
  • In the Merge Agent Security dialog, click the ellipsis on the far right.  In the dialog, select the second radio button to run under the SQL Server Agent service account.  You won’t use this security option in production.  Click OK and click Next.
  • In the Synchronization Schedule dialog, click the Agent Schedule combo box and select Run Continuously.  This is the obvious choice since you’re creating a real-time, OLTP database.  Click Next.
  • In the Initialize Subscriptions dialog, stick with the default value of initializing immediately and click Next.
  • In the Subscription Type dialog, click the Subscription Type combo box and select Client.  This prevents new data from being added at the Subscriber and uploaded back to the Publisher.  Click Next.
  • In the Wizard Actions dialog, check the first check box and click Next.
  • In the Complete the Wizard dialog, click Finish.  If everything succeeds, click Close.

To verify that all your settings are correct and that everything is working, open your new Order1 database, right click on the Orders table and Select Top 1000 Rows.  The table should be empty.  The next test is to add a new row in the empty Order1 Orders table, wait for several seconds, and then refresh the ContosoFruit Orders table.  The new row should appear and you now have your writable database shard based on SQL Server.

Congratulations!  You’ve scaled out one database into one writable database shard and two read-only shards as shown below:

Publications and Subscriptions

The ContosoFruit database will no longer have to bear the brunt of all your mobile devices retrieving Customers and Products while uploading new Orders.  ContosoFruit will only see 3 connections moving and merging data back and forth instead of thousands.

3 Pubs + 3 Subs

The web services I’ll be showing you how to create in the next article will point to the appropriate shards from the IIS app servers.  Keep in mind that in a production system, you’ll need to create at least 2 load-balanced SQL Servers for each shard in order to maintain high availability.

Now that you know how to shard complete tables to n database server nodes, you probably want to know how to shard at an even more granular level.  I’m talking about scaling ranges of table rows across multiple server nodes.  The example you always hear about is partitioning a Customer table with tens of millions of rows by the first letter of a customer’s last name.  For instance, node 1 gets customers A – I, node 2 gets J – R, and node 3 gets S – Z.

Customers A - Z

You can slice and dice this any way you want.  You could even have 26 separate nodes for every letter of the alphabet if you need that level of scale.  Keep in mind that you won’t necessarily get an even distribution of table rows across nodes since the “S” node will have dramatically more customers than the “Q” node.  Using a customer Id column to filter on might yield better results when it comes to numerically balancing the load.  Speaking of balancing, as the number of rows in a given table increases, you will find that some nodes start to have more rows than others.  From time to time, you’ll need to re-balance them.
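To make the routing concrete, here is a hypothetical helper (the function name and boundary values are mine, not part of this series) that maps a customer Id to a shard; your middleware would use something like this to decide which Subscriber database to query:

```sql
-- Hypothetical shard-routing function: fixed Id ranges per shard.
-- Adjust the boundary values when you re-balance.
CREATE FUNCTION dbo.GetCustomerShard (@CustomerId INT)
RETURNS NVARCHAR(30)
AS
BEGIN
    RETURN CASE
        WHEN @CustomerId <= 1000000 THEN N'CustomerShard1'
        WHEN @CustomerId <= 2000000 THEN N'CustomerShard2'
        ELSE N'CustomerShard3'
    END;
END;
```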

Luckily, Merge Replication Publications support row filtering, which makes this more granular level of sharding pretty simple.  Keep in mind that you will only do this type of filtering for your download-only/read-only shards.  For this example, I’m going to create 2 Customer Publications that filter the rows based on the Id column in order to get 2 nodes with a roughly equal number of customers.

Customer (First Half) Publication

Expand the Replication folder in the Object Explorer, right click on the Local Publications folder, select New Publication to launch the New Publication Wizard and click Next.

  • In the Publication Database dialog, select the ContosoFruit database and click Next.
  • In the Publication Type dialog, select Merge Publication and click Next.
  • In the Subscriber Types dialog, select SQL Server 2008 or later and click Next.
  • In the Articles dialog, expand the Tables tree view and check the Customers check box.  Check the Highlighted table is download-only check box.  Click Next.
  • In the Article Issues dialog you’re informed that a Uniqueidentifier column will be added to your table.  Click Next.
  • In the Filter Table Row dialog, click the Add button and select Add Filter.   In the Add Filter dialog, go to the Filter statement text box and add Id <= (SELECT COUNT(*)/2 FROM [dbo].[Customers]) to the end of the WHERE clause.  The filter should look like the following when you’re done:

Add Filter

  • The subquery in the WHERE clause calculates the total number of rows, divides them by 2, and then returns a list of customers whose Id is less than or equal to the midpoint of the list.  Note that this assumes the Id values are contiguous and start at 1; if your table has gaps, filter on the median Id value instead.  Click OK and then click Next.
  • In the Snapshot Agent dialog, check both check boxes.  Click the Change button and in the New Job Schedule dialog, change the Recurs every: text box to 1 day(s) instead of 14.  Click OK and then click Next.
  • In the Agent Security dialog, click the Security Settings button.  In the Snapshot Agent Security dialog, select the second radio button to run it under the SQL Server Agent service account.  Click OK and then click Next.
  • In the Wizard Actions dialog, check the first check box and click Next.
  • In the Complete the Wizard dialog, enter CustomerFirstHalfShard and click Finish.
  • In the Creating Publication dialog, if everything succeeds, click Close.
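When scripting, this row filter goes into the article’s subset filter clause.  The following is illustrative; it simply passes the same WHERE clause fragment the wizard accepted above:

```sql
-- Illustrative: publish the first half of the Customers table,
-- download-only, filtered on the Id column.
EXEC sp_addmergearticle
     @publication   = N'CustomerFirstHalfShard',
     @article       = N'Customers',
     @source_owner  = N'dbo',
     @source_object = N'Customers',
     @type          = N'table',
     @subscriber_upload_options = 2,
     @subset_filterclause =
         N'Id <= (SELECT COUNT(*)/2 FROM [dbo].[Customers])';
```

The second-half Publication is identical except for the name and the `>` comparison in the filter clause.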

Customer (Second Half) Publication

Expand the Replication folder in the Object Explorer, right click on the Local Publications folder, select New Publication to launch the New Publication Wizard and click Next.

  • In the Publication Database dialog, select the ContosoFruit database and click Next.
  • In the Publication Type dialog, select Merge Publication and click Next.
  • In the Subscriber Types dialog, select SQL Server 2008 or later and click Next.
  • In the Articles dialog, expand the Tables tree view and check the Customers check box.  Check the Highlighted table is download-only check box.  Click Next.
  • In the Filter Table Row dialog, click the Add button and select Add Filter.   In the Add Filter dialog, go to the Filter statement text box and add Id > (SELECT COUNT(*)/2 FROM [dbo].[Customers]) to the end of the WHERE clause.  The filter should look like the following when you’re done:

Add Filter

  • The subquery in the WHERE clause calculates the total number of rows, divides them by 2, and then returns a list of customers whose Id is greater than the midpoint of the list.  Click OK and then click Next.
  • In the Snapshot Agent dialog, check both check boxes.  Click the Change button and in the New Job Schedule dialog, change the Recurs every: text box to 1 day(s) instead of 14.  Click OK and then click Next.
  • In the Agent Security dialog, click the Security Settings button.  In the Snapshot Agent Security dialog, select the second radio button to run it under the SQL Server Agent service account.  Click OK and then click Next.
  • In the Wizard Actions dialog, check the first check box and click Next.
  • In the Complete the Wizard dialog, enter CustomerSecondHalfShard and click Finish.
  • In the Creating Publication dialog, if everything succeeds, click Close.

Customer (First Half) Subscription

Expand the Replication folder in the Object Explorer, right click on the Local Subscriptions folder, select New Subscription to launch the New Subscription Wizard and click Next.

  • In the Publication dialog, select CustomerFirstHalfShard and click Next.
  • In the Merge Agent Location dialog, select the first radio button to run all agents at the Distributor.  Click Next.
  • In the Subscribers dialog, check the check box for the local SQL Server that you’re using.  Click the Subscription Database combo box and select New Database.  In the New Database dialog, enter CustomerFirstHalf in the Database name text box and click OK.  Click Next.
  • In the Merge Agent Security dialog, click the ellipsis on the far right.  In the dialog, select the second radio button to run under the SQL Server Agent service account.  Click OK and click Next.
  • In the Synchronization Schedule dialog, click the Agent Schedule combo box and select Run Continuously.  Click Next.
  • In the Initialize Subscriptions dialog, stick with the default value of initializing immediately and click Next.
  • In the Subscription Type dialog, click the Subscription Type combo box and select Client.   Click Next.
  • In the Wizard Actions dialog, check the first check box and click Next.
  • In the Complete the Wizard dialog, click Finish.  If everything succeeds, click Close.

To verify that all your settings are correct and that everything is working, open your new CustomerFirstHalf database, right click on the Customers table and Select Top 1000 Rows.  The table should be filled with the first half of the customers that you find in the ContosoFruit database.

Customer (Second Half) Subscription

Expand the Replication folder in the Object Explorer, right click on the Local Subscriptions folder, select New Subscription to launch the New Subscription Wizard and click Next.

  • In the Publication dialog, select CustomerSecondHalfShard and click Next.
  • In the Merge Agent Location dialog, select the first radio button to run all agents at the Distributor.  Click Next.
  • In the Subscribers dialog, check the check box for the local SQL Server that you’re using.  Click the Subscription Database combo box and select New Database.  In the New Database dialog, enter CustomerSecondHalf in the Database name text box and click OK.  Click Next.
  • In the Merge Agent Security dialog, click the ellipsis on the far right.  In the dialog, select the second radio button to run under the SQL Server Agent service account.  Click OK and click Next.
  • In the Synchronization Schedule dialog, click the Agent Schedule combo box and select Run Continuously.  Click Next.
  • In the Initialize Subscriptions dialog, stick with the default value of initializing immediately and click Next.
  • In the Subscription Type dialog, click the Subscription Type combo box and select Client.   Click Next.
  • In the Wizard Actions dialog, check the first check box and click Next.
  • In the Complete the Wizard dialog, click Finish.  If everything succeeds, click Close.

To verify that all your settings are correct and that everything is working, open your new CustomerSecondHalf database, right click on the Customers table and Select Top 1000 Rows.  The table should be filled with the second half of the customers that you find in the ContosoFruit database.

Customer First + Second Half

You’ve now scaled out one Customer table shard into two read-only shards that split the number of customers evenly as shown below:

Publications and Subscriptions

Hopefully, you now see the power of horizontally scaling out SQL Server into shards of partial or complete tables.  When you take this shared-nothing architecture into production, you’ll have n SQL Server Subscriber nodes with their own storage, CPU, memory, and networking.  Merge Replication is a powerful, supported component of the SQL Server database engine that allows the Microsoft MEAP mobile middleware to meet your performance and scalability needs just like the world’s largest Internet sites.  In the next article in the Building Microsoft MEAP series, I’ll show you how to build REST web services that connect to the various database shards in order to expose their data out to smartphones and tablets.

Keep scaling!

– Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany


Building Microsoft MEAP: Adapters

In this second article on building Microsoft MEAP, I’ll focus on implementing Gartner’s Enterprise Application Integration Tools critical capability using SQL Server Integration Services (SSIS) to connect to back end systems.

As I mentioned in the Introduction article, one of the top priorities for CIOs today is extending critical data from their backend systems out to the wireless devices used by employees.  This can often be easier said than done.  If your backend ERP, CRM, and other bespoke systems provide efficient, resilient, wireless-friendly connectivity and mobile client apps for smartphones, tablets, and laptops, then you’re in good shape.  Similarly, if your organization has spent the last decade building a mature Service Oriented Architecture (SOA) to expose your backend data sources, then your mobile devices have a way to consume that composite data.  Of course, you need to migrate those bloated SOAP + XML web services to something lighter and faster like REST + JSON.  On the other hand, if your organization is deficient in some of these areas, you need a Mobile Enterprise Application Platform (MEAP) with the adapters needed to connect those backend systems and data sources to Mobile Middleware.

As a recap, let’s take a look at the Gartner critical capabilities that pertain to backend adapters and the tooling needed to make those connections:

Enterprise Application Integration Tools:

  • Gartner Definition:  Tools for integration of mobile server with back end systems, both bespoke & purchased apps or application suites.
  • Microsoft Offering: SQL Server Integration Services (SSIS), Visual Studio SQL Server Data Tools.
  • Value Proposition:  Developers visually compose connections, actions, events and data movement rather than writing separate sets of integration code.  Adapters provide consistent connectivity to dozens of backend systems and data sources.  Microsoft is providing unrivaled, easy to use, drag and drop tools to connect ETL adapters with backend systems and databases.

Integrated Development Environment:

  • Gartner Definition: Dedicated environment or plug-in for composing backend server & client side logic, including UI.
  • Microsoft Offering: Visual Studio
  • Value Proposition:  As the world’s most widely-used commercial IDE, you’re more likely to find plenty of proficient developers than with any other MEAP offering.  Additionally, developers are more productive since they don’t have to use different or specialized tools to target laptops, tablets, smartphones, servers, or the cloud.  Competing MEAP vendors have unfamiliar native and hybrid SDKs or 4GLs while Microsoft has millions of seasoned developers.

The key takeaway here is that Microsoft provides easy to use Enterprise Application Integration (EAI) Tooling in the form of SQL Server Data Tools in Visual Studio (IDE) and adapter technology in the form of SSIS.  This allows you to pull composite data into SQL Server (Mobile Middleware) for aggregation.  Keep in mind that the value of the EAI technology found in a MEAP vendor’s offering is derived from the following:
  • Must be easy to use and should connect to multiple backend systems in a consistent way.  In other words, if you have to connect to 20 different systems and you’re required to write unique code or connect 20 different ways then your MEAP package has failed.  Anyone can find a way to integrate with any system.  Doing so elegantly and consistently so that your people only have to be trained once is what you’re paying for.  Microsoft provides visual drag and drop tooling to make this task as simple as possible.
  • The more backend packages and data sources you can connect to the better.  It goes without saying that if your MEAP package can only connect to a handful of backend systems, it won’t be very valuable to your enterprise.  That being said, stay on the lookout for MEAP vendors that provide an extensive list of backend systems they can connect to – but connect to all of them via widely different methods.  SSIS can access data from any heterogeneous data source, package, message bus or interface including:
    • Database Systems: Oracle, Teradata, IBM DB2, SQL Server, MySQL, SQL Server Compact, Sybase, Access, PostgreSQL, Informix, FoxPro, Ingres, VSAM, IMS, LDAP, Lotus Notes, ADABAS, ADO, ADO.NET, ODBC, OLEDB (All databases)
    • Packages: SAP, Siebel, Dynamics, Hyperion, Salesforce, SharePoint
    • HTTP (Web Services), FTP, SMTP
    • File, Flatfile, Excel, EDI, XML
    • MSMQ, IBM MQ Series, Tibco Rendezvous, WebSphere, webMethods, SeeBeyond
  • The speed with which the data moves between backend packages and data sources and your SQL Server Mobile Middleware is critical.  Being able to interface your MEAP package with backend systems won’t be good enough if it can’t meet corporate performance SLAs.  Business operations in today’s real-time enterprise move at the speed of light and your MEAP package must do the same.  Luckily, Microsoft is ahead of the pack in this regard with its in-memory solution since it holds the world ETL record for moving in excess of 2 TB of data per hour (650+ MB/second).  It should come as no surprise that SSIS is depended on by more customers than any other ETL solution in the world.
  • Last but certainly not least, since Gartner requires security at every tier of any MEAP solution, EAI data movement between the Mobile Middleware server and backend systems must also be secure.  Microsoft provides the ability to password protect and encrypt all SSIS packages.  Furthermore, once your composite data is aggregated inside SQL Server, it is encrypted at rest.

Now that you know the facts about Gartner’s EAI critical capability and what to expect from Microsoft and other MEAP vendors, it’s time to make things real.  Theory is great, but seeing something in action is better and much more believable.  To keep things simple, I’ll build the EAI critical capability of Microsoft MEAP on my Windows 8 laptop.  I’ve got SQL Server 2012 installed and I’ll use 3 Access databases to represent a backend CRM, ERP, and mainframe.  I figured you’d be more likely to reproduce my examples on your own PC using Access than if I chose to connect to Microsoft Dynamics CRM and SAP.

CRM

To represent customers you might find in a CRM system, I’ve created an Access database with a simple Customers table with a schema that includes Id and Name:

I’ve filled the table with a short list of customers that we’ll use:

I’ve also created a Customers table in SQL Server to serve as the data destination for the CRM Access database.

ERP

To represent products you might find in an ERP system, I’ve created an Access database with a simple Products table with a schema that includes Id, Name, and Quantity:

I’ve filled the table with a short list of products that we’ll use:

I’ve also created a Products table in SQL Server to serve as the data destination for the ERP Access database.

Mainframe

To represent orders you might submit to a mainframe, I’ve created an Access database with a simple Orders table with a schema that includes Id, CustomerId, ProductId, and Quantity:

Since the mainframe is the destination after a mobile transaction is completed, the Orders table is currently empty.  As you might imagine, I’ve created an Orders table in SQL Server to serve as the data source for the Mainframe Access database.
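For reference, the three SQL Server tables described above can be sketched as follows.  The article only names the columns, so the data types here are assumptions:

```sql
-- Illustrative schemas for the three Mobile Middleware tables
-- (column types are assumed; the article only lists the column names).
CREATE TABLE dbo.Customers (
    Id   INT NOT NULL PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL
);

CREATE TABLE dbo.Products (
    Id       INT NOT NULL PRIMARY KEY,
    Name     NVARCHAR(100) NOT NULL,
    Quantity INT NOT NULL
);

CREATE TABLE dbo.Orders (
    Id         INT NOT NULL PRIMARY KEY,
    CustomerId INT NOT NULL,
    ProductId  INT NOT NULL,
    Quantity   INT NOT NULL
);
```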

Now it’s time to get started building the SSIS package to perform the data movement to and from our Mobile Middleware.  Since I have SQL Server 2012 installed, all I need to do is launch SQL Server Data Tools and create a new Integration Services Project.  I called my Solution “Adapters.”

In order to connect the 3 Access databases to SQL Server, you’ll need to create a few connections.  Go the bottom-center of the screen and right-click inside the Connection Managers tab area and select New Connection.  In the Add SSIS Connection Manager dialog, select OLEDB and click Add.

In the Configure OLE DB Connection Manager, click New.  In the Connection Manager dialog, select the SQL Server Native Client as the Provider, the name of your server, the appropriate Windows or SQL Server Authentication credentials, the name of the database, and click Test Connection to ensure everything is correct.  If everything checks out okay, click OK twice.

Now it’s time to create the Access connections.  Right-click again to create a new OLEDB connection to the CRM database.  Since I’m using Office 2013, I used the Office 15 Access Database Engine OLE DB Provider and pointed to the path on my laptop where my database file resides.  You might use a different Access driver depending on what you have installed on your PC.  As before, test your connection and then repeat this process to create Connection Managers for the ERP and Mainframe databases.
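Behind these dialogs, each Connection Manager ultimately boils down to an OLE DB connection string.  The sketches below show roughly what gets generated; the server name, database name, and file path are placeholders you’d replace with your own environment’s values:

```text
; SQL Server 2012 via the SQL Server Native Client
Provider=SQLNCLI11.1;Data Source=(local);Initial Catalog=YourDatabase;Integrated Security=SSPI;

; Access 2013 file via the Office 15 (ACE) OLE DB provider
Provider=Microsoft.ACE.OLEDB.15.0;Data Source=C:\Adapters\CRM.accdb;
```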

Before we move on to create the Data Flows, I want you to view the Properties for the SSIS Package.  Earlier in the article, I mentioned how important it is to secure every tier of any MEAP solution, and integration is no different.  If you scroll down to the ProtectionLevel property, you’ll see a variety of ways to encrypt your Package.

One other thing you’ll need to configure has to do with using 32-bit Access drivers in an SSIS environment that defaults to 64-bit when debugging.  In the Solution Explorer, right-click on Adapters and select Properties.  In the Adapters Property Pages dialog, expand Configuration Properties and select Debugging.  If you happen to be using a 32-bit version of Access on your PC, set Run64BitRuntime to False to get things working properly.

Data will move between SQL Server and our 3 Access databases through the use of Data Flow Tasks.  I now want you to drag a Data Flow Task from the SSIS Toolbox and drop it on the open workspace area beneath the Control Flow tab.  Rename it to CRM Data Flow Task.

Double-click on the new CRM Data Flow Task and you will be taken to the Data Flow tab.  In the SSIS Toolbox, expand Other Sources and drag OLE DB Source onto the open workspace.  Double-click on it to bring up the OLE DB Source Editor.  Select your CRM database in the OLE DB connection manager combo box, select Table or View in the Data access mode combo box, select Customers in the Name of the table or the view combo box, and then click OK.

To complete the CRM connection to our Mobile Middleware, expand Other Destinations in the SSIS Toolbox and drag OLE DB Destination onto the open workspace.  Click on the original OLE DB Source and drag the blue arrow to make a connection with the OLE DB Destination.  Double-click on OLE DB Destination to bring up the OLE DB Destination Editor.  Select your SQL Server database in the OLE DB connection manager combo box, select Table or View in the Data access mode combo box, and select [dbo].[Customers] in the Name of the table or the view combo box.  Finally, click on Mappings to ensure you have the appropriate linkages between the Available Input Columns and the Available Destination Columns.

If everything looks good, click OK.  At this point you should Save and Build your solution just to verify there are no errors.  Now it’s time to test this Data Flow.  Click the Play button or hit F5 on your keyboard to try it out.

If your OLE DB Source and OLE DB Destination have green circles with check marks inside, then there’s a good chance your data transferred without issues.  The connecting arrow should display 5 rows.  The final check is to go into SQL Server Management Studio and refresh the Customers table to verify that the 5 customers made it from Access to SQL Server.
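If you prefer to verify from a query window rather than by refreshing the table, a quick count works too (this assumes the dbo.Customers destination table described earlier):

```sql
-- Expect a count of 5 if the Data Flow succeeded
SELECT COUNT(*) AS CustomerCount
FROM dbo.Customers;
```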

The 5 customers made it over, so now it’s time to complete the other 2 connections.

Go back to the Control Flow tab, drag a Data Flow Task from the SSIS Toolbox, and drop it on the open workspace area.  Rename it to ERP Data Flow Task.  Double-click on the new ERP Data Flow Task and you will be taken to the Data Flow tab.  In the SSIS Toolbox, expand Other Sources and drag OLE DB Source onto the open workspace.  Double-click on it to bring up the OLE DB Source Editor.  Select your ERP database in the OLE DB connection manager combo box, select Table or View in the Data access mode combo box, select Products in the Name of the table or the view combo box, and then click OK.

To complete the ERP connection to our Mobile Middleware, expand Other Destinations in the SSIS Toolbox and drag OLE DB Destination onto the open workspace.  Click on the original OLE DB Source and drag the blue arrow to make a connection with the OLE DB Destination.  Double-click on OLE DB Destination to bring up the OLE DB Destination Editor.  Select your SQL Server database in the OLE DB connection manager combo box, select Table or View in the Data access mode combo box, and select [dbo].[Products] in the Name of the table or the view combo box.  Click on Mappings to ensure you have the appropriate linkages between the Available Input Columns and the Available Destination Columns.  If everything looks good, click OK.  Save and Build your solution to verify there are no errors, and then test this Data Flow.  Look for the green circles with check marks, and look inside SQL Server Management Studio to ensure the Products made it over.

The only connection left to make is Orders, so return to the Control Flow tab, drag a Data Flow Task from the SSIS Toolbox, and drop it on the open workspace area.  Rename it to Orders Data Flow Task.  Double-click on the new Orders Data Flow Task and you will be taken to the Data Flow tab.  In the SSIS Toolbox, expand Other Sources and drag OLE DB Source onto the open workspace.  Double-click on it to bring up the OLE DB Source Editor.  Select your SQL Server database in the OLE DB connection manager combo box, select Table or View in the Data access mode combo box, select [dbo].[Orders] in the Name of the table or the view combo box, and then click OK.

To complete the Orders connection to our backend data source, expand Other Destinations in the SSIS Toolbox and drag OLE DB Destination onto the open workspace.  Click on the original OLE DB Source and drag the blue arrow to make a connection with the OLE DB Destination.  Double-click on OLE DB Destination to bring up the OLE DB Destination Editor.  Select your Mainframe Access database in the OLE DB connection manager combo box, select Table or View in the Data access mode combo box, and select Orders in the Name of the table or the view combo box.  Click on Mappings to ensure you have the appropriate linkages between the Available Input Columns and the Available Destination Columns.  If everything looks good, click OK.  Save and Build your solution to verify there are no errors, and then test this Data Flow.

Keep in mind that this Data Flow goes in the reverse direction of the first two.  An Order placed on a smartphone or tablet will make its way to SQL Server via Web Services and wireless data networks.  This makes things a little hard to test, so we’ll need to insert some dummy data to mock up this scenario.  Launch SQL Server Management Studio and manually insert a row into the Orders table.  My Id is an Identity column, and I inserted 1, 1, 1 in CustomerId, ProductId, and Quantity.  You can now test this inside Visual Studio and look for the green circles with check marks to verify that things worked.  Last but not least, take a look inside your Mainframe Access database to ensure the Orders made it over.
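For reference, the dummy Order described above comes down to a one-line T-SQL statement; Id is omitted from the column list because it’s an Identity column:

```sql
-- Mock up a mobile order so the reverse-direction Data Flow has something to move
INSERT INTO dbo.Orders (CustomerId, ProductId, Quantity)
VALUES (1, 1, 1);
```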

Congratulations on making it to the end of this exercise!

As I show you how to build Microsoft MEAP, my goal is to illustrate how easy this can be.  After walking you through the exercise in this article, the takeaway is that integrating your Mobile Middleware (SQL Server) with backend systems (the Access databases) is a simple, visual, drag-and-drop operation.

I hope you now have a good understanding of Gartner’s Enterprise Application Integration (EAI) critical capability for MEAP.  I’m also hoping you see how easy it is to perform this EAI to multiple backend systems and data sources using the Microsoft technology your organization already owns.  In the next article I’ll show you how to create .NET Business Objects that model the aggregated data schema you’ve created in SQL Server.  Then I’ll show you how to expose that data to your mobile devices via the ASP.NET Web API.

-Rob