Performance and Memory Management Improvements with Windows Embedded Handheld

A lot has changed since the launch of Windows Phone in the Fall of 2010.

Microsoft now has a compelling phone platform that targets consumers inside and outside the office.  One thing that hasn’t changed is the widespread use of Windows Embedded Handheld to solve tough enterprise mobility problems.  It should be no surprise that over 80% of enterprise handhelds shipped are running Windows Mobile or Windows Embedded Handheld.  They include support for barcode scanning, RFID reading, rugged hardware, every type of wireless, full device encryption, complete over-the-air software distribution and device management support, FIPS compliance, and both capacitive touch and stylus operation.  On the application platform side of the equation, they have rich support for WinForms development using Visual Studio and the .NET Compact Framework, C++, and a full-featured database with built-in sync capabilities via SQL Server Compact.  They can easily communicate with WCF SOAP and REST web services running on Windows Servers on-premises or with Azure in the cloud.  Support for Merge Replication means faster time to market, letting devices synchronize with SQL Server with almost no coding.

Since Windows Embedded Handheld uses an advanced version of the operating system kernel used by Windows Mobile 6.5.3, many of the techniques and best practices I’ve taught customers and developers all over the world still apply.  While it still uses the slotted memory model found in Windows CE 5, with 32 processes and 32 MB of virtual memory per process, numerous enhancements and a lot of tuning have taken place to give your line of business apps more of what they need.  I’m talking about more memory per process and improved performance.

A recent Gartner report recommends that organizations stay with Windows Embedded Handheld as the best mobile platform for enterprise line of business needs.  Great devices are available from OEMs like Intermec, Motorola, Psion, and Honeywell, just to name a few.  I hope this video helps you with any memory management or performance issues you may need to deal with in your enterprise mobile apps.

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


SQL Server Compact 4.0 Lands on the Web


A decade has passed since I first started using SQL CE on my Compaq iPAQ.  What started as a great upgrade to Pocket Access turned into the ultimate embedded database for Windows CE, the Pocket PC, Windows Mobile and Windows Phones.  The one-two punch of Outlook Mobile synchronizing email with Exchange and SQL Server Compact synchronizing data with SQL Server helped set the mobile enterprise on fire.  In 2005, version 3.0 supported Windows Tablets and progressive enhancements to the code base led to full Windows support on both x86 and x64 platforms.  With the new version 4.0, the little-database-that-could has grown up into a powerful server database ready to take on the web.

We’ve come a long way and you’re probably wondering what qualifies this new embedded database to take on the Internet:

  • Native support for x64 Windows Servers
  • Virtual memory usage has been optimized to ensure the database can support up to 256 open connections – (Are you actually using 256 pooled connections with your “Big” database today?)
  • Supports databases up to 4 GB in size – (Feel free to implement your own data sharding scheme)
  • Developed, stress-tested, and tuned to support ASP.NET web applications
  • Avoids the interprocess communications performance hit by running in-process with your web application
  • Row-level locking to boost concurrency
  • Step up to Government + Military grade security with the SHA-2 algorithm to secure data with FIPS compliance
  • Enhanced data reliability via true atomicity, consistency, isolation, and durability (ACID) support
  • Transaction support to commit and roll back grouped changes
  • Full referential integrity with cascading deletes and updates
  • Supports the ADO.NET Entity Framework 4 – (Do I hear WCF Data Services?)
  • Paging queries are supported via T-SQL syntax to only return the data you actually need

Wow, that’s quite a list!  SQL Server Compact 4.0 databases are easily developed using the new WebMatrix IDE or through Visual Studio 2010 SP1.  I’m loving the new ASP.NET Web Pages.  It reminds me of the good old days of building web applications with Classic ASP back in the 90’s with Visual InterDev and Homesite.
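That last bullet about paging deserves a quick illustration.  Here’s a minimal sketch of a paged query against a SQL Server Compact 4.0 database file; the Orders.sdf database, its table and its columns are hypothetical placeholders, but the OFFSET/FETCH syntax is the new T-SQL paging support:

using System;
using System.Data.SqlServerCe;

class PagingExample
{
    static void Main()
    {
        // Orders.sdf, the Orders table and its columns are placeholders for your own schema
        using (SqlCeConnection conn = new SqlCeConnection(@"Data Source=|DataDirectory|\Orders.sdf"))
        {
            conn.Open();

            // Skip the first 50 rows and return only the next 25
            string sql = @"SELECT OrderId, CustomerName, OrderDate
                           FROM Orders
                           ORDER BY OrderDate DESC
                           OFFSET 50 ROWS FETCH NEXT 25 ROWS ONLY";

            using (SqlCeCommand cmd = new SqlCeCommand(sql, conn))
            using (SqlCeDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["OrderId"], reader["CustomerName"]);
                }
            }
        }
    }
}

Only the rows for the page you’re rendering are materialized, which is exactly what you want behind an ASP.NET grid.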

What about Mobility?

Since SQL Server Compact owes its heritage to mobile and embedded versions of Windows, you might want to know what our story is there.  The good news is that you can build and deploy v4.0 databases on Windows XP, Windows Vista, and Windows 7.  If you want to implement an occasionally-connected solution that utilizes the Sync Framework, Remote Data Access (RDA), or Merge Replication, you’ll need to stick with SQL Server Compact 3.5 SP2.  Time and resource constraints prevented the Compact team from enabling these features.  Luckily, single-user WPF/WinForms database applications running on Windows Slates, laptops and Windows Embedded Handheld devices will work just fine with the v3.5 SP2 runtime.  Get a jumpstart by picking up “Enterprise Data Synchronization with Microsoft SQL Server 2008 and SQL Server Compact 3.5 Mobile Merge Replication” at http://www.amazon.com/Enterprise-Synchronization-Microsoft-Compact-Replication/dp/0979891213/ref=sr_1_1?s=books&ie=UTF8&qid=1281715114&sr=1-1 to start building those MEAP solutions.

With the tidal wave of Windows Slates hitting the market, a secure, powerful mobile database that allows users to work offline and syncs with SQL Server is definitely going to be a hot item!

So run, don’t walk to the Microsoft Download site to download the Next-Gen database for the web:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=033cfb76-5382-44fb-bc7e-b3c8174832e2

If you need to support occasionally-connected mobile applications with sync capabilities on multiple Windows platforms, download SQL Server Compact 3.5 SP2:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e497988a-c93a-404c-b161-3a0b323dce24

Keep Syncing,

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Reducing SQL Server Sync I/O Contention :: Tip 3

Primary Key
GUIDs and Clustered Indexes

Uniqueness is a key factor when synchronizing data between SQL Server/Azure and multiple endpoints like Slates and Smartphones.  With data simultaneously created and updated on servers and clients, ensuring rows are unique to avoid key collisions is critical.  As you know, each row is uniquely identified by its Primary Key.

Primary Key

When creating Primary Keys, it’s common to use a compound key based on things like account numbers, insert time and other appropriate business items.  Based on what I see from my customers, it’s even more popular to create Identity columns for the Primary Key using an Int or BigInt data type.  When you designate a column (or columns) as the Primary Key, SQL Server backs it with a Clustered Index by default.  Clustered indexes are faster than normal indexes for sequential values because the B-Tree leaf nodes are the actual data pages on disk, rather than just pointers to data pages.

While Identity Columns work well in most database situations, they often break down in a data synchronization scenario since multiple clients could find themselves creating new rows using the same key value.  When these clients sync their data with SQL Server, key collisions would occur.  Merge Replication includes a feature that hands out blocks of Identity Ranges to each client to prevent this.

When using other Microsoft sync technologies like the Sync Framework or RDA, no such Identity Range mechanism exists and therefore I often see GUIDs utilized as Primary Keys to ensure uniqueness across all endpoints.  In fact, I see this more and more with Merge Replication too since SQL Server adds a GUID column to the end of each row for tracking purposes anyway.  Two birds get killed with one Uniqueidentifier stone.

Using the Uniqueidentifier data type is not necessarily a bad idea.  Despite the tradeoff of reduced join performance vs. integers, the solved uniqueness problem allows sync pros to sleep better at night.  The primary drawback with using GUIDs as Primary Keys goes back to the fact that SQL Server automatically gives those columns a Clustered Index.

I thought Clustered Indexes were a good thing?

They are a good thing when the values found in the indexed column are sequential.  Unfortunately, GUIDs generated with the default NewId() function are completely random and therefore create a serious performance problem.  All those mobile devices uploading captured data means lots of Inserts for SQL Server.  Inserting random key values like GUIDs causes page splits: to make room for a record on an already-full page, SQL Server allocates a new page and moves roughly half the rows over to it, which can drive fragmentation in excess of 90%.  This performance-killing, space-wasting page splitting wouldn’t happen with sequential Integers or Datetime values since new rows simply fill up the current page before moving on to the next one.
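If you want to see how bad it’s gotten on one of your own tables, the sys.dm_db_index_physical_stats DMV will tell you.  Here’s a minimal sketch; the connection string, ContosoSync database and Orders table are placeholders for your own environment:

using System;
using System.Data.SqlClient;

class FragmentationCheck
{
    static void Main()
    {
        // Placeholder connection string and table name
        string connectionString = "Data Source=.;Initial Catalog=ContosoSync;Integrated Security=True";

        string sql = @"SELECT i.name AS IndexName, s.avg_fragmentation_in_percent
                       FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') s
                       JOIN sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // avg_fragmentation_in_percent up around 90+ is the page-split symptom described above
                    Console.WriteLine("{0}: {1:F1}% fragmented", reader["IndexName"], reader["avg_fragmentation_in_percent"]);
                }
            }
        }
    }
}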

What about NEWSEQUENTIALID()?

Generating your GUIDs on SQL Server with this function will dramatically reduce fragmentation and wasted space since it guarantees that each GUID will be sequential.  Unfortunately, this isn’t bulletproof.  If your Windows Server is restarted for any reason, your GUIDs may start from a lower range.  They’ll still be globally unique, but your fragmentation will increase and performance will decrease.  Also keep in mind that all the devices synchronizing with SQL Server will be creating their own GUIDs which blows the whole NEWSEQUENTIALID() strategy out of the water.

Takeaway

If you’re going to use the Uniqueidentifier data type for your Primary Keys and you plan to sync your data with RDA, the Sync Framework or Merge Replication, ensure that Create As Clustered == No for better performance.  You’ll still get fragmentation, but it will be closer to the ~30% range instead of almost 100%.
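If you prefer scripts over the table designer, here’s roughly what that setting translates to in T-SQL, wrapped in a bit of C#.  The table and column names are placeholders; the important part is PRIMARY KEY NONCLUSTERED on the Uniqueidentifier column:

using System.Data.SqlClient;

class CreateSyncTable
{
    static void Main()
    {
        // Placeholder connection string, table and columns
        string connectionString = "Data Source=.;Initial Catalog=ContosoSync;Integrated Security=True";

        string sql = @"CREATE TABLE dbo.Orders
                       (
                           OrderId UNIQUEIDENTIFIER NOT NULL
                               CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED,
                           CustomerName NVARCHAR(100) NOT NULL,
                           CreatedAt DATETIME NOT NULL
                               CONSTRAINT DF_Orders_CreatedAt DEFAULT GETUTCDATE()
                       );
                       -- Optional: put the clustered index on a naturally sequential column instead
                       CREATE CLUSTERED INDEX CIX_Orders_CreatedAt ON dbo.Orders (CreatedAt);";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}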

Keep synching,

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Reducing SQL Server I/O Contention during Sync :: Tip 2

Database Storage
Indexing Join Columns

In my last Sync/Contention post, I beat up on a select group of SAN administrators who aren’t willing to go the extra mile to optimize the very heart of their organization, SQL Server.  You guys know who you are.

This time, I want to look at something more basic, yet often overlooked.

All DBAs know that Joining tables on non-indexed columns is one of the most expensive operations SQL Server can perform.  Amazingly, I run into this problem over and over with many of my customers.  Sync technologies like the Sync Framework, RDA and Merge Replication allow for varying levels of server-side filtering.  This is a popular feature used to reduce the size of the tables and rows being downloaded to Silverlight Isolated Storage or SQL Server Compact.

It’s also a performance killer when tables and columns participating in a Join filter are not properly indexed.  Keeping rows locked longer than necessary creates undue blocking and deadlocking.  It also creates unhappy slate and smartphone users who have to wait longer for their sync to complete.

Do yourself a favor and go take a look at all the filters you’ve created and make sure that you have indexes on all those Joined columns.
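As a reminder of what that looks like, here’s a small sketch that adds nonclustered indexes to a couple of hypothetical Join filter columns; swap in the tables and columns your own filters actually use:

using System.Data.SqlClient;

class IndexJoinColumns
{
    static void Main()
    {
        // Placeholder connection string, tables and columns
        string connectionString = "Data Source=.;Initial Catalog=ContosoSync;Integrated Security=True";

        string sql = @"IF NOT EXISTS (SELECT 1 FROM sys.indexes WHERE name = 'IX_Orders_CustomerId')
                           CREATE NONCLUSTERED INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId);
                       IF NOT EXISTS (SELECT 1 FROM sys.indexes WHERE name = 'IX_OrderLines_OrderId')
                           CREATE NONCLUSTERED INDEX IX_OrderLines_OrderId ON dbo.OrderLines (OrderId);";

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}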

Keep synching,

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Reducing SQL Server I/O Contention during Sync :: Tip 1

SAN Storage
RAID

Sync technologies like Merge Replication and the Sync Framework track changes on SQL Server using triggers, stored procedures and special tracking tables.  The act of tracking changes made by each SQL Server Compact or Silverlight sync subscriber can cause a lot of locking and blocking on the server.  This diminishes performance and sometimes leads to deadlocks.

Therefore, don’t listen to your SAN administrator when he says that RAID 5 will do.  RAID 1 or 10 must always be used for all databases, tempdb, and transaction logs.  Furthermore, each of these database objects must be placed on its own dedicated RAID array.  No sharing!  Remember, as a DBA and sync expert, knowledge of SAN configuration must always be part of your skillset.

Keep synching,

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Windows Phone 7 Line of Business App Dev :: Improving the In-Memory Database


About a month ago, I wrote an article intended to help you fill some of the gaps left by the missing SQL Server Compact database.

Since your Windows Phone 7 Silverlight app is consuming an ObservableCollection of objects streaming down from Windows Azure and SQL Azure, it makes sense to organize those objects in a database-like format that’s easy to work with.  If you’ve ever worked with Remote Data Access (RDA) in the past, the notion of pre-fetching multiple tables to work with locally should look familiar.

In this case, each ObservableCollection represents a table, each object represents a row, and each object property represents a column.  I had you create a Singleton class to hold all these objects in memory to serve as the database.  The fact that Silverlight supports Language Integrated Query (LINQ) means that you can use SQL-like statements to work with the multiple ObservableCollections of objects.

If you’re wondering why I have you cache everything in memory in a Singleton, there are a few reasons.  For starters, it makes it easy to query everything with LINQ with the fastest performance possible for single and multi-table JOINs.  Secondly, I don’t represent a Microsoft product group and therefore wouldn’t engineer an unsupported provider that can query subsets of serialized data from files residing in Isolated Storage.  Finally, I don’t want you to accidentally find yourself with multiple instances of the same ObservableCollection when pulling data down from Azure or loading it from Isolated Storage.  Forcing everything into a Singleton prevents you from wasting memory or updating objects in the wrong instance of an ObservableCollection.  An inconsistent database is not a good thing.  Don’t worry, you can control which tables are loaded into memory.

So what is this article all about and what are the “improvements” I’m talking about?

This time around, I’m going to focus on saving, loading and deleting the serialized ObservableCollections from Isolated Storage.  In that last article, I showed you how to serialize/de-serialize the ObservableCollections to and from Isolated Storage using the XmlSerializer.  This made it easy for you to save each table to its own XML file which sounds pretty cool.

So what’s wrong with this?

Saving anything as XML means that you’re using the largest, most verbose form of serialization.  After hearing me preach about the virtues of doing SOA with WCF REST + JSON, using the XmlSerializer probably seems out of place.  Luckily, the DataContractJsonSerializer supported by Silverlight on Windows Phone 7 gives you the most efficient wire protocol for data-in-transit and can also be used to save those same .NET objects to Isolated Storage.  So the first improvement in this article comes from shrinking the size of the tables and improving the efficiency of the serialization/de-serialization operations to Isolated Storage using out-of-the-box functionality.

While going from XML to JSON for your serializing might be good enough, there’s another improvement in the way you write the code that will make this much easier to implement for your own projects.  A look back to the previous article reveals a tight coupling between the tables that needed to be saved/loaded and the code needed to make that happen.  This meant that you would have to create a SaveTable and LoadTable method for each table that you wanted to retrieve from Azure.  The new code you’re about to see is generic and allows you to use a single SaveTable and LoadTable method even if you decide to download 100 tables.

Enough talk already, let’s see some code.  Launch your ContosoCloud solution in Visual Studio and open Database.cs.  I want you to overwrite the existing code with the code shown below:

using System;
using System.Net;
using System.Windows;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.IO.IsolatedStorage;
using System.Runtime.Serialization.Json;

namespace ContosoPhone
{
    sealed class Database
    {
        //Declare Instance
        private static readonly Database instance = new Database();

        //Private Constructor
        private Database() { }

        //The entry point into this Database
        public static Database Instance
        {
            get
            {
                return instance;
            }
        }

        //Serialize ObservableCollection to JSON in Isolated Storage
        public void SaveTable<T>(T tableToSave, string tableName)
        {
            if (tableToSave != null)
            {
                using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
                {
                    using (IsolatedStorageFileStream stream = store.CreateFile(tableName + ".txt"))
                    {
                        DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(T));
                        serializer.WriteObject(stream, tableToSave);
                    }
                }
            }
            else
            {
                throw new Exception("Table is empty");
            }
        }

        //Deserialize ObservableCollection from JSON in Isolated Storage
        public T LoadTable<T>(T tableToLoad, string tableName)
        {
            using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (store.FileExists(tableName + ".txt"))
                {
                    using (IsolatedStorageFileStream stream = store.OpenFile(tableName + ".txt", System.IO.FileMode.Open))
                    {
                        DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(T));
                        return (T)serializer.ReadObject(stream);
                    }
                }
                else
                {
                    throw new Exception("Table not found");
                }
            }
        }

        //Delete ObservableCollection from Isolated Storage
        public void DropTable(string tableName)
        {
            using (IsolatedStorageFile store = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (store.FileExists(tableName + ".txt"))
                {
                    store.DeleteFile(tableName + ".txt");
                }
                else
                {
                    throw new Exception("Table not found");
                }
            }
        }

        //Declare Private Table Variables
        private ObservableCollection<Customer> customerTable = null;

        //Customer Table
        public ObservableCollection<Customer> Customers
        {
            get { return customerTable; }
            set { customerTable = value; }
        }
    }
}

 

Looking from top to bottom, the first change you’ll notice is the new SaveTable method where you pass in the desired ObservableCollection and table name in order to serialize it as JSON using the DataContractJsonSerializer.  The next method down the list is LoadTable where you pass in the same parameters as SaveTable but you get back a de-serialized ObservableCollection.  The last new method in the Database Singleton is DropTable which simply deletes the serialized table from Isolated Storage if you don’t need it anymore.

So how do you call this code?

Bring up MainPage.xaml.cs, and find the click event for the Save button.  Delete the existing XmlSerializer code and replace it with the following:

try
{
    Database.Instance.SaveTable<ObservableCollection<Customer>>(Database.Instance.Customers, "Customers");
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message);
}

The code above shows you how to call the SaveTable method in the Singleton with the appropriate syntax to pass in the ObservableCollection type as well as actual ObservableCollection value and name.

Now find the click event for the Load button, delete the existing code and paste in the following:

try
{
    Database.Instance.Customers = Database.Instance.LoadTable<ObservableCollection<Customer>>(Database.Instance.Customers, "Customers");
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message);
}

This code looks pretty much the same as the SaveTable code except that you set Database.Instance.Customers equal to the return value from the method.  For completeness’ sake, drop another button on MainPage.xaml and call it Drop.  In its click event, paste in the following code:

try
{
    Database.Instance.DropTable("Customers");
}
catch (Exception ex)
{
    MessageBox.Show(ex.Message);
}

For this code, just pass in the name of the table you want to delete from Isolated Storage and it’s gone.

It’s time to hit F5 so you can see how things behave.


When your app comes to life in the emulator, I want you to exercise the system by Getting, Adding, Updating and Deleting Customers.  In between, I want you to tap the Save button, close the app, reload the app and tap the Load button and then View Customers to ensure you’re seeing the list of Customers you expect.  Keep in mind that when you Save, you overwrite the previously saved table.  Likewise, when you Load, you overwrite the current in-memory ObservableCollection.  Additionally, Saving, Loading, and Dropping tables that don’t exist should throw an appropriate error message.

So what’s the big takeaway for these tweaks I’ve made to the in-memory database?

While switching serialization from XML to JSON is a great improvement in size and efficiency, I truly believe that making the SaveTable and LoadTable methods generic and reusable will boost developer productivity.  The new ease with which you can Save and Load 1, 10 or even 1,000 tables makes this more attractive to mobile developers that need to work with local data.

So where do we go from here?

You now have some of the basic elements of a database on Windows Phone 7.  You don’t have ACID support, indexes, stored procedures or triggers but you have a foundation to build on.  So what should be built next?

To help ensure database consistency, I would add an AutoFlush feature next.  SQL Server Compact flushes its data to disk every 10 seconds and there’s nothing to prevent you from using the SaveTable method to do the same.  A timer set to fire at a user-specified interval that iterates through all the ObservableCollections and saves them will help keep your data safe from battery loss and unforeseen system failures.  The fact that your app can be tombstoned at any moment when a user taps the Back button makes an AutoFlush feature even more important.
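Here’s a minimal sketch of what such an AutoFlush timer might look like, building on the Database Singleton above.  The AutoFlush class name and the interval are my own choices, and the Tick handler only knows about the Customers table since that’s the only one in the sample:

using System;
using System.Collections.ObjectModel;
using System.Windows.Threading;

namespace ContosoPhone
{
    public class AutoFlush
    {
        private readonly DispatcherTimer timer = new DispatcherTimer();

        public AutoFlush(TimeSpan interval)
        {
            // A 10 second interval mirrors how often SQL Server Compact flushes to disk
            timer.Interval = interval;
            timer.Tick += OnTick;
        }

        public void Start() { timer.Start(); }
        public void Stop() { timer.Stop(); }

        private void OnTick(object sender, EventArgs e)
        {
            // Persist each in-memory table; add a line per table as your schema grows
            if (Database.Instance.Customers != null)
            {
                Database.Instance.SaveTable<ObservableCollection<Customer>>(Database.Instance.Customers, "Customers");
            }
        }
    }
}

Kick it off during application startup, for example by holding an AutoFlush instance in App.xaml.cs and calling Start() with TimeSpan.FromSeconds(10), and your tables get persisted even if the user never taps Save.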

Anything else?

At the beginning of this article I mentioned RDA which is a simple form of data synchronization.  It’s simple because it only tracks changes on the client but not the server.  To find out what’s new or changed on the server, RDA requires local tables on the device to be dropped and then re-downloaded from SQL Server.  With the system I’ve built and described throughout this series of articles, we already have this brute force functionality.  So what’s missing is client-side change tracking.  To do this, I would need to add code that fires during INSERTS, UPDATES, and DELETES and then writes the appropriate information to local tracking tables.  To push those changes back to SQL Azure, appropriate code would need to call WCF REST + JSON Services that execute DML code on Windows Azure.
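To make the idea concrete, here’s a rough sketch of what local change tracking could look like in this in-memory model.  Everything here is hypothetical (the TrackedChange class, the ChangeTracker name and the use of a Guid row id); the point is simply that every local INSERT, UPDATE or DELETE also writes a row to a tracking collection that can be saved with the same generic SaveTable method and later replayed against your WCF REST services:

using System;
using System.Collections.ObjectModel;

namespace ContosoPhone
{
    // Hypothetical tracking row: one entry per local INSERT, UPDATE or DELETE
    public class TrackedChange
    {
        public Guid RowId { get; set; }
        public string TableName { get; set; }
        public string Operation { get; set; }   // "INSERT", "UPDATE" or "DELETE"
        public DateTime ChangedAt { get; set; }
    }

    public static class ChangeTracker
    {
        // Acts as the local tracking table; persist it with Database.Instance.SaveTable
        public static ObservableCollection<TrackedChange> Changes = new ObservableCollection<TrackedChange>();

        public static void Record(string tableName, string operation, Guid rowId)
        {
            Changes.Add(new TrackedChange
            {
                RowId = rowId,
                TableName = tableName,
                Operation = operation,
                ChangedAt = DateTime.UtcNow
            });
        }
    }
}

Call ChangeTracker.Record from wherever your code mutates a table, and the upload step becomes a walk through Changes instead of a brute force re-download.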

I hope with the improvements I’ve made to the in-memory database in this article, you’ll feel even more empowered to build occasionally-connected Windows Phone 7 solutions for consumers and the enterprise.

Keep coding!

-Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Microsoft SQL Server Compact 3.5 SP2 has Arrived


My favorite embedded database for Windows Phones, laptops, tablets and desktops has been released to the Web along with Visual Studio 2010.

New features for SQL Server Compact 3.5 SP2 include:

  • Supports working with a SQL Server Compact 3.5 database using the Transact-SQL Editor in Visual Studio 2010. The Transact-SQL Editor can be used to run free-text Transact-SQL queries against a SQL Server Compact 3.5 database. The Transact-SQL Editor also provides the ability to view and save detailed estimated and actual query show-plans for SQL Server Compact 3.5 databases. Previously, the functionality provided by the Transact-SQL Editor was only available through SQL Server Management Studio.
  • New classes and members named SqlCeChangeTracking have been added to the System.Data.SqlServerCe namespace to expose the internal change tracking feature used by Sync Framework to track changes in the database. The SQL Server Compact change tracking infrastructure maintains information about inserts, deletes, and updates performed on a table that has been enabled for change tracking. This information is stored both in columns added to the tracked table and in system tables maintained by the tracking infrastructure. By using System.Data.SqlServerCe.SqlCeChangeTracking one can configure, enable, and disable change tracking on a table, and also access the tracking data maintained for a table. The API can be used to provide functionality in a number of scenarios. For example it can be used to provide custom implementations of client-to-server or client-to-client sync for occasionally connected systems (OCS) or to implement a custom listener application.
  • The managed assemblies of SQL Server Compact for use by applications that privately deploy SQL Server Compact are installed in the folder %Program Files%\Microsoft SQL Server Compact Edition\v3.5\Private. Using these assemblies ensures that the application uses the privately deployed version of Compact even when a lower version of SQL Server Compact 3.5 is installed in the GAC.
  • Visual Studio 2010 installs both the 32-bit and 64-bit versions of SQL Server Compact 3.5 SP2 on a 64-bit machine. If a SQL Server Compact application is deployed using ClickOnce in Visual Studio 2010, then both the 32-bit and the 64-bit version of SQL Server Compact are installed on a 64-bit machine.
  • SQL Server Compact 3.5 SP2 adds support for Windows Mobile 6.5, Windows 7 and Windows Server 2008 R2, and can sync data using Merge Replication and RDA with SQL Server 2008 R2 November CTP.
  • The SqlCeReplication object gets a new property called PostSyncCleanup which you can use to prevent SQL Server Compact from Updating Statistics after an initial Merge Replication initialization.  This has the potential to shave a substantial amount of time off of your initial syncs depending on the size of your database.

 

In addition to these new features, the following hotfixes from SQL Server 2005 Compact Edition or SQL Server Compact 3.5 SP1 have been rolled up in SQL Server Compact 3.5 SP2:

  • http://support.microsoft.com/kb/953259: Error message when you run an SQL statement that uses the Charindex function in a database that uses the Czech locale in SQL Server 2005 Compact Edition: “The function is not recognized by SQL Server Compact Edition”
  • http://support.microsoft.com/kb/958478: Error message when you run a “LINQ to Entities” query that uses a string parameter or a binary parameter against a SQL Server Compact 3.5 database: “The ntext and image data types cannot be used in WHERE, HAVING, GROUP BY, ON, or IN clauses”
  • http://support.microsoft.com/kb/959697: Error message when you try to open a database file from a CD in SQL Server Compact 3.5 with Service Pack 1: “Internal Error using read only database file”
  • http://support.microsoft.com/kb/960142: An error message is logged, and the synchronization may take a long time to finish when you use an application to synchronize a merge replication that contains a SQL Server 2005 Compact Edition subscriber
  • http://support.microsoft.com/kb/963060: An error message is logged, and the synchronization may take a long time to finish when you synchronize a merge replication that contains a SQL Server Compact 3.5 subscriber: “UpdateStatistics Start app=<UserAppName>.exe”
  • http://support.microsoft.com/kb/967963: Some rows are deleted when you repair a database by using the Repair method together with the RepairOption.RecoverCorruptedRows option in SQL Server 2005 Compact Edition and in SQL Server Compact 3.5
  • http://support.microsoft.com/kb/968171: Error message when you try to create an encrypted database in SQL Server 2005 Compact Edition: “The operating system does not support encryption”
  • http://support.microsoft.com/kb/968864: Error message when you run a query in SQL Server Compact 3.5: “The column name cannot be resolved to a table. Specify the table to which the column belongs”
  • http://support.microsoft.com/kb/969858: Non-convergence occurs when you synchronize a SQL Server Compact 3.5 client database with the server by using Sync Services for ADO.NET in a Hub-And-Spoke configuration
  • http://support.microsoft.com/kb/970269: Access violations occur when you run an application under heavy load conditions after you install the 64-bit version SQL Server Compact 3.5 Service Pack 1
  • http://support.microsoft.com/kb/970414: Initial synchronization of a replication to SQL Server Compact 3.5 subscribers takes significant time to finish
  • http://support.microsoft.com/kb/970915: Error message when you synchronize a merge replication with SQL Server 2005 Compact Edition subscribers: “A column ID occurred more than once in the specification. HRESULT 0x80040E3E (0)”
  • http://support.microsoft.com/kb/971027: Error message when you upgrade a very large database to SQL Server Compact 3.5: “The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only”
  • http://support.microsoft.com/kb/971273: You do not receive error messages when you run a query in a managed application that returns columns of invalid values in SQL Server Compact 3.5
  • http://support.microsoft.com/kb/971970: You cannot insert rows or upload changes into the SQL Server 2005 Compact Edition subscriber tables after you run the “sp_changemergearticle” stored procedure or you add a new merge publication article when another article has an IDENTITY column
  • http://support.microsoft.com/kb/972002: Error message when you try to create an encrypted database in SQL Server Compact 3.5: “The operating system does not support encryption”
  • http://support.microsoft.com/kb/972390: The application enters into an infinite loop when you run an application that uses Microsoft Synchronization Services for ADO.NET to synchronize a SQL Server Compact 3.5 database
  • http://support.microsoft.com/kb/972776: When the application calls the SqlCeConnection.Close method or the SqlCeConnection.Dispose method in SQL Server Compact 3.5, the application may stop responding at the method call
  • http://support.microsoft.com/kb/974068: Error message when an application inserts a value into a foreign key column in SQL Server Compact 3.5: “No key matching the described characteristics could be found within the current range”

 

Web downloads for SQL Server Compact 3.5 SP2 are listed below:

SQL Server Compact 3.5 SP2 for Windows desktop (32-bit and 64-bit)

Note that the file available for download is a 6 MB self-extracting executable (exe) file that contains the 32-bit and the 64-bit Windows Installer (MSI) files for installing SQL Server Compact 3.5 SP2 on a 32-bit and a 64-bit Computer. It is important to install both the 32-bit and the 64-bit version of the SQL Server Compact 3.5 SP2 MSI on a 64-bit Computer. Existing SQL Server Compact 3.5 applications may fail if only the 32-bit version of the MSI file is installed on the 64-bit computer. Developers should chain both the 32-bit and the 64-bit MSI files with their applications and install both of them on the 64-bit Computer. Refer to the KB article for more information.

SQL Server Compact 3.5 SP2 for Windows mobile devices (all platforms & processors)

SQL Server Compact 3.5 SP2 Server Tools (32-bit and 64-bit)

SQL Server Compact 3.5 SP2 Books Online (Note that the books online will be available for download by the third week of April 2010)

SQL Server Compact 3.5 SP2 Samples

Visual Studio 2010 and .NET Framework 4

This is a great release for SQL Server Compact that adds some important new features, squashes a bunch of bugs and adds support for our newest operating systems.  I strongly recommend you update your existing SSCE runtimes with SQL Server Compact 3.5 SP2.

Keep on Synching,

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Enterprise Data Synchronization with Microsoft SQL Server 2008 and SQL Server Compact 3.5 Mobile Merge Replication


I’m happy to say that my latest book is now available on Amazon.

With the world’s largest organizations rolling out tens of thousands of Windows® phones, laptops, tablets and Netbooks to empower their respective mobile workforces, the ability to create mobile line of business solutions that support large numbers of users is absolutely critical. In my fourth book on mobile infrastructure and development, I show you how to take the SQL Server data you use to run your organization and make it available to all of your mobile employees.

Step-by-step, I’ll walk you through the process of building a secure, performant, n-tier, mobile enterprise application platform architecture designed to scale to thousands of users. You’ll also learn how to create occasionally-connected .NET applications designed to thrive in unreliable wireless conditions.


  • Learn how to “Mobilize” your organization by making your enterprise data available to employees carrying Windows® phones, laptops, Netbooks and tablets in the field.
  • Learn how to build an N-Tier Mobile Sync infrastructure that will scale to thousands of users.
  • Learn how to create occasionally-connected .NET applications designed to thrive in unreliable wireless conditions.
  • Learn best practices in security, reliability, performance, load-balancing, reverse proxy and hardware configuration.
  • Learn how to implement this technology in real world scenarios like supply chain management, retail, sales force automation, healthcare and emergency management.

Keep in mind that the knowledge you gain from this book didn’t come from me dreaming this stuff up in an Ivory Tower.  It came from building some of the world’s largest and most complex data synchronization systems for the world’s largest companies.  In addition to the hands-on experience that went into this book, I’d also like to thank some of my colleagues for their invaluable contributions:

  • Liam Cavanagh is a Senior Lead Program Manager for Microsoft’s Sync Framework and Cloud Data Services and he wrote the foreword.
  • Catherine Wyatt is the Managing Editor for Hood Canal Press who made the publishing of the book possible.
  • Darren Shaffer is the CEO of Handheld Logic and he wrote the Chapter on building the Mobile Subscriber.
  • Michael Jimenez is a Mobility Architect at Microsoft and he wrote the Appendix that shows you how to create an ISA Server 2006 Reverse Proxy to publish your sync infrastructure to the Internet.

It’s my sincere hope that this book will encourage you to un-tether your workforce from their desktop computers and boost your organization’s agility by pushing out critical business functions to the point of activity where employees are empowered to make timely decisions and perform tasks that best serve the interests of their customers and their company.

This repudiation of the traditional “connected” software application model increases customer satisfaction, boosts worker efficiency, reduces “missed opportunities” and results in cost savings as “un-wired” employees get their jobs done wherever they happen to be.

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


The Hidden Message Queue on your Windows phone

The ability to be “offline” and “occasionally-connected” is a critical component of successful mobile apps.

Wireless data networks lack complete coverage and exhibit a level of unreliability that immediately disqualifies the kind of permanently-connected apps you might see on a corporate LAN.  For a mobile app to be successful, it must allow the user to keep working in the absence of a data network.  It must also be able to transparently sync data changes from the mobile client to the server whenever a wireless data network is detected.  The primary means of accomplishing that today is via one of Microsoft’s sync technologies that allows SQL Server Compact on the mobile client to replicate data to and from SQL Server in the data center or the cloud.  Since SQL Server Compact runs almost anywhere, your mobile client could be a Windows phone, a laptop, a desktop or even a Netbook.

Besides synchronizing the tables, rows and columns of a complete database between mobile clients and servers, message queuing should be considered for many scenarios because of its high reliability in ensuring that a critical message arrives at its destination.  Products like MSMQ, MQ Series, Tibco and JMS are used all over the world in the most mission-critical environments to ensure a high level of availability and reliability.  They’re asynchronous by nature and use store and forward mechanisms so that messages get from point A to B to C.  A typical queue message includes the Destination which tells the message where to go, a Label which describes the message, a Body which contains the message, and a Body Length so the receiver can verify that it received everything.

So how does any of this relate to Windows phones?  A number of years ago, an MSMQ client was made available for download and installation on Windows phones.  Additionally, the .NET Compact Framework 2.0 included classes to work with MSMQ.  Unfortunately, the installation of MSMQ was far from seamless, which inhibited its adoption by customers.  More recently, functionality was included in the .NET Compact Framework 3.5 that facilitated store and forward messaging using Exchange 2007 as a transport.  This is a good solution for customers who are running the newest version of Exchange, have an unlimited data plan for their phones, and don’t mind running line of business applications over their email infrastructure.

So what do we do for customers that have found the Windows Mobile MSMQ client too much of a hassle, don’t have Exchange 2007, or don’t want to use it as a mobile message queue server?  I think the answer has been under our noses all along.  Burned into the ROM of every Windows Mobile 6.x device is SQL Server Compact plus a lightweight data sync solution called Remote Data Access (RDA).  For those of you running Windows Mobile 5, XP, Vista or 7, you can easily download these bits to your mobile client.  SQL Server Compact is your local queue, RDA is your transport, and SQL Server in the cloud or data center is your message queue server.  So let’s break this down and see how it will work.

A mobile application that captures data in the field would want to drop that info in a local queue.  SQL Server Compact becomes that local queue and the message format is actually a table with the following structure:

Table name: Message

  • MessageId – Uniqueidentifier
  • Destination – NVarchar(whatever)
  • Label – NVarchar(whatever)
  • Body – NVarchar(4000)
  • BodyLength – int

This table would be created inside a MessageQueue database in SQL Server and RDA would pull it down to SQL Server Compact.  In the Pull method you call from .NET on the client, you would add “WHERE 1=0” to the SQL statement.  This filter has the effect of pulling down an empty shell of the table without retrieving any data to the client since that’s all you want.  It also means that when you insert local data into the table and call the Push method, the data will be removed from the client at the completion of a successful sync.

So you’re probably wondering, what makes this so special and message queue-like vs. anything else?  The secret is that unlike other sync technologies, RDA can wrap the upload of data into a transaction.  As the data is being uploaded over wireless, if any of the INSERTs into SQL Server fail for whatever reason, everything gets rolled-back and the original data remains in the local SQL Server Compact queue.  This is the kind of guaranteed commit that you expect from a message queuing system.  It’s an “all or nothing” success or rollback.

It actually gets better.  A property of both RDA and Merge Replication is called ConnectionRetryTimeout.  This feature is designed to help you with unreliable wireless coverage where you have a signal one minute and then lose it the next.  Let’s say you have this timeout value set to 2 minutes and you begin your Push upload of queued data.  Everything is working fine for the first few seconds but then you lose wireless coverage from your mobile operator.  If you regain coverage before the 2 minute timeout, the upload will resume where it left off.  Since both RDA and Merge send and receive data between SQL Server Compact and IIS in tiny blocks, you never have to worry about running out of memory and you can pick up where you left off in case of a network dropout.
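Putting those pieces together, here’s a minimal sketch of the client side of that Pull/Push cycle using the SqlCeRemoteDataAccess class.  The server agent URL, database paths, connection strings and the 2 minute retry value are all placeholders for your own environment:

using System.Data.SqlServerCe;

class MessageQueueSync
{
    public static void Sync()
    {
        SqlCeRemoteDataAccess rda = new SqlCeRemoteDataAccess();
        try
        {
            rda.InternetUrl = "https://sync.contoso.com/sqlcesa35.dll";
            rda.LocalConnectionString = @"Data Source=\Program Files\ContosoApp\LocalQueue.sdf";
            rda.ConnectionRetryTimeout = 120; // keep retrying for 2 minutes after a dropout

            string serverConnection = "Provider=SQLOLEDB;Data Source=SQLSERVER;Initial Catalog=MessageQueue;Integrated Security=SSPI";

            // First run only: Pull creates an empty local Message table (WHERE 1=0 returns no rows) with tracking enabled
            rda.Pull("Message", "SELECT * FROM Message WHERE 1=0", serverConnection, RdaTrackOption.TrackingOn);

            // ...the app INSERTs captured messages into the local Message table here...

            // Push the queued rows back inside a single transaction; all rows commit or none do
            rda.Push("Message", serverConnection, RdaBatchOption.BatchingOn);
        }
        finally
        {
            rda.Dispose();
        }
    }
}

If the Push throws, the rows are still sitting safely in the local queue and you simply try again the next time you detect a network.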

So the big takeaway here is that we do in fact have a Mobile Message Queue solution hidden on our Windows phones.  We have a message format that lets us drop text/xml/whatever data into the body, a label that a server process or SQL trigger can key off of to perform an action, and a transactional upload mechanism that ensures your critical data will cross the wireless chasm and make it to the other side intact.

So what’s next?  Now that you can capture data in a local queue and safely upload it, you might be wondering how queued messages from someone else can be pushed to your device.  Don’t worry, that will be in my next post.  Also, this isn’t just an article on how to solve a big problem in the mobile space, I’m actually building the necessary client and server pieces as well.  We’re all looking for a reliable and unified way to connect mobile devices to corporate assets and this just might be the simple answer we’re looking for.

– Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


What ever happened to RDA?

Who remembers using RDA (Remote Data Access) to synchronize data between SQL Server and SQL Server Compact?  I certainly do!

Before I dove head first into the world of Merge Replication, I always used RDA to get my customers up and running quickly.  Mobilizing an organization’s workforce quickly and easily is what it’s all about so they can start reaping the benefits.  In addition to a speedy time to market, there’s no faster or more scalable mobile sync technology on the market anywhere.

So why wouldn’t I always use RDA?  Here’s a quick list:

  1. You’re using Identity columns.
  2. You want to replicate schema changes to the client.
  3. You want change tracking on both the client and server to perform diffs of each of the tables during a sync instead of re-downloading the entire table.
  4. You want to automatically resolve conflicts that arise when 2 people update the same data.
  5. You want referential integrity constraints to be pushed down to the client database from SQL Server.
  6. You don’t want to write code to perform synchronization or filter data.

If anything on the above list applied to you, you would shift to Merge Replication because it could manage ranges of Identity columns, push down schema changes, only sync data differences, resolve conflicts and push down a database’s referential integrity constraints.  Merge requires almost no code to get started and tables and columns are filtered visually via a wizard.

So why might you choose to use RDA?  Here’s another list:

  1. Your Primary Keys use GUIDs instead of Identity columns.
  2. Users don’t overwrite each other’s data so you don’t need conflict resolution.  The rule of “Last in Wins” works for you.
  3. While you want indexes to be pushed down, you don’t care if your local SSCE database has referential integrity constraints applied.
  4. You want to wrap the changes you upload to SQL Server in a transaction so that all changes are applied or none of them are.
  5. Change tracking on the client is good enough and re-downloading updated server tables doesn’t take too long.
  6. Your developers don’t mind writing some sync code.
  7. You want to be able to execute SQL and Stored Procedures directly against SQL Server via IIS.
  8. You’re downloading read-only data.

If your solution meets the criteria in the list above, you’re probably a good candidate for using RDA instead of Merge.  Are there any other choices out there?

Back at MEDC 2007, we announced a new data replication technology for devices called Occasionally Connected Sync that would sit somewhere between RDA and Merge.  OCS, as it used to be called, was renamed Sync Services for ADO.NET and was eventually merged into the Sync Framework.

The Sync Framework is a developer-focused technology:

  1. Supports conflict resolution.
  2. Change tracking on the server as well as the client so that only data differences are exchanged.
  3. Peer to Peer sync in the forthcoming v2 of Sync Framework.
  4. Sync with databases other than SQL Server.
  5. Best suited for SSCE running on a desktop or laptop.

The clearest differentiation that the Sync Framework has over Merge is its provider model, which allows it to sync with other ADO.NET databases like Oracle or DB2.  SQL Server supports built-in P2P Transactional replication, and v2 of the Sync Framework will allow you to do this via WCF.  If your development team doesn’t mind writing lots of sync code and needs to support scenarios like synchronizing with other databases from SSCE on the desktop, then the Sync Framework might be the way to go for you.  I wouldn’t yet recommend the Sync Framework for device sync since its wire protocol is currently based on the DataSet, which may cause out of memory errors on Windows phones with limited working sets.

So where does this leave RDA?

The reason I’m writing this blog post is because time and time again I run into customer sync scenarios that don’t always need the power of Merge or the extra flexibility of the Sync Framework.  Most field service applications follow the same kind of pattern:

  1. Lots of download-only lookup/reference tables that aren’t changed by the user.
  2. Tables that are pushed down to the device that tell a user where to go and what to do.
  3. Tables (sometimes empty) that are used to capture data from the user in the field that are upload-only.

These kinds of schemas don’t require conflict resolvers or server change tracking and are therefore well suited for RDA.

What’s the big benefit of using RDA if a sync scenario meets its criteria?

  1. You won’t modify SQL Server’s schema with GUIDs and Triggers.
  2. You won’t degrade the performance of SQL Server by having it track changes and maintain extra metadata.
  3. You will have the fastest and most scalable sync solution with the least amount of hardware.
  4. Time to market is shorter.

The big takeaway here is that I want you to consider your sync solution carefully before choosing a technology.  If your customer’s needs are met by RDA, then you should use it and reap the benefits of developing and deploying a simpler solution with fewer moving parts.

Remember Occam’s Razor.

-Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”
