SQL Server Compact 3.5 Service Pack 2 Cumulative Update Package 7 RTW

SQL Server Compact

Hey folks, just wanted to let you know that cumulative update package 7 for SQL Server Compact 3.5 Service Pack 2 has been released to the web.

You can download the new bits over at http://support.microsoft.com/kb/2665342.

This is a hotfix for an incorrect sort order for a subscriber in SQL Server Compact 3.5 SP2 that synchronizes with a publisher in SQL Server.  For instance, you may have a column with an ASC index on SQL Server, but during sync, the sort order may not be specified.  The problem occurs due to an incorrect index creation statement in the .OUT file in the virtual directory on IIS.  Therefore, only the Server Tools need to be updated.  Both x86 and x64 versions of the update are available to download.
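
To put the symptom in context, here's a minimal sketch (hypothetical table and index names) of the kind of index creation statement the Server Tools should emit for a subscriber, with the sort order carried over from the publisher rather than left unspecified:

  -- The publisher defines this index with an explicit ascending sort order;
  -- the updated Server Tools preserve that ASC/DESC designation for the subscriber.
  CREATE INDEX IX_Orders_OrderDate ON Orders (OrderDate ASC);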

Keep in mind that cumulative updates 6 and above will allow your Windows tablets, laptops and Windows Embedded Handheld devices to sync with SQL Server 2012.

Go get things sorted out,

-Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Performance and Memory Management Improvements with Windows Embedded Handheld

A lot has changed since the launch of Windows Phone in the Fall of 2010.

Microsoft now has a compelling phone platform that targets consumers inside and outside the office.  One thing that hasn’t changed is the widespread use of Windows Embedded Handheld to solve tough enterprise mobility problems.  It should be no surprise that over 80% of enterprise handhelds shipped are running Windows Mobile or Windows Embedded Handheld.  They include support for barcode scanning, RFID reading, rugged hardware, every type of wireless, full device encryption, complete over-the-air software distribution and device management support, FIPS compliance, and both capacitive touch and stylus operation.  On the application platform side of the equation, they have rich support for WinForms development using Visual Studio and the .NET Compact Framework, C++, and a full-featured database with built-in sync capabilities via SQL Server Compact.  They can easily communicate with WCF SOAP and REST web services running on Windows Servers on-premise or with Azure in the cloud.  Support for Merge Replication means faster time to market, getting devices synchronizing with SQL Server with almost no coding.

Since Windows Embedded Handheld uses an advanced version of the operating system kernel found in Windows Mobile 6.5.3, many of the techniques and best practices I’ve taught customers and developers all over the world still apply.  While it still uses the slotted memory model found in Windows CE 5 with 32 processes and 32 MB of memory per process, numerous enhancements and tuning have taken place to give your line-of-business apps more of what they need.  I’m talking about more memory per process and improved performance.

A recent Gartner report recommends that organizations should stay with Windows Embedded Handheld as the best mobile platform for enterprise line of business needs.  Great devices are available from OEMs like Intermec, Motorola, Psion, and Honeywell just to name a few.  I hope this video helps you with any memory management or performance issues you may need to deal with in your enterprise mobile apps.

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


What the new App Hub in Windows Phone Mango means for the Enterprise


If you attended MIX 11 or watched it on Channel 9, you might have seen Todd Brix’s session titled “Making Money with your Applications on Windows Phone.”

In this session, Todd talked about all the great things Windows Phone users and developers can expect with the new Marketplace and App Hub in the Mango timeframe.  I just want to focus on two items that will be of great significance to companies and organizations looking to build and privately distribute Windows Phone apps to their employees, partners, and customers.


The Beta Distribution Service allows developers to distribute pre-certified apps to an access-controlled set of beta users.  How does it work?

  • The developer selects a list of up to 100 testers.  This number is subject to change based on feedback we get.
  • Developer sends an email to the designated testers that includes a private deeplink that points to the app in the Marketplace.  This allows only the testers to access and download the content since the app is not discoverable in the Marketplace via Search.
  • Only testers selected in the App Hub based on their Windows Live ID can test the app and provide feedback for 90 days.  Yes, the app will “time bomb” after 90 days.
  • The beta cannot be updated.  If you have multiple updates based on testing feedback, you must resubmit them like the first beta and send updated deeplinks to testers.
  • Testers won’t have to unlock their phone in order to beta test the apps.
  • Since there’s no certification requirement, there’s no latency between when you publish a beta app and when your private list of testers can access and download your content.
  • The beta app must be free.

Takeaway: No matter who you’re building apps and games for, the Beta Distribution Service will allow you to create higher quality content since you can now include beta testing in your development cycle.

The Private Distribution Service allows developers to privately distribute certified apps to a targeted group of users.  How does it work?

  • The app must be certified by Microsoft before distributing.
  • Developer sends an email to the targeted group of users that includes a private deeplink that points to the app in the Marketplace.  Keep in mind that the app is not discoverable in the Marketplace via Search by the general public.
  • A private app can be updated and pushed to the targeted group of users.
  • There are no limits on the number of users or the duration of time that those users can use the app.  This works just like the public Marketplace rules we have today.
  • There is no access enforcement based on the list of targeted users.  In other words, if an employee at a company shares the deeplink with a fellow coworker, that new person can download the content.  By including appropriate authentication and authorization mechanisms in published apps, you can prevent unwanted users from being able to do anything with the app.
  • Private apps can be free or paid.
  • These private apps can be published to the public Marketplace at any time.

Takeaway: This enables the private distribution of released apps to a small or large community of users.  You could use this as an extension of your application beta testing cycle if you want to send out a release candidate to a broader group of testers than the 100 allowed via the Beta Distribution Service.  It’s also a great way to privately send your app to magazines, blogs, and other media channels to be publicly reviewed.

What does this mean for the enterprise?  Those of you who have worked with or administered enterprise software distribution systems will quickly recognize that the Private Distribution Service doesn’t allow an administrator to push out and restrict software usage to specific organizational groups or roles.  It also doesn’t allow an administrator to uninstall specific apps from the phones of specific users or groups.  Lastly, it doesn’t map to an enterprise LDAP service like Active Directory.  You’re probably thinking System Center, and this is definitely not that.

That being said, the Private Distribution Service overcomes the single-biggest blocker that company executives have expressed to me as a reason why they might not create and publish apps for Windows Phone.  They don’t want their private corporate apps publicly viewable and/or accessible by the broad general public searching for apps in the public Marketplace.  When they build B2C apps to reach their own customers, this is no problem, but when they build line-of-business apps meant just for their employees or partners, they don’t want these apps to be discoverable.

This means IT departments will be able to build undiscoverable Windows Phone apps for private internal use by the users they designate.  Some of the administrative issues around software distribution can be alleviated by having a corporate IT authority publish Beta and Private apps via a single Windows Live ID.  That publishing administrator can then map users, groups or roles to existing or new Windows Live IDs of employees that need to use the app.  That administrator will be able to maintain the application lifecycle through beta testing, publishing, updating and decommissioning.  As I alluded to earlier in the post, once a designated employee has access to the app, her ability to run and access data and various parts of the app can be controlled by on-premise or cloud-based authentication and authorization mechanisms.  This includes things like passing Domain credentials or using claims-based auth.  Your data-in-transit is protected by SSL and your data-at-rest in Isolated Storage is protected by AES encryption.

We’ll be seeing a new Windows Phone, App Hub, and Marketplace before the end of 2011.  Its line-of-business credentials include encryption, private software distribution, server auth mechanisms, the ability to call SOAP and REST web services, socket support, multitasking, background agents, and a local SQL database just to name a few.

You’ll soon be looking at the most enterprise-ready smartphone on the market.

-Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


SQL Server Compact 4.0 Lands on the Web

SQL Server

A decade has passed since I first started using SQL CE on my Compaq iPAQ.  What started as a great upgrade to Pocket Access turned into the ultimate embedded database for Windows CE, the Pocket PC, Windows Mobile and Windows Phones.  The one-two punch of Outlook Mobile synchronizing email with Exchange and SQL Server Compact synchronizing data with SQL Server helped set the mobile enterprise on fire.  In 2005, version 3.0 supported Windows Tablets and progressive enhancements to the code base led to full Windows support on both x86 and x64 platforms.  With the new version 4.0, the little-database-that-could has grown up into a powerful server database ready to take on the web.

We’ve come a long way and you’re probably wondering what qualifies this new embedded database to take on the Internet:

  • Native support for x64 Windows Servers
  • Virtual memory usage has been optimized to ensure the database can support up to 256 open connections – (Are you actually using 256 pooled connections with your “Big” database today?)
  • Supports databases up to 4 GB in size – (Feel free to implement your own data sharding scheme)
  • Developed, stress-tested, and tuned to support ASP.NET web applications
  • Avoids the interprocess communications performance hit by running in-process with your web application
  • Row-level locking to boost concurrency
  • Step up to Government + Military grade security with the SHA2 algorithm to secure data with FIPS compliance
  • Enhanced data reliability via true atomicity, consistency, isolation, and durability (ACID) support
  • Transaction support to commit and roll back grouped changes
  • Full referential integrity with cascading deletes and updates
  • Supports ADO.NET Entity Framework 4 – (Do I hear WCF Data Services?)
  • Paging queries are supported via T-SQL syntax to only return the data you actually need (see the sketch below)

Wow, that’s quite a list!  SQL Server Compact 4.0 databases are easily developed using the new WebMatrix IDE or through Visual Studio 2010 SP1.  I’m loving the new ASP.NET Web Pages.  It reminds me of the good old days of building web applications with Classic ASP back in the 90’s with Visual InterDev and Homesite.
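
On that last point, here's a minimal sketch of the OFFSET/FETCH paging syntax that SQL Server Compact 4.0 understands (the Products table and its columns are hypothetical):

  -- Return the third page of products, 20 rows per page, using the
  -- OFFSET/FETCH paging syntax supported by SQL Server Compact 4.0.
  SELECT ProductID, ProductName, UnitPrice
  FROM Products
  ORDER BY ProductName
  OFFSET 40 ROWS
  FETCH NEXT 20 ROWS ONLY;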

What about Mobility?

Since SQL Server Compact owes its heritage to mobile and embedded versions of Windows, you might want to know what our story is there.  The good news is that you can build and deploy v4.0 databases on Windows XP, Windows Vista, and Windows 7.  If you want to implement an occasionally-connected solution that utilizes the Sync Framework, Remote Data Access (RDA), or Merge Replication, you’ll need to stick with SQL Server Compact 3.5 SP2.  Time and resource constraints prevented the Compact team from enabling these features.  Luckily, single-user WPF/WinForms database applications running on Windows Slates, laptops and Windows Embedded Handheld devices will work just fine with the v3.5 SP2 runtime.  Get a jump start by picking up “Enterprise Data Synchronization with Microsoft SQL Server 2008 and SQL Server Compact 3.5 Mobile Merge Replication” at http://www.amazon.com/Enterprise-Synchronization-Microsoft-Compact-Replication/dp/0979891213/ref=sr_1_1?s=books&ie=UTF8&qid=1281715114&sr=1-1 to start building those MEAP solutions.

With the tidal wave of Windows Slates hitting the market, a secure, powerful mobile database that allows users to work offline and syncs with SQL Server is definitely going to be a hot item!

So run, don’t walk to the Microsoft Download site to download the Next-Gen database for the web:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=033cfb76-5382-44fb-bc7e-b3c8174832e2

If you need to support occasionally-connected mobile applications with sync capabilities on multiple Windows platforms, download SQL Server Compact 3.5 SP2:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e497988a-c93a-404c-b161-3a0b323dce24

Keep Syncing,

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Reducing SQL Server Sync I/O Contention :: Tip 3

Primary Key
GUIDs and Clustered Indexes

Uniqueness is a key factor when synchronizing data between SQL Server/Azure and multiple endpoints like Slates and Smartphones.  With data simultaneously created and updated on servers and clients, ensuring rows are unique to avoid key collisions is critical.  As you know, each row is uniquely identified by its Primary Key.


When creating Primary Keys, it’s common to use a compound key based on things like account numbers, insert time and other appropriate business items.  Based on what I see from my customers, it’s even more popular to create Identity Columns for the Primary Key using an Int or BigInt data type.  When you designate a column (or columns) as the Primary Key, SQL Server automatically makes it a Clustered Index.  Clustered indexes are faster than normal indexes for sequential values because the B-Tree leaf nodes are the actual data pages on disk, rather than just pointers to data pages.

While Identity Columns work well in most database situations, they often break down in a data synchronization scenario since multiple clients could find themselves creating new rows using the same key value.  When these clients sync their data with SQL Server, key collisions would occur.  Merge Replication includes a feature that hands out blocks of Identity Ranges to each client to prevent this.

When using other Microsoft sync technologies like the Sync Framework or RDA, no such Identity Range mechanism exists and therefore I often see GUIDs utilized as Primary Keys to ensure uniqueness across all endpoints.  In fact, I see this more and more with Merge Replication too since SQL Server adds a GUID column to the end of each row for tracking purposes anyway.  Two birds get killed with one Uniqueidentifier stone.

Using the Uniqueidentifier data type is not necessarily a bad idea.  Despite the tradeoff of reduced join performance vs. integers, the solved uniqueness problem allows sync pros to sleep better at night.  The primary drawback with using GUIDs as Primary Keys goes back to the fact that SQL Server automatically gives those columns a Clustered Index.

I thought Clustered Indexes were a good thing?

They are a good thing when the values found in the indexed column are sequential.  Unfortunately, GUIDs generated with the default NewId() function are completely random and therefore create a serious performance problem.  All those mobile devices uploading captured data means lots of Inserts for SQL Server.  Inserting random key values like GUIDs can cause fragmentation in excess of 90% because existing pages must be split, with half their rows pushed to newly allocated pages, to make room for each randomly placed record.  This performance-killing, space-wasting page splitting wouldn’t happen with sequential Integers or Datetime values since they simply fill the existing page before moving on to a new one.

What about NEWSEQUENTIALID()?

Generating your GUIDs on SQL Server with this function will dramatically reduce fragmentation and wasted space since it guarantees that each GUID will be sequential.  Unfortunately, this isn’t bulletproof.  If your Windows Server is restarted for any reason, your GUIDs may start from a lower range.  They’ll still be globally unique, but your fragmentation will increase and performance will decrease.  Also keep in mind that all the devices synchronizing with SQL Server will be creating their own GUIDs which blows the whole NEWSEQUENTIALID() strategy out of the water.

Takeaway

If you’re going to use the Uniqueidentifier data type for your Primary Keys and you plan to sync your data with RDA, the Sync Framework or Merge Replication, ensure that Create as Clustered == No for better performance.  You’ll still get fragmentation, but it will be closer to the ~30% range instead of almost 100%.
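
Here's a minimal sketch of what that looks like in T-SQL (the table is hypothetical); declaring the GUID Primary Key as NONCLUSTERED is the equivalent of setting Create as Clustered to No in the designer:

  -- Hypothetical table captured on devices and synced to SQL Server.
  -- The GUID Primary Key is backed by a nonclustered index so that random
  -- NEWID() values generated on clients don't fragment the clustered storage.
  CREATE TABLE dbo.WorkOrder
  (
      WorkOrderID  uniqueidentifier NOT NULL
          CONSTRAINT PK_WorkOrder PRIMARY KEY NONCLUSTERED,
      CreatedOn    datetime NOT NULL,
      Status       nvarchar(20) NOT NULL
  );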

Keep synching

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Reducing SQL Server I/O Contention during Sync :: Tip 2

Database Storage
Indexing Join Columns

In my last Sync/Contention post, I beat up on a select group of SAN administrators who aren’t willing to go the extra mile to optimize the very heart of their organization, SQL Server.  You guys know who you are.

This time, I want to look at something more basic, yet often overlooked.

All DBAs know that joining tables on non-indexed columns is one of the most expensive operations SQL Server can perform.  Amazingly, I run into this problem over and over with many of my customers.  Sync technologies like the Sync Framework, RDA and Merge Replication allow for varying levels of server-side filtering.  This is a popular feature used to reduce the size of the tables and rows being downloaded to Silverlight Isolated Storage or SQL Server Compact.

It’s also a performance killer when tables and columns participating in a Join filter are not properly indexed.  Keeping rows locked longer than necessary creates undue blocking and deadlocking.  It also creates unhappy slate and smartphone users who have to wait longer for their sync to complete.

Do yourself a favor: go take a look at all the filters you’ve created and make sure you have indexes on all those joined columns.
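
As a minimal sketch (hypothetical tables and columns), if a join filter ties Orders to Customers on CustomerID, the foreign key column on the child side needs its own index; the parent side is usually covered by its Primary Key already:

  -- Index the column that participates in the join filter so filtered sync
  -- doesn't have to scan the Orders table for every Subscriber partition.
  CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
      ON dbo.Orders (CustomerID);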

Keep synching,

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Build the Mobile Web with WebMatrix

HTML5

Build mobile web sites that adhere to W3C Mobile Web Best Practices using the new WebMatrix web development tool.

This tool introduces simple-to-use ASP.NET Web Pages, which don’t follow the MVC pattern, nor do they include server controls like WebForms.  It also introduces the “Razor” templating engine and a model where you have HTML and inline code where needed.  This approach to building sites is easy and flexible and takes me back to the golden age of Microsoft ASP web development in the late ’90s.

Our favorite mobile database, SQL Server Compact 4.0, finds its way to the web with this tool, providing a simple way to give your mobile web site a database.  It’s been beefed up and tuned for the stress of providing data services to the Internet and supports 256 concurrent connections.  Since it’s a file-based database, you just copy it along with your web pages to your on-premise server, web hosting provider or Azure.

Last but not least, you get IIS Express which is a welcome replacement for the Cassini development web server currently used by Visual Studio.  This gives all developers the power of IIS 7.x without needing Administrator access to their box, even if they’re running on Windows XP.

The lightweight, inline-code nature of developing with WebMatrix makes it easy to build low-bandwidth sites that follow XHTML Basic 1.1 recommendations so you can target any mobile web browser.  From there, it’s up to you to determine if you want to support more advanced features found in mobile browsers like IE Mobile, Opera, or Webkit (iPhone, Android, webOS or Blackberry).

– Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Mobile Merge Replication Performance and Scalability Cheat Sheet

SQL Server Compact

If your Mobile Enterprise Application Platform (MEAP) uses SQL Server Merge Replication to provide the mobile middleware and reliable wireless wire protocol for SQL Server Compact (SSCE) running on Windows Mobile and Windows Embedded Handheld devices, plus Windows XP/Vista/7 tablets, laptops, desktops, slates, and netbooks, below is a guide to help you build the fastest, most scalable systems:

Active Directory

  • Since your clients will be passing in their Domain\username + password credentials when they sync, both IIS and SQL Server will make auth requests of the Domain Controller. Ensure that you have at least a primary and backup Domain Controller, that the NTDS.dit disk drives are big enough to handle the creation of a large number of new AD DS objects (mobile users and groups), and that your servers have enough RAM to cache all those objects in memory.

Database Schema

  • Ensure your schema is sufficiently de-normalized so that you never have to perform more than a 4-way JOIN across tables. This affects server-side JOIN filters as well as SSCE performance.
  • To ensure uniqueness across all nodes participating in the sync infrastructure, use GUIDs for your primary keys so that SQL Server doesn’t have to deal with the overhead of managing Identity ranges. Make sure to mark your GUIDs as ROWGUIDCOL for that table so that Merge won’t try to add an additional Uniqueidentifier column to the table.  Don’t create clustered indexes when using GUIDs as primary keys because they will suffer horrible fragmentation that will rapidly degrade performance.  Just use a normal index (see the sketch after this list).
  • Create clustered indexes for your primary keys when using Identity columns, Datetime, or other natural keys.  Ensure that every column in every table that participates in a WHERE clause is indexed.
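
Here's a minimal sketch of the GUID guidance above (the table is hypothetical): the key is marked ROWGUIDCOL so Merge reuses it, and its Primary Key index is nonclustered:

  -- Hypothetical synced table: GUID Primary Key, marked ROWGUIDCOL so Merge
  -- Replication won't add its own uniqueidentifier column, with a nonclustered
  -- Primary Key index and a regular index on a column used in WHERE clauses.
  CREATE TABLE dbo.Inspection
  (
      InspectionID  uniqueidentifier NOT NULL ROWGUIDCOL
          CONSTRAINT PK_Inspection PRIMARY KEY NONCLUSTERED,
      InspectorName nvarchar(50) NOT NULL,
      InspectedOn   datetime NOT NULL
  );
  CREATE NONCLUSTERED INDEX IX_Inspection_InspectedOn
      ON dbo.Inspection (InspectedOn);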

Distributor

  • If your network connection is fast and reliable like Wi-Fi or Ethernet, your SSCE client has more than 32 MB of free RAM, and SQL Server isn’t experiencing any deadlocks due to contention with ETL operations or too many concurrent Merge Agents, create a new Merge Agent Profile based on the High Volume Server-to-Server Profile so that SQL Server will perform more work per round-trip and speed up your synchronizations.
  • If you’re using a 2G/3G Wireless Wide Area Network connection, create a Merge Agent Profile based on the Default Profile instead.  SQL Server will perform less work and use fewer threads per round-trip during synchronization than with the High Volume Server-to-Server Profile, which reduces server locking contention and makes your synchronizations more likely to succeed.
  • In order to prevent SQL Server from performing Metadata Cleanup every time a Subscriber synchronizes, set the -MetadataRetentionCleanup parameter to 0.
  • As SQL Server has to scale up to handle a higher number of concurrent users in the future, locking contention will increase due to more Merge Agents trying to perform work at the same time.  When this happens, adjust the parameters of the Default Profile so that both -SrcThreads and -DestThreads are equal to 1.

Publication

  • When defining the Articles you’re going to sync, only check the minimum tables and columns needed by the Subscriber to successfully perform its work.
  • For Lookup/Reference tables that aren’t modified by the Subscriber, mark those as Download-only to prevent change-tracking metadata from being sent to the Subscriber.
  • Despite the fact that column-level tracking sends less data over the air, stick with row-level tracking so SQL Server won’t have to do as much work to track the changes.
  • Use the default conflict resolver where the “Server wins” unless you absolutely need a different manner of picking a winner during a conflict.
  • Use Static Filters to reduce the amount of server data going out to all Subscribers.
  • Make limited use of Parameterized Filters, which are designed to reduce and further specify the subset of data going out to a particular Subscriber based on HOST_NAME(), which creates data partitions.  This powerful feature slows performance and reduces scalability with each additional filter, so it must be used sparingly (a filter sketch follows at the end of this list).
  • Keep filter queries simple and don’t use IN clauses, sub-selects or any kind of circular logic.
  • Strive to always create “well-partitioned” Articles where all changes that are uploaded/downloaded are mapped to only the single partition ID for best performance and scalability.
    • When using Parameterized Filters, always create non-overlapping data partitions where each row from a filtered table only goes to a single Subscriber instead of more than one which will avoid the use of certain Merge metadata tables.
    • Each Article in this scenario can only be published to a single Publication.
    • A Subscriber cannot insert rows that do not belong to its partition ID.
    • A Subscriber cannot update columns that are involved in filtering.
    • In a join filter hierarchy, a regular article cannot be the parent of a “well-partitioned” article.
    • The join filter in which a well-partitioned article is the child must have the join_unique_key set to a value of 1 which relates to the Unique key check box of the Add Join dialog.  This means there’s a one-to-one or one-to-many relationship with the foreign key.
    • Each “well-partitioned” Article can have only one subset or join filter. The article can have a subset filter and be the parent of a join filter, but cannot have a subset filter and be the child of a join filter.
  • Never extend a filter out to more than 4 joined tables.
  • Do not filter tables that are primarily lookup/reference tables, small tables, and tables with data that does not change.
  • Schedule the Snapshot Agent to run once per day to create an unfiltered schema Snapshot.
  • Set your Subscriptions to expire as soon as possible to keep the amount of change-tracking metadata SQL Server has to manage to an absolute minimum. Normally, set the value to 2 to accommodate 3-day weekends, since 24 hours are automatically added to the time to account for multiple time zones. If server-side change tracking isn’t needed and Subscribers are pulling down a new database every day and aren’t uploading data, then set the expiration value to 1.
  • Set Allow parameterized filters equal to True.
  • Set Validate Subscribers equal to HOST_NAME().
  • Set Precompute partitions equal to True to allow SQL Server to optimize synchronization by computing in advance which data rows belong in which partitions.
  • Set Optimize synchronization equal to False if Precompute partitions is equal to True.  Otherwise set it to True to optimize filtered Subscriptions by storing more metadata at the Publisher.
  • Set Limit concurrent processes equal to True.
  • Set Maximum concurrent processes equal to the number of SQL Server processor cores.  If excessive locking contention occurs, reduce the number of concurrent processes until the problem is fixed.
  • Set Replicate schema changes equal to True.
  • Check Automatically define a partition and generate a snapshot if needed when a new Subscriber tries to synchronize. This will reduce Initialization times since SQL Server creates and applies snapshots using the fast BCP utility instead of a series of slower SELECT and INSERT statements.
  • Add data partitions based on unique HOST_NAMEs and schedule the Snapshot Agent to create those filtered Snapshots nightly or on the weekend so they’ll be built using the fast BCP utility and waiting for new Subscribers to download in the morning.
  • Ensure that SQL Server has 1 processor core and 2 GB of RAM for every 100 concurrent Subscribers utilizing bi-directional sync. Add 1 core and 2 GB of RAM to the server for every additional 100 concurrent Subscribers you want to add to the system.  Never add more Subscribers and/or IIS servers without also adding new cores and RAM to the Publisher.
  • Turn off Hyperthreading in the BIOS of the SQL Server as it has been known to degrade SQL Server performance.
  • Do not add your own user-defined triggers to tables on a Published database since Merge places 3 triggers on each table already.
  • Add one or more Filegroups to your database to contain multiple, secondary database files spread out across many physical disks.
  • Limit use of large object types such as text, ntext, image, varchar(max), nvarchar(max) or varbinary(max) as they require a significant memory allocation and will negatively impact performance.
  • Set SQL Server’s minimum and maximum memory usage to within 2 GB of total system memory so it doesn’t have to allocate more memory on-demand.
  • Always use SQL Server 2008 R2 and Windows Server 2008 R2 since they work better together because they take advantage of the next generation networking stack which dramatically increases network throughput. They can also scale up as high as 256 cores.
  • Due to how Merge Replication tracks changes with triggers, Merge Agents, and tracking tables, it will create locking contention with DML/ETL operations.  This contention degrades server performance, which negatively impacts sync times with devices.  It should be mitigated by performing large INSERT/UPDATE/DELETE DML/ETL operations during a nightly maintenance window when Subscribers aren’t synchronizing.
  • Since Published databases result in slower DML/ETL operations, perform changes in bulk by using XML Stored Procedures to boost performance.
  • To improve the performance of pre-computed partitions when DML/ETL operations result in lots of data changes, ensure that changes to a Parent table in a join filter are made before corresponding changes in the child tables.  This means that when DML/ETL operations are pushing new data into SQL Server, they must add master data to the parent filter table first, and then add detail data to all the related child tables second, in order for that data to be pre-computed and optimized for sync.
  • Create filter partitions based on things that don’t change every day.  Partitions that are added to and deleted from SQL Server, and Subscribers that move from one partition to another, are very disruptive to the performance of Merge Replication.
  • Always perform initializations and re-initializations over Wi-Fi or Ethernet when the device is docked because this is the slowest operation, where the entire database must be downloaded instead of just deltas.  To determine rough estimates for initialization, divide the size of the resulting SSCE .sdf file by the bandwidth speed available to the device.  A file copy over the expected network will also yield estimates for minimum sync times.  These times don’t include the work SQL Server and IIS must perform to provide the data or data INSERT times on SSCE.
  • If your SQL Server Publisher hits a saturation point with too many concurrent mobile Subscribers, you can scale it out creating a Server/Push Republishing hierarchy. Put the primary SQL Server Publisher at the top of the pyramid and have two or more SQL Servers subscribe to it. These can be unfiltered Subscriptions where all SQL Servers get the same data or the Subscribers can filter their data feeds by region for example. Then have the Subscribing SQL Servers Publish their Subscription for consumption by mobile SSCE clients.
  • Create just a single Publication.
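
To make the filter guidance above concrete, here's a minimal sketch (hypothetical tables and columns) of the filter clauses as you'd enter them in the publication's row and join filter definitions:

  -- Static filter: every Subscriber downloads only active rows.
  WHERE IsActive = 1

  -- Parameterized filter: each Subscriber downloads only its own partition,
  -- keyed off the HOST_NAME() value the device supplies when it synchronizes.
  WHERE TechnicianID = HOST_NAME()

  -- Join filter: detail rows follow their filtered parent, keeping the
  -- hierarchy shallow and the partitions non-overlapping.
  WorkOrder.WorkOrderID = WorkOrderDetail.WorkOrderID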

Internet Information Services

  • Use the x64 version of the SQL Server Compact 3.5 SP2 Server Tools with Windows Server 2008 R2 running inside IIS 7.5.
  • Use a single Server Agent in a single Virtual Directory.
  • Ensure the IIS Virtual Directory where the Server Agent resides is on a fast solid-state drive that’s separate from the disk where Windows Server is installed to better support file I/O.
  • Use a low-end server with 2 processor cores and 2 GB of RAM to support 400 concurrent Subscribers queued at the same time.
  • Set the MAX_THREADS_PER_POOL Server Agent registry key equal to 2 to match the IIS processor cores and RAM. Do not set this value to a higher number than the number of cores.
  • Set the MAX_PENDING_REQUEST Server Agent registry key equal to 400 which means the Server Agent will queue up to 400 concurrent Subscribers waiting for one of the 2 worker threads to become available to sync with SQL Server.
  • Set the IIS Connection Limits property to 400 to prevent an unlimited number of connections reaching the Server Agent.
  • Add a new load-balanced IIS server for every additional 400 concurrent Subscribers you want to add to the system.

Subscriber

  • Use the appropriate x64, x86 or ARM version of SQL Server Compact 3.5 SP2 to take advantage of the PostSyncCleanup property of the SqlCeReplication object that can reduce the time it takes to perform an initial synchronization. Set the PostSyncCleanup property equal to 3 where neither UpdateStats nor CleanByRetention are performed.
  • Increase the Max Buffer Size connection string parameter to 1024 on a phone and 4096 on a PC to boost both replication and SQL query processing performance. If you have more RAM available, set those values even higher until you reach the law of diminishing returns.
  • Keep your SSCE database compact and fast by setting the Autoshrink Threshold connection string parameter to 10 so it starts reclaiming empty data pages once the database has become 10% fragmented.
  • Replication performance testing must be performed using actual PDAs to observe how available RAM, storage space and CPU speed affect moving data into the device’s memory area and how quickly this data is inserted into the SSCE database tables.  Since the SSCE database doubles in size during replication, the device must have enough storage available or the operation will fail.  Having plenty of available RAM is important so that SSCE can utilize its memory buffer to complete a Merge Replication operation more quickly.  With plenty of available RAM and storage, a fast CPU will make all operations faster.
  • The PDA must have at least an extra 32 MB of available free RAM that can be used by the .NET Compact Framework (NETCF) application.  If additional applications are running on the device at the same time, even more RAM is needed.  If a NETCF application has insufficient RAM, it will discard its compiled code and run in interpreted mode, which will slow the application down.  If the NETCF app is still under memory pressure after discarding compiled code, Windows Mobile will first tell the application to return free memory to the operating system and then will terminate the app if needed.
  • Set the CompressionLevel property of the SqlCeReplication object to 0 for fast connections and increment it from 1 to 6 on slower connections like GPRS to increase speed and reduce bandwidth consumption.
  • Tune the ConnectionRetryTimeout, ConnectTimeout, ReceiveTimeout and SendTimeout properties of the SqlCeReplication object based on expected bandwidth speeds:
    Property               | High Bandwidth | Medium Bandwidth | Low Bandwidth
    ConnectionRetryTimeout |             30 |               60 |           120
    ConnectTimeout         |           3000 |             6000 |         12000
    ReceiveTimeout         |           1000 |             3000 |          6000
    SendTimeout            |           1000 |             3000 |          6000
  • You can decrease potentially slow SSCE file I/O by adjusting the Flush Interval connection string parameter to write committed transactions to disk less often than the default of every 10 seconds.  Test longer intervals between flushes like 20 or 30 seconds. Keep in mind that these transactions can be lost if the disk or system fails before flushing occurs so be careful.
  • When replicating data that has been captured in the field by the device, perform Upload-only syncs to shorten the duration.

Storage

  • Use a Fibre Channel SAN with 15k RPM or solid-state disks for best I/O performance.
  • Databases should reside on a RAID 10, unshared LUN comprised of at least 6 disks.
  • Database logs should reside on a RAID 10, unshared LUN comprised of at least 6 disks.
  • Tempdb should reside on a RAID 10, unshared LUN comprised of at least 6 disks.
  • The Tempdb log should reside on a RAID 10, unshared LUN comprised of at least 6 disks.
  • The Snapshot share should reside on a RAID 10, unshared LUN comprised of at least 6 disks.  This disk array should be large enough to accommodate a growing number of filtered Snapshots. Snapshot folders for Subscribers that no longer use the system must be manually deleted.
  • Merge Replication metadata tables should reside on a RAID 10, unshared LUN comprised of at least 6 disks.
  • Increase your Host Bus Adapter (HBA) queue depths to 64.
  • Your Publication database should be broken up into half the number of files as the SQL Server has processor cores. Each file must be the same size.
  • Tempdb should be pre-sized with an auto-growth increment of 10%. It should be broken up into the same number of files as the SQL Server has processor cores. Each file must be the same size.
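
Here's a minimal sketch of the file layout guidance above (logical names, paths, and sizes are hypothetical; size each file for your own workload):

  -- Pre-size tempdb and grow it in 10% increments; add data files until the
  -- count matches the number of processor cores, all the same size.
  ALTER DATABASE tempdb
      MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 10%);
  ALTER DATABASE tempdb
      ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
                SIZE = 4096MB, FILEGROWTH = 10%);

  -- Spread the Publication database across additional, equally sized files
  -- (half as many files as the server has processor cores).
  ALTER DATABASE SalesPublication
      ADD FILE (NAME = SalesPublication_Data2,
                FILENAME = 'E:\Data\SalesPublication_Data2.ndf',
                SIZE = 8192MB, FILEGROWTH = 10%);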

High Availability

  • Load-balance the IIS servers to scale them out. Enable Server Affinity (stickiness) since the Replication Session Control Blocks that transmit data between the Server Agent and SSCE are stateful. Test to ensure that your load-balancer is actually sending equal amounts of Subscriber sync traffic to each IIS server.  Some load-balancers can erroneously send all traffic to a single IIS server if not properly configured.
  • Implement Windows Clustering so that SQL Server can failover to a second node.
  • Use SQL Server Database Mirroring so that your Published database will fail over to a standby server.
  • Make a second SQL Server into an unfiltered Subscriber to your Publisher so that it can take over Merge Replication duties for mobile clients as a Republisher if the primary SQL Server fails. SSCE clients would just have to reinitialize their Subscriptions to begin synchronizing with the new Republisher.

Ongoing Maintenance

  • Use the Replication Monitor to have a real-time view of the synchronization performance of all your Subscribers.
  • Use the web-based SQL Server Compact Server Agent Statistics and Diagnostics tools to monitor the health and activity of the Server Agent running on IIS.
  • Create a SQL Job to execute the sp_MSmakegeneration stored procedure after large DML operations. Regular execution after INSERTING/UPDATING/DELETING data from either DML/ETL operations or after receiving lots of changes from Subscribers will maintain subsequent sync performance. Executing this stored procedure from the server-side is preferable to having it executed as a result of a Subscriber sync which would block all other Subscribers.
  • During your nightly maintenance window, rebuild the indexes and update the statistics of the following Merge Replication metadata tables (a combined T-SQL sketch follows at the end of this list):
    • MSmerge_contents
    • MSmerge_tombstone
    • MSmerge_genhistory
    • MSmerge_current_partition_mappings
    • MSmerge_past_partition_mappings
    • MSmerge_generation_partition_mappings
  • If you notice performance degradation during the day due to a large number of Subscribers making large changes to the database, you can update the statistics (with fullscan) of the Merge Replication metadata tables more frequently throughout the day to force stored proc recompiles and get a better query plan.
    • UPDATE STATISTICS MSmerge_generation_partition_mappings WITH FULLSCAN
    • UPDATE STATISTICS MSmerge_genhistory WITH FULLSCAN
  • Rebuild/defrag indexes on your database tables and Merge Replication metadata tables throughout the day to reduce locking contention and maintain performance.
  • Use the Missing Indexes feature of SQL Server to tell you which indexes you could add that would give your system a performance boost. Do not add recommended indexes to Merge system tables.
  • Use the Database Engine Tuning Advisor to give you comprehensive performance tuning recommendations that cover every aspect of SQL Server.
  • Monitor the performance of the following counters:
    • Processor Object: % Processor Time: This counter represents the percentage of processor utilization. A value over 80% is a CPU bottleneck.
    • System Object: Processor Queue Length: This counter represents the number of threads that are delayed in the processor Ready Queue and waiting to be scheduled for execution. A value over 2 is a bottleneck and shows that there is more work available than the processor can handle. Remember to divide the value by the number of processor cores on your server.
    • Memory Object: Available Mbytes: This counter represents the amount of physical memory available for allocation to a process or for system use. Values below 10% of total system RAM indicate that you need to add additional RAM to your server.
    • PhysicalDisk Object: % Disk Time: This counter represents the percentage of time that the selected disk is busy responding to read or write requests. A value greater than 50% is an I/O bottleneck.
    • PhysicalDisk Object: Average Disk Queue Length: This counter represents the average number of read/write requests that are queued on a given physical disk. If your disk queue length is greater than 2, you’ve got an I/O bottleneck with too many read/write operations waiting to be performed.
    • PhysicalDisk Object: Average Disk Seconds/Read and Disk Seconds/Write: These counters represent the average time in seconds of a read or write of data to and from a disk. A value of less than 10 ms is what you’re shooting for in terms of best performance. You can get by with subpar values between 10 – 20 ms but anything above that is considered slow. Times above 50 ms represent a very serious I/O bottleneck.
    • PhysicalDisk Object: Average Disk Reads/Second and Disk Writes/Second: These counters represent the rate of read and write operations against a given disk. You need to ensure that these values stay below 85% of a disk’s capacity by adding disks or reducing the load from SQL Server. Disk access times will increase exponentially when you get beyond 85% capacity.
  • A limited number of database schema changes can be made and synchronized down to SSCE Subscribers without any code changes which makes it easier to update your system as it evolves over time.
  • Use a Merge Replication Test Harness to stress test the entire system.  The ability to simulate hundreds or thousands of concurrent synchronizing Subscribers allows you to monitor performance and the load on the system.  This is helpful in properly configuring and tuning SQL Server, IIS, and the Windows Mobile devices.  It will tell you where you’re having problems and it will let you predict how much server hardware you will need to support growing numbers of Subscribers over time.  It’s also very important to simulate worst-case scenarios that you never expect to happen.
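
As referenced above, here's a minimal sketch of that nightly maintenance, combining the sp_MSmakegeneration job with the index and statistics work on the Merge metadata tables (run it while Subscribers aren't synchronizing, and adjust the table list to what your Publication actually uses):

  -- Roll pending changes into generations from the server side so a
  -- Subscriber sync doesn't get stuck doing it (and blocking everyone else).
  EXEC sp_MSmakegeneration;

  -- Rebuild indexes on the Merge Replication metadata tables.
  ALTER INDEX ALL ON dbo.MSmerge_contents REBUILD;
  ALTER INDEX ALL ON dbo.MSmerge_tombstone REBUILD;
  ALTER INDEX ALL ON dbo.MSmerge_genhistory REBUILD;
  ALTER INDEX ALL ON dbo.MSmerge_current_partition_mappings REBUILD;
  ALTER INDEX ALL ON dbo.MSmerge_past_partition_mappings REBUILD;
  ALTER INDEX ALL ON dbo.MSmerge_generation_partition_mappings REBUILD;

  -- Refresh statistics so the Merge stored procedures get good query plans.
  UPDATE STATISTICS dbo.MSmerge_contents WITH FULLSCAN;
  UPDATE STATISTICS dbo.MSmerge_genhistory WITH FULLSCAN;
  UPDATE STATISTICS dbo.MSmerge_generation_partition_mappings WITH FULLSCAN;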

I hope this information sufficiently empowers you to take on the largest MEAP solutions that involve SQL Server Merge Replication and SQL Server Compact.  If you need a deeper dive, go check out my book on Enterprise Data Synchronization http://www.amazon.com/Enterprise-Synchronization-Microsoft-Compact-Replication/dp/0979891213/ref=sr_1_1?ie=UTF8&s=books&qid=1271964573&sr=1-1 over at Amazon.  Now go build a fast and scalable solution for your company or your customers.

Best Regards,

Rob

P.S.  If your solution doesn’t require all the advanced features found in Merge Replication, I highly recommend you use Remote Data Access (RDA).  This is a much simpler sync technology that’s extremely fast, scalable, and easier to manage.

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”


Microsoft SQL Server Compact 3.5 SP2 has Arrived

SQL Server Compact

My favorite embedded database for Windows Phones, laptops, tablets and desktops has been released to the Web along with Visual Studio 2010.

New features for SQL Server Compact 3.5 SP2 include:

  • Supports working with a SQL Server Compact 3.5 database using the Transact-SQL Editor in Visual Studio 2010. The Transact-SQL Editor can be used to run free-text Transact-SQL queries against a SQL Server Compact 3.5 database. The Transact-SQL Editor also provides the ability to view and save detailed estimated and actual query show-plans for SQL Server Compact 3.5 databases. Previously, the functionality provided by the Transact-SQL Editor was only available through SQL Server Management Studio.
  • New classes and members named SqlCeChangeTracking have been added to the System.Data.SqlServerCe namespace to expose the internal change tracking feature used by Sync Framework to track changes in the database. The SQL Server Compact change tracking infrastructure maintains information about inserts, deletes, and updates performed on a table that has been enabled for change tracking. This information is stored both in columns added to the tracked table and in system tables maintained by the tracking infrastructure. By using System.Data.SqlServerCe.SqlCeChangeTracking one can configure, enable, and disable change tracking on a table, and also access the tracking data maintained for a table. The API can be used to provide functionality in a number of scenarios. For example it can be used to provide custom implementations of client-to-server or client-to-client sync for occasionally connected systems (OCS) or to implement a custom listener application.
  • The managed assemblies of SQL Server Compact for use by applications that privately deploy SQL Server Compact are installed in the folder %Program Files%\Microsoft SQL Server Compact Edition\v3.5\Private. Using these assemblies ensures that the application uses the privately deployed version of Compact even when a lower version of SQL Server Compact 3.5 is installed in the GAC.
  • Visual Studio 2010 installs both the 32-bit and 64-bit versions of SQL Server Compact 3.5 SP2 on a 64-bit machine. If a SQL Server Compact application is deployed using ClickOnce in Visual Studio 2010, then both the 32-bit and the 64-bit version of SQL Server Compact are installed on a 64-bit machine.
  • SQL Server Compact 3.5 SP2 adds support for Windows Mobile 6.5, Windows 7 and Windows Server 2008 R2, and can sync data using Merge Replication and RDA with SQL Server 2008 R2 November CTP.
  • The SqlCeReplication object gets a new property called PostSyncCleanup which you can use to prevent SQL Server Compact from Updating Statistics after an initial Merge Replication initialization.  This has the potential to shave a substantial amount of time off of your initial syncs depending on the size of your database.

 

In addition to these new features, the following hotfixes from SQL Server 2005 Compact Edition or SQL Server Compact 3.5 SP1 have been rolled up in SQL Server Compact 3.5 SP2:

  • http://support.microsoft.com/kb/953259: Error message when you run an SQL statement that uses the Charindex function in a database that uses the Czech locale in SQL Server 2005 Compact Edition: “The function is not recognized by SQL Server Compact Edition”
  • http://support.microsoft.com/kb/958478: Error message when you run a “LINQ to Entities” query that uses a string parameter or a binary parameter against a SQL Server Compact 3.5 database: “The ntext and image data types cannot be used in WHERE, HAVING, GROUP BY, ON, or IN clauses”
  • http://support.microsoft.com/kb/959697: Error message when you try to open a database file from a CD in SQL Server Compact 3.5 with Service Pack 1: “Internal Error using read only database file”
  • http://support.microsoft.com/kb/960142: An error message is logged, and the synchronization may take a long time to finish when you use an application to synchronize a merge replication that contains a SQL Server 2005 Compact Edition subscriber
  • http://support.microsoft.com/kb/963060: An error message is logged, and the synchronization may take a long time to finish when you synchronize a merge replication that contains a SQL Server Compact 3.5 subscriber: “UpdateStatistics Start app=<UserAppName>.exe”
  • http://support.microsoft.com/kb/967963: Some rows are deleted when you repair a database by using the Repair method together with the RepairOption.RecoverCorruptedRows option in SQL Server 2005 Compact Edition and in SQL Server Compact 3.5
  • http://support.microsoft.com/kb/968171: Error message when you try to create an encrypted database in SQL Server 2005 Compact Edition: “The operating system does not support encryption”
  • http://support.microsoft.com/kb/968864: Error message when you run a query in SQL Server Compact 3.5: “The column name cannot be resolved to a table. Specify the table to which the column belongs”
  • http://support.microsoft.com/kb/969858: Non-convergence occurs when you synchronize a SQL Server Compact 3.5 client database with the server by using Sync Services for ADO.NET in a Hub-And-Spoke configuration
  • http://support.microsoft.com/kb/970269: Access violations occur when you run an application under heavy load conditions after you install the 64-bit version SQL Server Compact 3.5 Service Pack 1
  • http://support.microsoft.com/kb/970414: Initial synchronization of a replication to SQL Server Compact 3.5 subscribers takes significant time to finish
  • http://support.microsoft.com/kb/970915: Error message when you synchronize a merge replication with SQL Server 2005 Compact Edition subscribers: “A column ID occurred more than once in the specification. HRESULT 0x80040E3E (0)”
  • http://support.microsoft.com/kb/971027: Error message when you upgrade a very large database to SQL Server Compact 3.5: “The database file is larger than the configured maximum database size. This setting takes effect on the first concurrent database connection only”
  • http://support.microsoft.com/kb/971273: You do not receive error messages when you run a query in a managed application that returns columns of invalid values in SQL Server Compact 3.5
  • http://support.microsoft.com/kb/971970: You cannot insert rows or upload changes into the SQL Server 2005 Compact Edition subscriber tables after you run the “sp_changemergearticle” stored procedure or you add a new merge publication article when another article has an IDENTITY column
  • http://support.microsoft.com/kb/972002: Error message when you try to create an encrypted database in SQL Server Compact 3.5: “The operating system does not support encryption”
  • http://support.microsoft.com/kb/972390: The application enters into an infinite loop when you run an application that uses Microsoft Synchronization Services for ADO.NET to synchronize a SQL Server Compact 3.5 database
  • http://support.microsoft.com/kb/972776: When the application calls the SqlCeConnection.Close method or the SqlCeConnection.Dispose method in SQL Server Compact 3.5, the application may stop responding at the method call
  • http://support.microsoft.com/kb/974068: Error message when an application inserts a value into a foreign key column in SQL Server Compact 3.5: “No key matching the described characteristics could be found within the current range”

 

Web downloads for SQL Server Compact 3.5 SP2 are listed below:

SQL Server Compact 3.5 SP2 for Windows desktop (32-bit and 64-bit)

Note that the file available for download is a 6 MB self-extracting executable (exe) file that contains the 32-bit and the 64-bit Windows Installer (MSI) files for installing SQL Server Compact 3.5 SP2 on a 32-bit and a 64-bit Computer. It is important to install both the 32-bit and the 64-bit version of the SQL Server Compact 3.5 SP2 MSI on a 64-bit Computer. Existing SQL Server Compact 3.5 applications may fail if only the 32-bit version of the MSI file is installed on the 64-bit computer. Developers should chain both the 32-bit and the 64-bit MSI files with their applications and install both of them on the 64-bit Computer. Refer to the KB article for more information.

SQL Server Compact 3.5 SP2 for Windows mobile devices (all platforms & processors)

SQL Server Compact 3.5 SP2 Server Tools (32-bit and 64-bit)

SQL Server Compact 3.5 SP2 Books Online (Note that the books online will be available for download by the third week of April 2010)

SQL Server Compact 3.5 SP2 Samples

Visual Studio 2010 and .NET Framework 4

This is a great release for SQL Server Compact that adds some important new features, squashes a bunch of bugs and adds support for our newest operating systems.  I strongly recommend you update your existing SSCE runtimes with SQL Server Compact 3.5 SP2.

Keep on Synching,

Rob

Sharing my knowledge and helping others never stops, so connect with me on my blog at http://robtiffany.com , follow me on Twitter at https://twitter.com/RobTiffany and on LinkedIn at https://www.linkedin.com/in/robtiffany

Sign Up for my Newsletter and get a FREE Chapter of “Mobile Strategies for Business!”
