I've been doing some Google searching and high availability is EASY to find... but I'm looking to do high-speed replication of an SQL database.

Basically, I have a DB that is accessed by a number of machines to check if a customer has enough credits before making a transaction. Currently the number of machines accessing the DB is so high that we've built a queue to be able to process all the requests.

We're looking to replicate the "transaction" DB in three geographical areas: USA, Europe, and Asia. The problem is that all three servers need to be kept in sync very quickly.

I'm wondering if this is even possible, OR if we will need to work some special logic into a geographical load balancer to keep a customer from using too many credits.

For example, a customer has 1000 credits, uses 500 on Asia's server, 500 on the USA server, and is then still permitted to use 500 on the European server... this can't happen.

Ideas??? Suggestions on where to find more information???
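To make the over-spend example concrete: it's really a race on a single credit balance, and whatever replication or routing scheme gets picked has to preserve an atomic "check and deduct" against one authoritative copy. A minimal sketch of that deduction (SQLite and a made-up customers table/try_spend helper, just so it runs standalone; this is not anything from the actual system):

import sqlite3

# One authoritative balance; the UPDATE both checks and deducts in a single
# statement, so two concurrent 500-credit spends can't both succeed on 1000.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, credits INTEGER)")
conn.execute("INSERT INTO customers (id, credits) VALUES (1, 1000)")
conn.commit()

def try_spend(conn, customer_id, amount):
    """Deduct credits only if the balance covers the amount."""
    cur = conn.execute(
        "UPDATE customers SET credits = credits - ? "
        "WHERE id = ? AND credits >= ?",
        (amount, customer_id, amount),
    )
    conn.commit()
    return cur.rowcount == 1  # True only if the deduction actually happened

print(try_spend(conn, 1, 500))  # True  (1000 -> 500)
print(try_spend(conn, 1, 500))  # True  (500 -> 0)
print(try_spend(conn, 1, 500))  # False (insufficient credits)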
12/15/2009 6:26:53 AM
Out of curiosity, how many connections/second are you dealing with? MS SQL Server can handle thousands at a time under nominal conditions.

I'm interested in the proper solution to your problem.
12/15/2009 8:22:59 AM
I'm no expert, but is there any particular reason you'd think MySQL multi-master replication wouldn't work for you?

http://en.wikipedia.org/wiki/Multi-master_replication
http://dev.mysql.com/doc/refman/5.1/en/replication-howto.html
http://onlamp.com/onlamp/2006/04/20/advanced-mysql-replication.html
12/15/2009 8:42:04 AM
This is sort of what Amazon does.

Yes, your best bet is to use a load balancer to direct users. It will reduce your efficiency slightly, but it will allow you to scale and will prevent the critical data mismatch scenario you just described.
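One way to picture "directing users": pin each customer to a single home region so all of their credit deductions hit one authoritative server, and let the other regions serve read-only or unrelated traffic. A rough sketch (region names and the hash-based routing rule are assumptions for illustration; a geo-IP or account-region lookup would be the more realistic choice):

import hashlib

# Hypothetical customer -> home-region mapping; stable, so the same customer
# always lands on the same authoritative server for credit deductions.
REGIONS = ["usa", "europe", "asia"]

def home_region(customer_id):
    digest = hashlib.sha1(customer_id.encode()).hexdigest()
    return REGIONS[int(digest, 16) % len(REGIONS)]

print(home_region("customer-42"))    # always the same region for this customer
print(home_region("customer-1337"))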
12/15/2009 12:17:35 PM
llama
12/17/2009 11:04:17 PM
llama, looking at the article http://onlamp.com/onlamp/2006/04/20/advanced-mysql-replication.html it seems that what I'm looking to do might be possible with circular replication plus failover recovery.

The problem comes down to the speed and bandwidth required to do so, and how this can be done from our services cluster.
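On the speed question, the thing to watch in a circular MySQL setup is how far each node trails the one it replicates from. A rough monitoring sketch (hostnames and credentials are placeholders; Seconds_Behind_Master comes from MySQL's SHOW SLAVE STATUS output and is only a coarse indicator of lag):

import MySQLdb
import MySQLdb.cursors

# Placeholder hosts for the three regional nodes in the replication ring.
REPLICAS = ["db-usa.example.com", "db-europe.example.com", "db-asia.example.com"]

def replication_lag(host):
    """Return how many seconds this node is behind its master, or None."""
    conn = MySQLdb.connect(host=host, user="monitor", passwd="secret",
                           cursorclass=MySQLdb.cursors.DictCursor)
    try:
        cur = conn.cursor()
        cur.execute("SHOW SLAVE STATUS")
        row = cur.fetchone()
        return row["Seconds_Behind_Master"] if row else None
    finally:
        conn.close()

for host in REPLICAS:
    print(host, replication_lag(host))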
12/17/2009 11:18:21 PM
3M per hour?

holy shit
12/18/2009 9:11:49 AM
12/18/2009 9:55:24 AM
^^ 3M/hour is only slightly more than 800/second, which is well within SQL Server's tolerance of 2,000-3,000/second.

But yeah, he already said the issue was the internet connection being choked, and it makes more sense now.
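(Quick sanity check on that per-second figure:)

# 3 million transactions/hour expressed per second
per_second = 3_000_000 / 3600
print(round(per_second))  # ~833/second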
12/18/2009 2:56:51 PM
12/18/2009 3:04:36 PM
Well, it's not capped by any means, but those numbers are easily achievable. There's a site that performs transaction benchmarks by hardware and database type to prove my point:

Here's a server that handles 1,379 transactions/second (tps):
http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=107032701

And here's a really expensive server that achieved 20,523 tps:
http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=105112801

Interesting article, by the way:
http://sqlblog.com/blogs/paul_nielsen/archive/2007/12/12/10-lessons-from-35k-tps.aspx
12/18/2009 6:17:00 PM
http://www.NetApp.com
12/19/2009 11:45:06 AM
^^ ok, I was wondering if you knew something I didn't about inherent limitations of SQL Server. Where I work, our database handles about 2k tps at peak, so it got me thinking we might be hitting a wall at some point.

[Edited on December 19, 2009 at 12:28 PM. Reason : i'm not a dba by any means so i wasn't sure]
12/19/2009 12:25:53 PM