
 Performance Degradation over time


sfortner
Yak Posting Veteran

63 Posts

Posted - 2011-06-13 : 13:54:30
I'm using a Sage Timesheet database that I can't change (other than adding indexes and new custom tables), and we're seeing pretty major performance problems over time. A consultant we had in thought that the lack of primary and foreign keys may contribute to this. The database itself has very few clustered indexes; it uses mostly non-clustered indexes, some of which have unique constraints. In addition to the Sage Timesheet application, we also have a custom web app inserting/updating/deleting employee time against this database. We have several custom stored procedures that are acceptably fast on an older copy of the database but slow considerably as more data is added to the tables. All of the sprocs that perform badly happen to be pivoted in the front end. I've already rewritten two to use the PIVOT operator and pivot on the SQL side, but I haven't updated the UI to pull these in yet, so I'm not sure how much faster it will be. Based on the complexity of the columns to pivot and the performance issues with some of the tables, the rewrites may help immensely.
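For what it's worth, the rewrites use the standard PIVOT shape, roughly like this (the day/hours columns here are made up for illustration; the real pivot columns are more complex):

-- Pivot hours per employee into one column per day, on the SQL side
-- instead of in the front end. DayName and Hours are hypothetical
-- column names, not the actual Sage schema.
SELECT EmployeeID, [Mon], [Tue], [Wed], [Thu], [Fri]
FROM (SELECT EmployeeID, DayName, Hours
      FROM dbo.tsp_TimeEntries) AS src
PIVOT (SUM(Hours) FOR DayName IN ([Mon], [Tue], [Wed], [Thu], [Fri])) AS pvt;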

However, my question is why does an older copy of this database perform well while the newer database is quite a bit slower? Of course there's more data in all the tables, but not a significant amount. For example, the main tsp_TimeEntries table in the older database has around 192,000 rows, while the production version has 262,000. The total database (MDF) size is 872 MB. To run a daily report, it takes 1 - 2 seconds to build the report page from our web app when run locally from my machine (configuration 1), but 2 - 3 seconds to run from another test web server (configuration 2). Running the same report against a copy of the current production database on my laptop takes 7 - 8 seconds (configuration 3), while running from the same test web server but pointing to the production database takes 11 - 12 seconds (configuration 4). Here are some metrics in a grid:

Action                    | Config 1  | Config 2    | Config 3  | Config 4
--------------------------+-----------+-------------+-----------+------------
Build Daily Report        | 1 - 2 sec | 2 - 3 sec   | 7 - 8 sec | 11 - 12 sec
Refresh Daily Report      | 1/2 sec   | 1/2 - 1 sec | 5 sec     | 7 - 8 sec
Build Weekly Report       | 4 - 5 sec | 5 - 6 sec   | 15 sec    | 22 - 23 sec
Refresh Weekly Report     | 1 - 2 sec | 2 sec       | 5 - 6 sec | 14 - 17 sec
Build Main Summary Screen | 2 sec     | 3 - 4 sec   | 7 - 8 sec | 10 - 11 sec


The first three configurations have the database running on my laptop, a dual-core 2.8 GHz HP with 4 GB of RAM, SQL Server 2005 Developer Edition with no service packs (9.0.1399). The production DB server is a VM on ESX 4.1 with SAN storage, 8 processors, 32 GB of RAM, and SP3 (9.0.4035).

The stored procedures used to generate the data are identical on all databases, and the tables and indexes are identical as well (according to Red Gate's SQL Compare 8). I tried rebuilding the indexes and statistics on the local copy of the production database (restored from a 6/8/2011 backup), but it made very little performance difference for configuration 3. The indexes and/or statistics may need to be redesigned to offer better performance, but the consultant/DBA who came in ran the Database Engine Tuning Advisor against a SQL trace from last week and applied its recommendations, and that also made little difference, probably because the sprocs used the same indexes/statistics as before the changes.

With few exceptions, the execution plans show index seeks, not table scans or index scans. There are a few small tables/heaps where table scans occur (maybe one), and the sproc responsible for building the main summary screen does an index scan on a small table (but the consultant says it's only 6% of the plan). It almost seems like something else is wrong; a little additional data doesn't seem like it should slow things down this much. We have a physical quad-core box with 8 GB of RAM we may try, but as you can see, the database itself is slower even on my laptop with no one else hitting it. It is interesting that the production box is quite a bit slower than my laptop with the same database, though.
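For reference, the rebuild I tried was roughly along these lines, run table by table (shown here just for the main table; the exact maintenance commands are my own, not from Sage):

-- Rebuild every index on the main time-entries table, then refresh
-- its statistics with a full scan.
ALTER INDEX ALL ON dbo.tsp_TimeEntries REBUILD;
UPDATE STATISTICS dbo.tsp_TimeEntries WITH FULLSCAN;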

--Steve

robvolk
Most Valuable Yak

15732 Posts

Posted - 2011-06-13 : 14:00:39
Have you contacted Sage customer support?

sfortner
Yak Posting Veteran

63 Posts

Posted - 2011-06-13 : 14:09:22
Sort of - we have three consultants who have worked with it in one form or another. I honestly don't know to what extent they have checked with Sage on this particular matter. It's probably wise to start with Sage first. Part of the problem is that while Timesheet itself is slow from time to time, I haven't tracked its performance as closely as our custom web app's.

--Steve

robvolk
Most Valuable Yak

15732 Posts

Posted - 2011-06-13 : 14:50:14
You should contact customer support first for any third-party product you're using, especially before calling in a consultant, and even more so before making any changes to the database. I'd be very surprised if they allow you to do that; most companies would consider it a violation of the license agreement and terminate support. If you're not sure of their position on that, I'd suggest not mentioning the changes you've made, and be prepared to undo them (since they don't appear to be helping anyway, that's no great loss).

Be wary of any "consultant" who blindly accepts or deploys the recommendations of the Database Engine Tuning Advisor. Did they gather any performance counters or traces before and after they added the indexes (besides the DTA workload)? Did they compare those metrics? Did they test each server in turn? Report generation time is not a reliable performance metric; it's basically useless. You'd want to look at trace durations, disk I/O, CPU, and memory counters before and after to see whether adding the indexes affected them.
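For example, a quick look at sys.dm_exec_query_stats will show you cumulative duration and I/O per statement since the plan was cached; something along these lines:

-- Top statements by total elapsed time, with per-execution averages.
-- Times are in microseconds in this DMV.
SELECT TOP 10
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microsec,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;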

It's pointless to compare performance among machines that aren't all used in production. It's possible your performance problem is entirely hardware or system related, e.g. a slow network, fragmented disks, or heavy memory paging. The DTA will not recognize these kinds of problems, and tuning queries probably won't fix them.

You might want to get a trial version of SQLSentry or Confio Ignite and see if they can find any additional problems. Confio does some wait-stat analysis that can help pinpoint a bottleneck more precisely than the DTA. You can also try some of Glenn Berry's diagnostic queries:

http://sqlserverperformance.wordpress.com/2010/05/02/recap-of-april-2010-dmv-a-day-series/

If nothing else, they'll help to eliminate specific things like CPU, memory, and so on, and you can focus on other aspects like disk, network, or query plans.
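For a rough version of the wait-stat analysis Confio does, you can also query sys.dm_os_wait_stats directly; something like this (the exclusion list here is just a starting point, not exhaustive):

-- Top waits accumulated since the last restart, ignoring a few
-- common idle/system wait types.
SELECT TOP 10
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'WAITFOR',
                        'BROKER_TASK_STOP', 'RESOURCE_QUEUE')
ORDER BY wait_time_ms DESC;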

sfortner
Yak Posting Veteran

63 Posts

Posted - 2011-07-01 : 14:52:13
To confirm, our consultants worked with Sage to get SQL code for integrating with their database, so that's not a problem. Actually, the other question you helped me with was based on some code from Sage. The DBA/consultant actually helped us quite a bit, and the fix ended up being a combination of updating indexes/statistics, eliminating function calls in a view, and rewriting inefficient stored procedures. Once we applied the new indexes/statistics, converted the view to a table, and rewrote the long-running stored procedures, we started seeing very good performance. To be honest, the original post came out of frustration: I was out of ideas and wasn't aware of how slow the view that was calling functions actually was. In hindsight, I'm not sure how much the updates to the indexes and statistics helped, but all in all it seems we had numerous problems that contributed to the performance issues, and the OP was written after only an initial visit.

I actually had run several traces previously, and they mirrored the performance I saw from the reports to some extent. The only thing that concerned me was that the trace times were much quicker than the report times in some cases, but the rewrites trimmed the report-generation time significantly and narrowed the gap between the trace times and the report times.

I had also read on sqlteam.com previously that you should do any pivots/crosstabs in application code, and that's what I was doing, but it was slow the old way. Replacing the view with a table really helped, since I think the function calls in the view were being executed for each row, and they apparently slowed things down a lot when several users were on the system (or at least it seemed that way). That view slowed significantly as more rows were added to the several tables and views it used, which really compounded the problem. I rewrote the report sprocs with the PIVOT operator, and building the weekly reports seemed even faster. The daily report is about the same to build initially, but a little slower to refresh the page if you change the date and re-call the sproc. It's still plenty fast, and only shows the loading GIF for half a second or so.
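To illustrate the pattern (the object names below are made up, not the actual Sage or custom schema): the old view looked roughly like the first statement, with a scalar function executed once per row, and we replaced it with a table populated by a set-based join:

-- Old shape: a scalar function inside a view runs once per row returned.
-- vw_TimeEntrySummary and fn_GetProjectName are hypothetical names.
CREATE VIEW dbo.vw_TimeEntrySummary AS
SELECT te.EntryID,
       te.EmployeeID,
       dbo.fn_GetProjectName(te.ProjectID) AS ProjectName
FROM dbo.tsp_TimeEntries AS te;
GO

-- New shape: materialize the same data into a table with a plain join,
-- refreshed on a schedule, so nothing executes per row at query time.
SELECT te.EntryID,
       te.EmployeeID,
       p.ProjectName
INTO dbo.TimeEntrySummary
FROM dbo.tsp_TimeEntries AS te
JOIN dbo.Projects AS p ON p.ProjectID = te.ProjectID;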

So, all in all, our performance is pretty good - I think we just needed more analysis than what I could provide on 6/13. Our DBA/consultant also ran FileMon to check I/O on a follow-up visit, so I think he checked everything out pretty thoroughly. The results are what proved to me he was worth the effort. Thanks also for the links and product suggestions - I had never really seen much beyond DTA analysis.

BTW, performance is now about 1 1/2 seconds to build the daily report and about 1/2 second to refresh it; measured from SQL Server, the sproc averages about 273 ms. The weekly report builds in about 2 seconds, refreshes in 1/2 second, and takes about 380 ms to run from SSMS. The biggest gain was the main summary screen - it now builds in less than a second and takes 93 ms to run from SSMS. Another consultant had written that sproc with nested (inner and outer) cursors, and replacing those with a FOR XML statement and parsing the notes was apparently much faster.
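For anyone curious, the cursor replacement follows the usual FOR XML PATH concatenation pattern, roughly like this (the Notes table and column names are made up for illustration):

-- Concatenate all notes for each time entry in one set-based pass,
-- instead of looping over entries and notes with nested cursors.
-- STUFF(..., 1, 2, '') strips the leading '; ' separator.
SELECT te.EntryID,
       STUFF((SELECT '; ' + n.NoteText
              FROM dbo.Notes AS n
              WHERE n.EntryID = te.EntryID
              FOR XML PATH('')), 1, 2, '') AS AllNotes
FROM dbo.tsp_TimeEntries AS te;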


--Steve
