Author |
Topic |
ekelmans
Starting Member
7 Posts |
Posted - 2011-11-03 : 07:09:35
|
Hi Graz,

I've stumbled upon a bug in SQL Profiler that somehow corrupts a trace file at a certain point. When I try to load that trace in ClearTrace I get the message "Error: Failed to read next event." - but only after it has successfully loaded almost 1,400,000 rows! Hitting Cancel at 1.3 million rows also discards everything read so far. When I load the trace in Profiler, I can see the 1.4 million usable rows.

How about a setting in the options that lets ClearTrace use the readable part and ignore everything after the failed read? Or a prompt asking whether I want to keep the readable part when the error is hit, or when I press Cancel at 1.3 million rows. And if it's a trace file in a rollover series, continue with the rest of the series, using whatever is readable. That way I can use whatever statistics are salvageable, which is much better than nothing :)

Great job with ClearTrace so far, dude - you really make our DBA lives a lot easier when it comes to reading traces.

Theo Ekelmans
NL

Files to process: 1 ( 2.111,5 MB )
Clearing Target Tables...
************************************************************
Processing: OrdinaTrace_20111103_085404.trc
************************************************************
An error occurred reading the trace file.
Error: Failed to read next event.
Done.
|
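A minimal sketch, in Python, of the behaviour being asked for here (ClearTrace's actual code isn't shown in this thread, so this is only an illustration): read events until the reader fails, then keep the rows that were parsed before the failure instead of throwing them away. broken_reader is a made-up stand-in for a trace reader that dies partway through a file.

def load_salvageable(events):
    # Collect rows from an event iterator until it raises, then return both
    # the rows read so far and the error that stopped the read.
    rows, error = [], None
    it = iter(events)
    while True:
        try:
            rows.append(next(it))
        except StopIteration:
            break                    # clean end of file
        except Exception as exc:     # e.g. "Failed to read next event"
            error = exc              # remember why the read stopped...
            break                    # ...but keep what was already parsed
    return rows, error

def broken_reader(good_rows=1000):
    # Hypothetical reader that yields some rows and then fails mid-file,
    # the way the 2 GB trace above fails after roughly 1.4 million rows.
    for i in range(good_rows):
        yield {"row": i}
    raise IOError("Failed to read next event.")

rows, error = load_salvageable(broken_reader())
print("Salvaged %d rows; read stopped because: %s" % (len(rows), error))

With that shape, hitting the error (or pressing Cancel) could still end with the salvaged rows being written to the target tables rather than the whole file being rejected.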
graz
Chief SQLTeam Crack Dealer
4149 Posts |
Posted - 2011-11-03 : 09:16:42
|
I'll put this on the list to consider. I think it is unlikely I'll spend any time working on it. This is the first time I've had this request, and I'm not sure this happens enough to make it worth my time. Can you delete the offending file and put in a wildcard for the file name? That should process all the rest of the files.

=================================================
Creating tomorrow's legacy systems today. One crisis at a time.
|
|
ekelmans
Starting Member
7 Posts |
Posted - 2011-11-03 : 12:08:04
|
Hi Graz,

Just so we're clear on my request: I'm asking you not to throw away the data when you hit a "read next event" error, but to let through the data that has already been read. That's it. Why trash the usable part of a huge trace (or a series of trace files)?

Why not add a message in the load screen saying you hit a "read next event" error and that the rest of this file will be ignored? I do understand why you give this no priority whatsoever, but if you think this error is rare, I must disappoint you. My company runs three datacentres, I use scripted and scheduled traces very often, and I see this error about twice in a batch of 80-110 traces a month.

To minimize the impact of an error I now roll over to a new file every 50 MB, throw out all files that are corrupted, and restart the ClearTrace load. It's a workaround I can live with if you want to spend your time on other things you find more important.

I would still love ClearTrace.

Theo
|
|
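A similarly hedged sketch of the rollover workaround described above, again in Python rather than anything ClearTrace-specific: walk a wildcarded series of 50 MB rollover files, skip any file that fails mid-read, and keep going with the rest of the series. parse_trace is a hypothetical placeholder for whatever actually reads a single .trc file, and the file pattern is only modelled on the name in the log output above.

import glob

def parse_trace(path):
    # Hypothetical placeholder: replace with a real .trc reader.
    # Assume it raises if the file is corrupted partway through.
    raise NotImplementedError(path)

def load_series(pattern):
    # Process every file in a rollover series, noting corrupted files
    # instead of letting one bad file abort the whole batch.
    good, skipped = [], []
    for path in sorted(glob.glob(pattern)):
        try:
            good.extend(parse_trace(path))
        except Exception as exc:
            skipped.append((path, exc))   # record it and move on
    return good, skipped

rows, bad_files = load_series("OrdinaTrace_*.trc")
print("Loaded %d rows, skipped %d corrupted files" % (len(rows), len(bad_files)))

This is essentially the delete-and-wildcard approach from the previous reply, just automated: the bad files are recorded and skipped rather than removed by hand.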
graz
Chief SQLTeam Crack Dealer
4149 Posts |
Posted - 2011-11-03 : 12:50:10
|
I'll look into it. I'll have to see if I can capture that error and skip that file. I don't have a good way out of that loop right now.

=================================================
Creating tomorrow's legacy systems today. One crisis at a time.
|
|