
Thursday, March 8, 2012

custom log provider - not recognised.

Hi All,

I have a tricky one for you...

1. I create a custom log provider on machine A.

2. I deploy the signed assembly to the DTS\90\LogProviders directory & the GAC on both machine A & machine B.

3. I create a new package on machine A & go to enable logging for this new custom provider: I find it in the drop-down list of providers, and I can add it successfully.

4. I create a new package on machine B & go to enable logging for this new custom provider: I find it in the drop-down list of providers, but I can't add it. I get the following error:

"...failed to create log provider...the log provider type "_" specified for log provider "_" is not recognized as a valid log provider type. This occurs when an attempt is made to create a log provider for an unknown log provider type. Verify the spelling in the log provider type name. (Package)"

Any ideas anyone?

Tamim.

All I can suggest is to double-check the version you have in the LogProviders folder and the GAC, and perhaps clean out any duplicate/old versions. Restart VS as well if you make any changes. It seems simple, but I know I have confused myself when developing stuff that has been redeployed after some changes.

|||

Many thanks for responding, Darren. Both your suggestions are sound, but unfortunately they aren't applicable in this case: in order to rule out any problem introduced by my own work, I tried the same with HtmlLogProviderCS - one of Microsoft's own custom log providers, included in their samples - and I recreated the problem exactly. I didn't change the version, and it wouldn't have existed on my machines beforehand, so it's definitely something else. In essence, there is a problem in deploying the signed assembly of a custom log provider onto any machine on which the library was NOT built. (Depending on your level of curiosity, perhaps you might like to try the same with HtmlLogProviderCS yourself?) I might contact Microsoft directly on this one, as I fear it may be a bug.

Thanks again for your input. I'm enjoying the book (am returning to it again & again!), and will let you know if I reach a definite conclusion.

Cheers,

Tamim.

|||

I figured it out!! Basically the error message means that it couldn't find the custom object's DLL in the GAC, and the problem was that I dragged & dropped the library into the GAC across servers - i.e. from a location on server A to the GAC on server B. That looks like it has worked, but in actual fact it hasn't: when I remoted onto server B & opened the GAC up locally, it wasn't there. The solution therefore was to drag & drop into the GAC on server B, from a location on server B. But now for a tantalising follow-up...

...my custom log provider opens up a client-side channel to a remote server and sends messages into it. When I run the package from the package store it works, but when I run it from within a job, the job succeeds and throws no errors or warnings, yet the remoting bit doesn't work. I had added my remoting client-side configuration information into dtexecui.exe.config, but I'm thinking that when I run it from a job, I have to add the same config info into another config file? Any ideas anyone?

Hope the partial step forward is of use to someone...
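For anyone who prefers the command line to the GAC shell folder, the local install described above can be done with gacutil.exe instead of drag & drop, which also makes the failure mode visible. The server names, paths, and assembly name below are illustrative, not the poster's actual values:

```bat
rem Run these ON server B - not from server A against a network path.
rem 1. Copy the assembly to a local folder first.
copy \\serverA\deploy\MyLogProvider.dll C:\temp\MyLogProvider.dll

rem 2. Install it into the local GAC (gacutil ships with the .NET SDK).
gacutil /i C:\temp\MyLogProvider.dll

rem 3. Verify that it actually landed in the local GAC.
gacutil /l MyLogProvider
```

Unlike the Explorer drag & drop, gacutil reports an error if the install fails.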

|||

If you have configuration info, then you need some more files, as the UI host is not the same one used when scheduled.

Try dtexec.exe.config and dtshost.exe.config, also in C:\Program Files\Microsoft SQL Server\90\DTS\binn
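For reference, a client-side remoting section in one of those .config files looks roughly like the sketch below. The type name, assembly name, and URL are placeholders for illustration, not values from this thread:

```xml
<configuration>
  <system.runtime.remoting>
    <application>
      <client>
        <!-- Client-side proxy registration for the remote endpoint. -->
        <wellknown type="MyCompany.Logging.RemoteSink, MyCompany.Logging"
                   url="tcp://logserver:8085/RemoteSink.rem" />
      </client>
      <channels>
        <!-- port="0" lets the client pick an ephemeral port. -->
        <channel ref="tcp" port="0" />
      </channels>
    </application>
  </system.runtime.remoting>
</configuration>
```

The same section would need to appear in each host's config file (dtexec.exe.config, dtshost.exe.config) for the remoting client to be configured under that host.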

Sunday, February 19, 2012

Custom Assembly Problems

Hi All,
I am having a problem with a shared assembly I created. I created a
dll and registered it in the GAC of my development machine. Then I
added a new key into my system registry so I could add it into a
project easily when in Visual Studio. It works well when I reference
it in my ASP.NET/VB.NET applications.
In reporting services, I added the reference to the dll in my report
just fine. I call a function in the custom report code, which
references my dll. The function returns a boolean which, within an
"IIf" statement, determines the background color of certain cells. It
does the job, builds without errors or warnings, and previews in
Visual Studio just the way it should. However, after deploying to my
production server, it seems like it is basically ignoring my call to
the shared assembly. I get no errors or warnings, but the background
color of the mentioned cells never gets set.
Is there another reference I am missing? Any other suggestions?
Thanks in advance!

|||

Some basic stuff which you may have already tried:
1. Extend your assembly code to output a "hello world" string. Do this by using a variable, setting its value through a parameter, and using error handling to output any errors, instead of the parameter string, in case of errors.
2. Create a blank report and use the function call to display the string in a text box.
3. Deploy your assembly to the production GAC.
4. Deploy your report to the production GAC.

If that works, use the same function call to perform bits of your original code and return a "done" string, or the error string, after the function is completed.

That should help you in finding out where you are getting issues.
Best of luck
Cheers
Shai

|||

How do you deploy a report to the GAC?

|||

On Nov 30, 6:48 am, John <john.n.b...@.gmail.com> wrote:
> How do you deploy a report to the GAC?
Sorry, that was a typo. I meant deploy the report to the production environment.
Cheers
Shai
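One more cause worth checking for the original symptom: on the report server, custom assemblies run under code access security, and a permission failure inside report code can be swallowed silently, which matches "no errors, but the background color never gets set". The usual fix is to grant the assembly FullTrust in rssrvpolicy.config. The code group name, assembly name, and path below are placeholders for illustration:

```xml
<!-- Add inside the existing CodeGroup hierarchy in rssrvpolicy.config. -->
<CodeGroup class="UnionCodeGroup"
           version="1"
           PermissionSetName="FullTrust"
           Name="MySharedAssemblyCodeGroup"
           Description="Grants the custom report assembly full trust">
  <IMembershipCondition class="UrlMembershipCondition"
                        version="1"
                        Url="C:\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportServer\bin\MySharedAssembly.dll" />
</CodeGroup>
```

Restart the Report Server service after editing the policy file so the change takes effect.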

Tuesday, February 14, 2012

cursors are bad

These guys I work with have some SQL scripts they run overnight, and they bog down the server so the machine is all gummed up in the morning.

Well, I finally looked at this processing, and the culprit is cursors. And cursors within cursors. I would like to get some opinions about which approach would be more processor-efficient, so I can send my boss a link to this thread.

Using cursors to pull records and update them.

vs

Creating a script in a scripting language that pulls the records through ADO, loops through them, and performs updates as necessary using UPDATE statements and the like.

Be nice. I have to work with these guys.

|||

In my experience, T-SQL cursors are generally a bad idea. Most books I have read use the first page of the cursors chapter to say how bad cursors can be, then go on to describe how to build one.

To show people how badly cursors impact performance, I have found the best way is to bring up Perfmon with a pretty line showing CPU usage, then run the cursor and point out the big spike. Then ask them to imagine what happens when 4 or 5 more people run that cursor at the same time...

You should also mention that multiple application servers can be added on, but you will always be cursed with a single database machine. If the database machine is resource bound, no amount of extra application servers can help you.

As for how ADO stacks up, you may have to stage a race between a couple of processes. One T-SQL cursor based, and the other ADO based, and see who wins.

Hope this helps.

|||

And nested cursors are extremely bad...

They must open the first,
then open the next, Fetch and then close
then open the next, Fetch and then close
then open the next, Fetch and then close
then open the next, Fetch and then close
then open the next, Fetch and then close
then open the next, Fetch and then close
then open the next, Fetch and then close
then open the next, Fetch and then close
then open the next, Fetch and then close
then open the next, Fetch and then close
then open the next, Fetch and then close

Put a dress on a cursor and you know what you have?

Anyway... you could just do it with a set-based method.

Got a sample of some code?

|||

At least in my experience, set-based solutions are at least one order of magnitude (ten times) faster than cursors in Microsoft SQL. They are often much faster than that, but the 10x benchmark is easy to hit, so that's all that I claim up front.

In every form of data processing, the closer you can keep the processing to the hardware itself, the faster the process runs. Cache is faster than disk, set based solutions on the SQL Server are faster than cursors, which can be faster than ADO under the right circumstances.

Rather than argue for ages or produce/quote meaningless benchmarks, I'd suggest you try it. Take a very simple process that affects many rows and rewrite it with ADO, set-based, or both. Compare doing the same task all three ways.

I'm confident that the set based solution will comfortably outperform any other way of solving the problem. ADO will probably come next, depending on the amount of network I/O it has to do. I'd expect the cursor to come in dead last.

I'd really like to hear your results when you get them!

-PatP

|||

thanks guys.

I have what looks like a few hundred .sql files. I have done set-based querying before. I am not as tough as you guys, but I am studying every day.

|||

Can you post or attach a small one so we can take a look?

|||

if it was my code I would, but I probably shouldn't, because I gave the CIO a link to the thread.

95% chance they won't do anything with the advice. There is never any time or resources for anything here. They brought me in to fix things, and they tied my hands in the first week I was here by telling me I could not change any of the existing stuff and everything had to be a workaround.

I have told myself I have to stick it out for a year and then I would evaluate the situation. It's almost been 6 months.

|||

OK...

What I would do.

From a client, use Profiler to record some statistics.

Then "clone" an existing cursor process with a set-based solution. Give us some "examples" that are close to what you have so we can help.

Make sure the results produced are identical...write them to work tables for proof.

Publish your results.

If the results are anywhere near what I expect, it should be clear that it should be rewritten... it can be done a little at a time, and not everything has to be done... just find the biggest offenders...

How big is the database?

I'd dump and restore a copy of production so you can have your own dev environment...

Anyway...good luck

CIO

Constantly Ignorant Officer

|||

Check your calendar. It's probably only been two weeks, and it just SEEMS like six months.

How are you supposed to fix anything if you can't change anything?

Do you take your car into the shop and say, "Make it run better, but don't touch the engine?"

Seems like the only thing you could recommend would be a massive upgrade to the hardware. See how they like the price tag on that.

|||

Blindman,

Trust me, I know. They have already talked hardware. At my last company, a few years ago when I was just a pup of a developer, they beefed up the DB server hardware for a crappy application they were running and saw little or no gain because the code was so bad. They finally revamped the app, and it wasn't until then that things got better.

I am thinking about switching auto shops. I am going to reproduce things on my local machine like Brett said, when I get a chance. Maybe this weekend. I have app development to do today.
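|||

For anyone wanting to stage the race PatP described without touching production, here is a minimal, self-contained sketch of the comparison using Python's sqlite3 module. The table, column names, and row count are made up for illustration; a real T-SQL test on the actual server would show a much bigger gap, but the shape of the race is the same:

```python
# Compare row-by-row ("cursor-style") updates with a single set-based UPDATE.
import sqlite3
import time

def setup(n=20000):
    """Build an in-memory table of n order rows with total not yet computed."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER, total INTEGER)")
    con.executemany("INSERT INTO orders VALUES (?, ?, 0)",
                    ((i, i % 10) for i in range(n)))
    con.commit()
    return con

# Cursor-style: fetch every row, then issue one UPDATE per row.
con = setup()
t0 = time.perf_counter()
for row_id, qty in con.execute("SELECT id, qty FROM orders").fetchall():
    con.execute("UPDATE orders SET total = ? WHERE id = ?", (qty * 5, row_id))
con.commit()
cursor_secs = time.perf_counter() - t0

# Set-based: one statement does the same work in a single pass.
con2 = setup()
t0 = time.perf_counter()
con2.execute("UPDATE orders SET total = qty * 5")
con2.commit()
set_secs = time.perf_counter() - t0

print(f"row-by-row: {cursor_secs:.3f}s  set-based: {set_secs:.3f}s")
```

Writing both results to work tables (or comparing the two SELECTs directly, as here) proves the rewrite is equivalent before anyone argues about the timings.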