Astera Integration Server Service failing

Hi,
I have noticed that the Astera Integration Server service has failed on my server and stopped. I didn't start getting this error until I scheduled a job that runs more than once (daily), and the error itself appears to be related to the scheduled jobs.
The error I get in the Application Log is:

Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at System.Data.SqlServerCe.Accessor.get_Value()
at System.Data.SqlServerCe.SqlCeDataReader.FetchValue(Int32 index)
at System.Data.SqlServerCe.SqlCeDataReader.IsDBNull(Int32 ordinal)
at System.Data.SqlServerCe.SqlCeDataReader.GetValue(Int32 ordinal)
at System.Data.SqlServerCe.SqlCeDataReader.get_Item(Int32 index)
at Astera.Persistence.c.a(CPObject A_0, IDataReader A_1)
at Astera.Persistence.c.a(CPPrimaryKey A_0, IDbConnection A_1, IDbTransaction A_2)
at Astera.Persistence.c.a(CPPrimaryKey A_0, IDbConnection A_1)
at Astera.Persistence.ObjectPersister`1.a(Int64 A_0, IDbConnection A_1, IDbTransaction A_2)
at Astera.Persistence.ObjectPersister`1.a(Int64 A_0, IDbConnection A_1)
at Astera.Persistence.ObjectPersister`1.Load(Int64 id, IDbConnection connection)
at Astera.Persistence.ObjectPersister`1.Load(Int64 id)
at Astera.Scheduler.ScheduledJobPersister.LoadJob(Int64 id)
at Astera.Scheduler.Scheduler2.a(IScheduledJob A_0)
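
In case it helps narrow things down: the trace shows the scheduler reading job records through a SqlCeDataReader, and since SqlCeConnection (like other ADO.NET connections) is not thread-safe, I wonder whether overlapping schedule triggers sharing persistence state could corrupt memory this way. A minimal sketch of the serialized-access pattern I mean, where every name is my own illustration rather than Astera's actual code:

// Hypothetical sketch only: serializing reads of job records so that two
// overlapping schedule triggers never share one SqlCeConnection/reader.
// SqlCeConnection, like other ADO.NET connections, is not thread-safe.
using System.Data.SqlServerCe;

class JobStore
{
    private readonly object _gate = new object();
    private readonly string _connectionString;

    public JobStore(string connectionString)
    {
        _connectionString = connectionString;
    }

    public void LoadJob(long id)
    {
        lock (_gate)    // one schedule trigger reads at a time
        {
            using (var connection = new SqlCeConnection(_connectionString))
            using (var command = connection.CreateCommand())
            {
                connection.Open();
                command.CommandText = "SELECT * FROM Jobs WHERE Id = @id";
                command.Parameters.AddWithValue("@id", id);
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // materialize the scheduled job here
                    }
                }
            }
        }
    }
}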

The Astera.TransferService was using 2,337,028K of memory and climbing. The service is still running. The scheduled tasks were just three simple tasks doing the same thing at different times.

When I tried to see the results from the jobs, I got an error on my machine stating that I had accessed a protected area of memory, and the client did not switch to the monitor screen until I tried again.

I was able to see the job results when I connected to the other server (server2), but I got an error when I chose the schedule. I tried again on the first server (server1) and got the same error. Server2's service is using 1,992,232K of memory and climbing.

Hi,

That is an awful lot of memory. Does the memory ever decrease after the job has finished?

I logged into both servers to check memory just now, and the service has crashed on both servers… so no memory measurements. They crashed with the same error I sent earlier, ending at Astera.Scheduler.ScheduledJobPersister.LoadJob(Int64 id) / Astera.Scheduler.Scheduler2.a(IScheduledJob A_0).
The service that I set to restart did not restart, so they both stopped. Also, the server that is crashing is the new installation of version 4.x (4.0.93.1).

Hi,
Can you tell me what version of SQL CE you have installed? Specifically, what is the service pack?
If this is a 64-bit machine, try this Windows hotfix:

http://support.microsoft.com/kb/970269

Let me know if you’re able to install it and if this does anything to the behavior here.

The servers do not have SQLCE installed (per Control Panel), and the first step in the hotfix instructions is to uninstall SP1 for SQLCE. Is there another place I should look for the SQLCE installation? Can I assume that the Astera Integration Server has some SQLCE functionality built in, and that is what we are trying to resolve?

Yes. Centerprise relies on SQLCE to store its server logs and job schedules.

Try installing SQL CE 3.5 on that machine. SQL CE 3.5 is included with the installation of Astera Integration Server.
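
Because it ships embedded rather than as a Control Panel entry, you can confirm which SQL CE bits the server is actually loading by reading the file version of the System.Data.SqlServerCe.dll in the Centerprise server directory. A quick sketch (the path below is an assumption; point it at your actual install folder and compare the reported version against the one listed in the KB article):

// Sketch: report the file version of the embedded SQL CE assembly.
// The path is a placeholder; substitute your Centerprise server directory.
using System;
using System.Diagnostics;

class SqlCeVersionCheck
{
    static void Main()
    {
        string path = @"C:\Program Files\Astera\Centerprise Server\System.Data.SqlServerCe.dll";
        FileVersionInfo info = FileVersionInfo.GetVersionInfo(path);
        Console.WriteLine("SQL CE file version: " + info.FileVersion);
    }
}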

Since you are using build 4.0.93.1, it ships with versions of these assemblies that are about a year older than what is described in the hotfix. Please install the hotfix, then replace the sqlce DLLs in the Centerprise server directory with the ones from the hotfix or later. I would expect SP2 to include whatever fix the hotfix provided.

Try this out and let me know if this changes the behavior. If it does, we’ll update the build to include the later version of SQL CE (version 5 of Centerprise already uses this one).

I installed SQLCE 3.5 SP1 32-bit on our DEV server per the instructions in the hotfix, then installed the hotfix itself, which installed the 64-bit version. I backed up the DLLs that were in both the client and server folders and copied in the newly installed ones from the hotfix directory. When I started the Astera Transfer Service, it kicked off the jobs that were 'past due.' I then rescheduled them to run again at various intervals. The memory usage is continuing to climb even with nothing running; currently it is at 69,444K and rising. Time will tell if it increases enough to make the service fail.

I restarted the server at the suggestion of our server engineer. When I logged back in, the Astera.TransferService.exe was not started even though it is configured to start automatically. I started the service; it immediately took 15,000K of memory and has grown to 31,616K, still rising as I write this email. No jobs have run on the server since it was rebooted. The next jobs are scheduled for tomorrow.

Let me know if it climbs above 100,000K and doesn't release memory after the jobs stop, or if you get the message from the beginning of this ticket at any time. Does this server (the machine itself, VM or not) get reset regularly?
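
If it helps, rather than watching Task Manager you can log the service's working set on an interval so we can see whether memory is released after jobs finish. A small sketch using standard .NET APIs (the process name is taken from your earlier message):

// Sketch: poll the Astera.TransferService working set once a minute and
// append it to a log, so we can see whether memory drops after jobs finish.
using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

class MemoryWatch
{
    static void Main()
    {
        while (true)
        {
            foreach (Process p in Process.GetProcessesByName("Astera.TransferService"))
            {
                p.Refresh();
                string line = string.Format("{0:u}  {1:N0} K",
                    DateTime.Now, p.WorkingSet64 / 1024);
                File.AppendAllText("transfer-service-memory.log", line + Environment.NewLine);
            }
            Thread.Sleep(TimeSpan.FromMinutes(1));
        }
    }
}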

It is well over that. It is at 397,616K right now. There are no jobs currently running. The server does not get restarted regularly. The only time it would get restarted would be if patches or updates require a restart.

Hi,

Try installing the same version that resided on your 2003 server.

Let me know if you still see the same behavior.

I have not installed .83 here. I would like you to install that version there and test it on one of your VMs.

Yesterday the memory usage for .93.1 was approaching 4GB. At 12:30:13 AM CST today, the service crashed. This is the error that was produced; it is different from what we have seen before.

Exception of type 'System.OutOfMemoryException' was thrown.
at DeployLX.Licensing.v4.NoLicenseException.T(SecureLicenseContext ?)
at DeployLX.Licensing.v4.NoLicenseException..ctor(SecureLicenseContext context)
at DeployLX.Licensing.v4.SecureLicenseManager.T(Object ?, Type ?, LicenseValidationRequestInfo ?, LicenseContext ?, StackTrace ?, U ?, Object[] ?)
at DeployLX.Licensing.v4.SecureLicenseManager.Validate(Object instance, Type type, LicenseValidationRequestInfo requestInfo)
at Astera.Licensing.LicenseManager.ValidateLicense(Type type, String serial)
at Astera.Transfer.Server.TransferService.CheckLicense()
at Astera.Transfer.Server.TransferServerBase.OnServerLoopIteration()
at Astera.Transfer.Server.TransferServer.OnServerLoopIteration()
at Astera.Transfer.Server.TransferServerBase.RunServer()
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
at System.Threading.ThreadHelper.ThreadStart()

Something to note: as the service chewed up more and more memory, I saw other services start to fail on the server as well. This is a big deal, and I will not be restarting the Astera service on this server until we need to test a new build.

Also, I had deactivated the license on the client and needed to reactivate it on the server. When I start the client, I don't get the registration wizard; instead, on startup I get this:

Can you tell me how to get the client reactivated?

Hi,
This looks like a known issue in 4.x: deactivation deletes the old license file, but the Server License Manager does not put it back. Run the installer again and select "repair"; this will restore the original license file. We will try to reproduce your environment more closely this time.

We recreated your environment on VMware 4 and have been running your job every 15 minutes without incident. Memory has climbed maybe 10 MB in about 2 days, so I can say that I am not seeing your behavior.

However, I'd like to go back to that out-of-memory exception from earlier. Perhaps that code is not just failing for lack of memory; perhaps it is the culprit as well. So, I'd like you to try the build I have emailed you, which has that piece taken out. This is not an official release, so please do not install it as anything other than a test for this issue.
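
To illustrate the suspicion (a hypothetical sketch only, not our actual code): if the license were re-validated on every pass of the server loop, and each validation allocated state that is never released, the working set would climb steadily even with no jobs running. Caching the result is the general shape of the fix:

// Hypothetical sketch of the suspected pattern. All names are
// illustrative, not Astera's actual implementation.
class TransferServerLoop
{
    private bool _licenseChecked;          // cache the result instead of
                                           // re-validating every iteration
    void OnServerLoopIteration()
    {
        if (!_licenseChecked)
        {
            CheckLicense();                // allocates validation state; if this
            _licenseChecked = true;        // ran every iteration and the state
        }                                  // were never released, memory would climb
        // ... dispatch scheduled jobs ...
    }

    void CheckLicense()
    {
        // calls into the licensing component; details omitted
    }
}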

Run this for a bit and let me know.

I was able to install the build you sent me. It is not climbing as drastically as the other version; it is currently hovering around 26MB of memory, running the job every 15 minutes since 10 AM CST. I did notice, though, that this build is 32-bit, and I believe the other build was 64-bit. That could be a contributing factor as well.

Hi, that is what we are seeing here as well. I will give you another build where the only difference is the 32- vs. 64-bit flag. If, after this build, you again see the spike in memory usage, we'll know we have found our culprit.

Hi,
I have emailed you a build.

It is not a 64-bit build. It combines 32-bit mode with the piece of code we took out of the last build put back in, to see what happens.

If we're right, you should see the same behavior of memory climbing and climbing. If that is the case, we will know what to do and will send you a new build that resolves the issue.

Installed it (over the top) and it is climbing as expected. We are at 60MB after about 25 minutes.

Emailed you a new build. This build should take care of the memory problem.

Please download it and confirm that the issue is resolved. We will not be sending a 64-bit build; that was the problem in the other ticket. A 64-bit build would break existing dataflows using ODBC or Access connections, since the Jet provider used for Access has no 64-bit version. The 32-bit build running in WOW64 mode on your 64-bit machines should perform just fine.

Can you send me a change log for this build compared to 4.0.93.x (the most recent 4.x version) that we started testing with? Also, how is this one different from what we are running on the other servers for the Access issues?

Build 94: Forced execution of Centerprise Server in 32-bit mode. This preserves the compatibility of dataflows running on all platforms.

Builds 95–96: Intermediate versions used to track down the memory leak.

Build 97: Memory leak fixed.
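
For reference, if you ever want to confirm which mode the service process is actually running in on your 64-bit machines, a standard .NET check (nothing Astera-specific) is:

// Sketch: confirm process bitness. IntPtr.Size is 4 in a 32-bit (WOW64)
// process and 8 in a native 64-bit process; works on any .NET version.
using System;

class BitnessCheck
{
    static void Main()
    {
        Console.WriteLine(IntPtr.Size == 4
            ? "Running as 32-bit (WOW64 on a 64-bit OS)"
            : "Running as 64-bit");
    }
}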