Bronze Super Contributor
Posts: 146
Registered: 04-01-2009

Sync Choked. Where are Transactions coming from?

We gave a couple of remote users access to Campaigns.  This has caused our application to send out hundreds of thousands of transactions to remote users (not just those two, but that is not important to me right now).  The files are mainly history, campaigns, campaign targets, etc.  This is choking our Sync process; it is still processing files from 2/19, over a week ago.

 

Users are getting individual sync files in the range of 150 to 350 MB. The only reason they are getting any files at all is that we moved all the waiting WGLog files out of the directory except for 3 or 4, so that Sync could process those, complete the cycle, pull in inbound files, and send what it has.  It is taking about 30 minutes to process each WGLog file, even though each one is about 10 KB in size and has maybe 3 or 4 simple SQL updates in it.
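
For reference, the contents of those WGLog transactions are nothing exotic: just simple field-level updates along these lines (the tables, fields, and IDs here are made-up examples, only to show how small each transaction is):

    -- Typical scale of a single WGLog transaction: one field on one row.
    UPDATE CONTACT SET WORKPHONE = '5551234567' WHERE CONTACTID = 'CXXXX0000001'
    UPDATE ACCOUNT SET USERFIELD1 = 'Renewal due' WHERE ACCOUNTID = 'AXXXX0000002'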

 

The transactions in the large TEFs being sent to the users are NOT coming from WGLog.   If I looked at all of the files there now, there would be a total of maybe 20,000 transactions. The WGLog directory seems to get only updates from the SLX UI.  It seems that anything the SLX Server decides on its own needs to be sent to remote users is "stored" somewhere else.  What I'd like to do is simply kill the SLX server's decision to send the users these records.  Since they are not in TEFs yet, killing this shouldn't affect the existing and building WGLog files.

 

I just don't know where this is all happening. It *seems* like I should be able to do something like: shut down sync, clear the temp files / memory / ??? of whatever it still thinks it needs to send, reboot the SLX and Sync servers, and turn sync back on.  The WGLogs should then continue to be picked up where they left off, without the Sync Service pulling in this other data.

 

Anyone have any idea how this bit works? Where the heck are those files, or the instructions to send them, stored or coming from?

Thanks

Silver Super Contributor
Posts: 801
Registered: 03-24-2009

Re: Sync Choked. Where are Transactions coming from?

The problem you have is that you have TEF files that have instructions within them. These say "Oh, you want this contact? Right, no probs, I'll just send you the entire account as well - a contact doesn't stand alone." SyncServer knows that in order to have a contact, you need the account, the address, all sub-entities, the attachments, opportunities etc. So, even though the original TEF may be very small (10 KB with 3-4 items), it is SyncServer that takes these and uses internal logic to ensure that everything from the ACCOUNT level all the way down is sent out to the remote.

Take for example the SQL:

    UPDATE CONTACT SET USERFIELD1 = 'X' WHERE CONTACTID = 'MYID'

That runs just fine on a contact that exists. The problem is that it may not on the remote database, and it's pointless sending an update when the row doesn't exist. But subsequent transactions may rely on that data (as it's serialised). That's why SyncServer is smart enough not to do this: it'll send entire entities out for you automatically, especially where leads and marketing are concerned. It's built right into the exe.

Obviously, this is a problem for you now. By allowing access to CAMPAIGNTARGET, it's finding all the records they relate to and sending those as well. You can watch it do this by viewing the QUE files it builds: watch it process a TEF, watch the QUE files grow enormously, then it'll zip those into TEFs to go to the remotes.

As you know the date and time when it started, you really only have three choices: (a) let it run, (b) re-cut all remotes and just delete all TEFs, or (c) delete all TEFs originating on this day/time and let it pick up from there. There is no magic button to this short of just deleting until you are done. You'll find you'll lose data (as data is also coming in at the same time and being processed/queued up). But either way may be quicker and simpler than (a).
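
To put numbers around why this happens: for one contact-based campaign target, the cascade works out to roughly the queries below. This is only a sketch; I'm going from memory on the CAMPAIGNTARGET join column (ENTITYID), so verify the table and column names against your own schema before reading too much into it.

    -- The contact the target points at (assumed ENTITYID linkage).
    SELECT C.*
    FROM CONTACT C
    JOIN CAMPAIGNTARGET CT ON CT.ENTITYID = C.CONTACTID
    WHERE CT.CAMPAIGNID = 'YOUR_CAMPAIGNID'

    -- ...plus the parent account for that contact.
    SELECT A.*
    FROM ACCOUNT A
    JOIN CONTACT C ON C.ACCOUNTID = A.ACCOUNTID
    JOIN CAMPAIGNTARGET CT ON CT.ENTITYID = C.CONTACTID
    WHERE CT.CAMPAIGNID = 'YOUR_CAMPAIGNID'

    -- ...and the same again for ADDRESS, OPPORTUNITY, ATTACHMENT and the other
    -- child tables, for every target in the campaign.

Multiply that across every target in those campaigns and you can see how a 10 KB TEF turns into a 150-350 MB file by the time it reaches the remote.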