Scheduler Task and Email Source Issues

Hi Team,
I have two questions regarding our job scheduler setup and our email source setup.

  1. When trying to save a new job scheduler task, I am receiving the following error message. Can you please advise how to resolve this and successfully save this scheduler task?
  2. We use an AWS S3 server connection in our setup. One of our admin-role users set up the Email Source successfully last week. When I tried to complete the same process to test our email box connectivity, I am receiving an error when clicking “Next” after the first properties screen, where I entered our email box info and login username/password. First, I receive the error message below.

    Then, if I click OK on the error message and get to the second properties screen, I am unable to see any folders in the folder selection drop-down when trying to select the “Inbox” folder as the read email source location in that mailbox.
    Our admin user’s client install is on the same IP as our server, which our security team has whitelisted, but we were told earlier that this shouldn’t be an issue.
    Please advise on these two scenarios.
    Thanks.
  1. For this, can you please run the ‘Upgrade Cluster Database’ command on your database repository (screenshot attached) and see if this error goes away?
  2. This error generally occurs when the password is incorrect, so the email account cannot be authenticated. Can you please double-check that your password is correct? Also, please take a look at this article and see if any of its workarounds resolve your issue: Authentication fails when you use an IMAP server in Outlook 2016
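If it helps to rule out the client itself, below is a minimal standalone login test using Python’s standard imaplib. The host and account values are placeholders for your own mailbox settings, and it assumes the server accepts IMAP over SSL (the default port 993):

```python
# Minimal IMAP login check: performs the same authentication step the
# Email Source properties screen does, outside of the client.
import imaplib

HOST = "imap.example.com"    # placeholder: your mail server
USER = "user@example.com"    # placeholder: the mailbox login
PASSWORD = "secret"          # placeholder: the password entered in the client

try:
    conn = imaplib.IMAP4_SSL(HOST)   # connects over SSL (port 993 by default)
    conn.login(USER, PASSWORD)       # raises imaplib.IMAP4.error on bad credentials
    print("login OK")
    conn.logout()
except imaplib.IMAP4.error as exc:
    print(f"authentication failed: {exc}")
```

If this login fails as well, the problem lies with the credentials or the mail server rather than with the client.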
    Please let me know if you were able to get past both issues.
    Thanks.

Hi,

  1. I am not an admin user, and therefore the “Upgrade Cluster Database” command is greyed out in my server’s management settings. Would having an admin user (who did not create these .crpj, .rmd, and workflow files) run that command still work? If so, I will have our admin user try this for me.
  2. I have double-checked that the password I entered is exactly the same as the one used by our admin user, who was able to connect successfully. I have also reviewed the workarounds in the article you sent: our password does not contain any Unicode characters, we cannot use POP3 as an alternative connection due to restrictions from our security team, and I would prefer not to downgrade Outlook just to work around this issue.
    Let me know if you have any other suggestions.
    Thanks.

Hi,

  1. Yes, I expect the database repository upgrade to resolve this issue. Once the admin user on your end has run it, please let me know if the error goes away.

  2. Thank you for checking this. One possibility is that you do not have the required permissions to access the folders of this email account. Can you please confirm this with your network team?
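If you would like to check this outside the client, the sketch below (standard-library Python, with placeholder connection values) lists the folders the account can actually see over IMAP. If “Inbox” does not appear in the output, the account lacks folder permissions and the empty drop-down is expected:

```python
# List every folder visible to this account; connection values are placeholders.
import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com")
conn.login("user@example.com", "secret")

status, folders = conn.list()    # IMAP LIST: all folders the account can see
for raw in folders or []:
    print(raw.decode())          # e.g. (\HasNoChildren) "/" "INBOX"
conn.logout()
```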

Thanks.

Hi Team,
Regarding item #1: When our admin user clicks on the Upgrade Cluster Database option, he is taken to a second screen (pictured below). There is no information pre-populated in the drop-down boxes when either of the radio buttons is selected.


What radio button should he choose here and what should be entered in the drop-down boxes?
Thanks.

It seems that your database connection dialog is not pointing to your repository database when you try to upgrade it. Would it be possible to schedule a call to resolve this issue whenever your team is available?
In the meantime, can you please re-enter the connection details for the database where your repository resides? We also recommend creating a backup of your repository before upgrading it. Please let us know if this works for you.

Hi,
Our team performed the Upgrade Cluster Database step yesterday, and we are still seeing the same pop-up error message when trying to save a Scheduler Task (pictured below).


Scheduler Task Error:

Hi,
We were able to reproduce this issue on our end. A bug was filed for the error that occurs when creating a schedule against a Postgres repository, and our developers have now resolved it. We will share the download links so you can update the Integration Server and the ReportMiner client.

Hi,
We were able to update the Integration Server and client using the latest links provided, and the installation was successful. I tested and am no longer seeing the deploymentconfigID error when trying to save a scheduler task, so we can consider that specific issue resolved.
But I have two questions:

  1. Does the scheduler use the time zone of the server where the Integration Server is installed, Astera’s PST time zone, or the local user’s time zone?
  2. We may decide to run our provider-level workflow tasks concurrently, but they would all generate a .csv output file with the exact same naming convention (the convention is specified identically in the configuration of each provider’s workflow we set up).

Do you think there will be any issues if we set the same daily run time for each provider’s workflow, or should we stagger them? Alternatively, we could set up two master workflows, one for each of the two daily output file types, and drop all the provider looping workflows into them; we may have hundreds of providers, though, so that could get a little cluttered from a UI perspective. We would like to simplify as much as possible.

Can you please confirm whether all these tasks will point to the same .csv file or to different .csv files with a similar naming convention?
If they point to the same .csv file, then running all the jobs concurrently will produce an error that the file is being used by another process. If they point to different delimited destinations, you can run the jobs concurrently.
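To illustrate the file contention, here is a minimal sketch (Unix-only, using Python’s fcntl; the file name is hypothetical, and this is not how the scheduler works internally). The first job holds an exclusive lock on the shared output file, so a second job that overlaps it cannot acquire access:

```python
# Two "jobs" appending to the same destination file: the second one fails
# while the first still holds the file, mirroring the
# "file is being used by another process" error.
import fcntl

first = open("daily_output.csv", "a")                  # hypothetical shared output
fcntl.flock(first, fcntl.LOCK_EX | fcntl.LOCK_NB)      # job 1 takes an exclusive lock

second = open("daily_output.csv", "a")
try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB) # job 2 overlaps job 1
except BlockingIOError:
    print("file is in use by another job: stagger or sequence the jobs")
finally:
    fcntl.flock(first, fcntl.LOCK_UN)
    first.close()
    second.close()
```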

We intend each provider’s delimited destination file name to map to the exact same naming convention, and we have it set to append new records to the file as the workflow loops through the Source folder.
Since running concurrent provider-level workflows seems problematic in our case, would it be better to create a new master workflow where we embed all the provider-level workflows, so that we can set up the scheduler tasks on a workflow whose end file is not being written/appended by any other scheduled jobs?
If we do it that way and have many workflows embedded in a master one, how does ReportMiner choose the order in which to cycle through them so they don’t run concurrently?

Hi,
I believe you are processing all the files for a specific provider in a loop, right?
In that case, each file is processed one at a time, so the loop itself won’t cause a file-access issue.
Additionally, are you running the flows for different providers concurrently? If so, does each of them refer to a different destination file?

If that’s the case, you can schedule the flows for different providers concurrently, and that should work for you.

For embedding all the provider-level workflows into a master workflow, you can link the workflows so that they execute sequentially (refer to the attached screenshot, SequentialExecution.png). However, if you just drop the workflow tasks into the master workflow without linking them, they will run in parallel, and the sequence of their execution might not be consistent from run to run.
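As a rough illustration of the difference (a sketch with hypothetical workflow names, not ReportMiner’s actual execution engine):

```python
# Linked vs. unlinked workflow tasks inside a master workflow.
from concurrent.futures import ThreadPoolExecutor

def run_workflow(name: str) -> None:
    print(f"running {name}")   # stand-in for executing one provider workflow

providers = ["ProviderA", "ProviderB", "ProviderC"]

# Linked (sequential): each workflow starts only after the previous one
# finishes, so a shared destination file is never written by two at once.
for wf in providers:
    run_workflow(wf)

# Unlinked (parallel): all workflows start together, and the execution
# order can differ on every run, so writes to one shared file can collide.
with ThreadPoolExecutor() as pool:
    pool.map(run_workflow, providers)
```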


Please let us know if this answers your questions or we can jump on a call to explain further.
Thank you.

Our output file will have the exact same name regardless of which provider workflow it comes from. This is intentional on our end: we consolidate all of our multi-provider (source) data into just two standardized daily output files aligned to our data model.

I did set up a master workflow and linked two provider source workflows (each of which already contains the file loop) so they would run in sequential order, and it looks like that worked, so I think we will use that method.

We can consider this issue resolved.

Thanks.