I have two questions regarding a job scheduler setup and email source setup.
- When trying to save a new job scheduler task, I am receiving the following error message. Can you please advise how to resolve this and successfully save this scheduler task?
- We use an AWS S3 server connection in our setup. One of our admin-role users set up the Email Source successfully last week. When I tried to complete the same process to test our email box connectivity, I received an error when clicking “Next” after the first properties screen, where I had entered our email box info and login username/password. First, I receive the below error message.
Then, if I click OK on the error message and continue to the second properties screen, I am unable to see any folders in the folder-selection drop-down when trying to select the “Inbox” folder as the read email source location in that mailbox.
Our admin user’s client install is on the same IP as our server, which our security team has whitelisted, but we were told earlier that this shouldn’t be an issue.
Please advise on these two scenarios.
Regarding item #1: When our admin user clicks on the Upgrade Cluster Database option, he is taken to a second screen (pictured below). There is no information pre-populated in the drop-down boxes when either of the radio buttons is selected.
What radio button should he choose here and what should be entered in the drop-down boxes?
It seems that your database connection dialog is not pointing to your repository database when you try to upgrade it. Could we schedule a call to resolve this issue whenever your team is available?
Also, can you please re-enter the database details for the server where your repository resides? We recommend creating a backup of your repository before upgrading it. Please let us know if this works for you.
Our team performed the Upgrade Cluster Database step yesterday, and we are still seeing the same pop-up error message when trying to save a Scheduler Task (pictured below).
Scheduler Task Error:
We were able to reproduce this issue on our end. A bug was filed for the error that occurs when creating a schedule with a Postgres repository, and our developers have resolved it. We will share the download links to update the Integration Server and ReportMiner client.
We were able to update the Integration Server and client using the latest links provided, and the installation was successful. I tested and am no longer seeing the deploymentconfigID error when saving a scheduler task, so we can consider that specific issue resolved.
But I have two questions:
- Does the scheduler time follow the time zone of the server where the Integration Server is installed, Astera’s PST time zone, or the local user’s time zone?
- We may decide to run our provider-level workflow tasks concurrently with each other, but all of them would generate a .csv output file with the exact same naming convention (the same name is specified in the configuration for each provider’s workflow we set up).
Do you think there will be any issues if we set the same daily run time for each provider’s workflow, or should we stagger them? Alternatively, I suppose we could set up one master workflow for each of the two daily output file types and drop all the provider looping workflows into the master one. We may have hundreds of providers, though, so that could get cluttered from a UI perspective. We would like to simplify as much as possible.
Can you please confirm whether all these tasks will be pointing to the same .csv file, or to different .csv files with a similar naming convention?
If they are pointing to the same .csv file, then running all the jobs concurrently will lead to an error that the file is being used by another process. If they point to different delimited destinations, you can run these jobs concurrently.
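To illustrate why a shared destination file forces serialization, here is a minimal, hypothetical Python sketch (not ReportMiner internals; the provider names and file path are made up). Several “provider workflows” append to one CSV file; the lock mimics the effect of running the jobs one at a time instead of letting concurrent writers collide:

```python
import os
import tempfile
import threading

# Hypothetical sketch: one shared daily output file. Without
# serialization, concurrent writers can interleave partial lines or,
# on Windows, fail with "file is being used by another process".
OUTPUT = os.path.join(tempfile.gettempdir(), "daily_output.csv")
open(OUTPUT, "w").close()  # start with an empty file

lock = threading.Lock()

def append_records(provider, records):
    # The lock ensures only one "workflow" writes at a time,
    # the same effect as scheduling the jobs sequentially.
    with lock:
        with open(OUTPUT, "a", encoding="utf-8") as f:
            for rec in records:
                f.write(f"{provider},{rec}\n")

threads = [
    threading.Thread(target=append_records, args=(p, ["rec1", "rec2"]))
    for p in ("providerA", "providerB", "providerC")
]
for t in threads:
    t.start()
for t in threads:
    t.join()

with open(OUTPUT, encoding="utf-8") as f:
    line_count = sum(1 for _ in f)
print(line_count)  # 3 providers x 2 records each = 6 lines
```

With the lock, every record lands intact; without some equivalent coordination, jobs that open the same file at the same time will contend for it.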
We are intending to have each provider’s delimited destination file name map to the exact same naming convention, and we have it set to append the file with new records as the workflow loops through the Source folder.
Since running concurrent provider-level workflows seems problematic in our case, would it be better to create a new master workflow where we embed all the provider-level workflows so that we can set up the scheduler tasks on a workflow that doesn’t overlap the end file being written/appended with any other scheduled jobs?
If we do it that way and have many workflows embedded in a master one, how does ReportMiner choose which order to cycle through those so they don’t run concurrently?
I believe you are processing all the files of a specific provider in a loop, correct?
In that case, each file will be processed one at a time, so it won’t cause a file access issue.
Additionally, are you running flows for different providers concurrently? That would mean each one refers to a different destination file, correct?
If that’s the case, you can schedule flows for different providers concurrently and that should work for you.
For embedding all the provider-level workflows into a master workflow, you can link the workflows so that they execute sequentially (refer to the attached screenshot, SequentialExecution.png). However, if you just drop the workflow tasks into the master workflow without linking them, they will run in parallel, and the order of execution may not be consistent from run to run.
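The linked-versus-unlinked behavior above can be sketched in plain Python (hypothetical stand-ins, not the ReportMiner scheduler API): linked tasks run in a fixed, deterministic order, while unlinked tasks may start in parallel with no guaranteed order.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a provider-level workflow; it just
# records which "workflow" ran by appending its name to a log.
def run_workflow(name, log):
    log.append(name)

provider_workflows = ["provider1", "provider2", "provider3"]

# Linked tasks: the master workflow runs each embedded workflow
# in turn, so the execution order is deterministic.
sequential_log = []
for wf in provider_workflows:
    run_workflow(wf, sequential_log)

# Unlinked tasks: the scheduler is free to start them in parallel,
# so the order they finish in can differ on every run.
parallel_log = []
with ThreadPoolExecutor() as pool:
    for wf in provider_workflows:
        pool.submit(run_workflow, wf, parallel_log)

print(sequential_log)  # always ['provider1', 'provider2', 'provider3']
```

All three workflows complete in both cases; only the sequential (linked) arrangement guarantees the order, which is what matters when every workflow appends to the same output file.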
Please let us know if this answers your questions or we can jump on a call to explain further.
Our output file will be named exactly the same, regardless of which provider workflow it comes from. This is intentional on our end: we want to consolidate all of our multi-provider (source) data into just two standardized daily output files aligned to our data model.
I did set up a master workflow and linked two provider source workflows (which already contain the file loop within them) so they run in sequential order, and it looks like that worked, so I think we will use that method.
We can consider this issue resolved.