Managing Alfred Inflow

Managing jobs

The import of the files and their associated metadata is organized in jobs, which can be scheduled or executed manually.
On the main page, there is an overview of all jobs that are configured.

Alfred Inflow Dashboard

For each job the dashboard shows:

  • the name of the job
  • a summary of the previous run: time, number of documents and number of failed documents
  • the current status

Adding a job

To add a job, click on the button in the top left corner of the job table. You will be redirected to a form that lets you configure the new job.

Creating a job

Some job parameters explained:

  • Extension
    The extension of the files that should be used for this job. E.g. *.pdf indicates that only PDF files should be processed by the metadata processor. At the moment, only one extension can be entered.
  • Destination server
    The Alfresco server to which the files should be uploaded. See ‘Managing destinations’ section for more info.
  • Destination path
    The path of the folder in Alfresco to which the files should be uploaded.
  • Skip content upload
    Checkbox to skip the uploading of content.
    This makes sense when the contentUrl that should be saved in Alfresco is already provided in the metadata attached to the files (e.g. a migration import). This setting is deprecated and no longer used.
  • Schedules
    This setting can be used to trigger a job periodically or at a given point in time. Any cron expression can be set to trigger a job run; see the examples after this list. Cron expressions can easily be generated with handy online tools.
  • Command before
    The command specified in this box will be executed before the job starts processing documents. You can use this to run scripts, for example.
  • Metadata processor
    a.k.a. parser. Specify which metadata processor (parser) will be used for this job. Xenit provides some metadata processors out of the box, but in most cases custom processors are used. The following parsers are available out of the box:
    • CsvMetadataLoader
    • EmptyMetadataAction
    • FileSystemMetadataAction
  • Metadata parameters
    A set of metadata that will be attached to all documents uploaded in this job.
  • Command after
    Similar to the command before, you can configure a command to be executed after the documents are processed.
  • Allow residual properties
    Indicates whether it is allowed to add properties that are not recognized by the document model installed on the destination Alfresco. If this is disabled and an unknown property is uploaded, an exception will be thrown. This setting is deprecated.
  • Move before processing to path
    Path to which files should be moved before the documents are processed.
  • Move loaded files to path
    If filled in, files that are successfully uploaded to Alfresco will be moved to this folder.
  • Move not loaded files to path
    If filled in, files that failed to upload to Alfresco will be moved to this folder.
  • Move unprocessed files to path
    Files that are not uploaded to Alfresco but are left behind in the upload folder (e.g. files that provide metadata) can also be moved to a directory with this setting.
    Note: this has no effect on files that are moved by the parser itself.
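
The Schedules setting accepts cron expressions. A few illustrative examples are shown below; the exact syntax accepted by your Inflow version may differ (Java-based schedulers commonly use Quartz-style expressions with a leading seconds field, while classic cron uses five fields), so verify the expression with a cron tool before saving the schedule.

    0 0 2 * * ?       every day at 02:00 (Quartz-style, leading seconds field)
    0 */15 * * * ?    every 15 minutes (Quartz-style)
    0 2 * * 1-5       every weekday at 02:00 (classic five-field cron)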

Editing a job

To edit a job, click on the pencil icon on the left of the job. The same form as for creating a job will appear, and all the settings can be updated.

Deleting a job

To delete a job, click on the pencil icon on the left of the job. In the edit job window, scroll to the bottom and click the delete button.

Running a job

To run a job manually, click on the RUN button on the right. After refreshing the page, the status of the job will change to Running. The RUN button will change into a STOP button that can be used to stop the execution of the job.

Dashboard with a running job

Managing destinations

Destinations are Alfresco repositories. In order to move documents from a filesystem into an Alfresco repository, the user first has to define a destination.
The destination overview page lists all created destinations.

Destinations

Create a destination

To create a new destination, click the AlfrescoHttp button.

Creating a destination

Explanation of the parameters:

  • Name
    The name of the destination; this can be whatever you want. It is only used to refer to the destination in the job form.
  • URL
    The URL of the Alfresco destination. It has to be of the format {{scheme}}://{{host}}:{{port}}/alfresco/service or {{scheme}}://{{host}}:{{port}}/alfresco/s. Note that there should be no trailing slash. A quick way to verify the URL and credentials is shown after this list.
  • Username
    The username of the account that we want to use to upload the documents.
  • Password
    The password of the account that we want to use to upload the documents.
  • Number of store content threads
    On the Alfred Inflow backend, the first step is storing the content in Alfresco. This number indicates how many threads will be used to upload content.
  • Number of create node threads
    After storing the content, the backend will try to create the node for the document. This number indicates how many threads will be used to create the nodes.
  • Create node transaction size
    This number indicates how many packages will be stored in a single transaction. E.g. 250 means that the Inflow backend tries to upload 250 packages of documents in a single transaction. Nevertheless, if the transaction fails, all packages will be retried individually.
    Be careful: the more threads are used, the heavier the load on the processor. A balance must be found between upload speed and processing power.
  • Max buffered reports
    When a document is created, a report is buffered in the backend to send back to the client. This number indicates how many reports may be buffered at most in the Alfred Inflow backend.
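
To verify the URL and credentials before saving a destination, you can call one of Alfresco's standard webscripts directly. The sketch below is illustrative only and not part of Alfred Inflow; it assumes the Python requests library and the standard api/server webscript, and the URL, username and password are placeholders.

    # Illustrative connectivity check, not part of Alfred Inflow.
    # Assumes the third-party "requests" library and Alfresco's standard
    # "api/server" webscript; the URL and credentials are placeholders.
    import requests

    destination_url = "https://alfresco.example.com:8443/alfresco/service"  # no trailing slash
    username = "admin"
    password = "secret"

    response = requests.get(
        f"{destination_url}/api/server",
        auth=(username, password),
        timeout=10,
    )
    response.raise_for_status()  # fails if the URL or credentials are wrong
    print(response.json())       # typically reports the Alfresco edition and version

If this call succeeds, the same base URL (without a trailing slash) can be entered in the destination form.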

Managing users

The users page gives an overview of all created users.

Users

A user account identifies who can use the Alfred Inflow application.

To create a user, click on the icon in the top left corner of the table.
To edit a user, click on the pencil icon on the left of that user's row.
To delete a user, click on the bin icon on the right of that user's row.

User roles

Assign a role to the user account to specify the tasks the user is allowed to perform.

The following overview summarizes the roles and their associated tasks. Every role also has access to all the tasks of lower roles.

  • System admin
    A system admin has access to all aspects of the application.
    Tasks: View users, Add user, Delete user, Change user's password
  • Job admin
    A job admin defines jobs.
    Tasks: Create job, Edit job, Delete job
  • Schedule admin
    A schedule admin decides when jobs have to run.
    Tasks: Add schedule to job, Delete schedule, Edit schedule, Run job on demand
  • Consumer
    A consumer views job reports.
    Tasks: View (running) jobs/schedules, View report, Export report, Change own password

Depending on the role of the user, the web interface will contain a subset of the available tabs.