Description:

To resolve the error message “Restart the Server Express License Manager or License Manager is corrupt” (reported as compile error messages 191, 192, and 197), follow the troubleshooting steps outlined below.

 

Enter the command ps -ef | grep mfl to check whether your License Manager is running. If it isn’t running, start it. If it is running, kill and restart it by moving to the mflmf directory and entering the command sh ./mflmman.
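
As a quick sketch of that check-and-restart sequence (the install path below is a placeholder; use wherever your License Manager was installed):

# Check whether the License Manager is running
ps -ef | grep mfl

# If a root mflm_manager process is listed, stop it first
# (replace <pid> with the process ID from the ps output)
kill <pid>

# Start (or restart) the License Manager from the mflmf directory
cd /opt/microfocus/mflmf    # hypothetical path; substitute your own
sh ./mflmman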

 

If the license database is corrupt, go to the License Manager directory (the location where the license was installed) and remove the following four files from the mflmf directory: mflmfdb, mflmfdb.idx, mflmfdbX, and mflmfdbX.idx. After these files have been removed, run License Administration Services (mflmadm) and re-install the licenses.
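
For example, the database cleanup might look like the following (a sketch only; the mflmf path is an assumption):

cd /opt/microfocus/mflmf                        # hypothetical path; substitute your License Manager directory
rm mflmfdb mflmfdb.idx mflmfdbX mflmfdbX.idx    # remove the corrupt license database files
./mflmadm                                       # re-run License Administration Services and re-install the licenses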

 

Follow these steps if you want to add or re-add developer licenses:

Use the cd command to move to the directory where License Manager was installed.

Execute the mflmadm program by entering the command ./mflmadm.

Press F3 (Install) to install the Server Express and/or the Micro Focus COBOL license.

When prompted, enter your key and serial number. (Note: You must press the slash (/) key twice.) Press Enter to save your key and serial number.

Press F3 (Install) to install and F7 (Refresh) to refresh. Press F5 (Browse) to see your Server Express license. Press F6 (More) to see both your Server Express and Micro Focus COBOL licenses.

Start the License Manager by going to the mflmf directory and entering the command sh ./mflmman. To verify that the License Manager is running, enter the command ps -ef|grep mfl. (If the License Manager is running, a root mflm_manager process should be returned.)

 

If the License Manager is still corrupt, remove the entire mflmf directory and use the cd command to move into the $COBDIR/lmf directory. Run lmfinstall and select only the Server Express install option. You can either enter your developer serial number and license during this Server Express install or enter them after the install has completed.
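
A rough sketch of that sequence (again assuming a hypothetical install path, and that lmfinstall is run the same way as the other shell scripts here):

rm -rf /opt/microfocus/mflmf    # remove the corrupt License Manager directory (hypothetical path)
cd $COBDIR/lmf
sh ./lmfinstall                 # choose only the Server Express install option when prompted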

 

 

Follow these steps if you want to enter your developer serial number and license after your Server Express install is complete:

Use the cd command to move to the mflmf directory.

Run ./mflmadm.

Press F3 (Install) to install, and add your serial number and license.

Press F3 (Install) again.

Press F7 (Refresh) to refresh.

Verify that the License Manager is running by entering the command ps -ef|grep mfl. If the License Manager is running, a root mflm_manager process should be returned. If the License Manager isn’t running, move to the mflmf directory and run the command sh ./mflmman to start your License Manager.

Infor recently announced that Nacita, a leading third-party logistics (3PL) company in Egypt, has implemented Infor warehouse management system (WMS), with Infor partner SNS managing the project. Per the press release, Infor WMS allows Nacita to enhance its warehouse and logistics operations. This move aims to bolster key processes like receiving, picking, shipping, and the efficient capture of serial numbers, solidifying Nacita’s standing as an end-to-end logistics solutions powerhouse. Further, Vishal Minocha, Infor VP of product management, commented, “Deep warehousing functionality, ability to handle large volumes and highly experienced consultants, these are a perfect combination to get the maximum throughput from warehouse operations. Infor is proud to be Nacita’s partner on its journey to become end-to-end logistics provider leader.”

 

For Full Article, Click Here

The longer a business runs, the more data it accumulates. Archive storage is a topic to consider even before you reach the point where you need to clear up storage space from dated historical information. Storage solutions are becoming a hot topic, and many organizations don’t realize how many options they have or even which features they need. Samudyata Bhat, Content Marketing Specialist at G2, shares an informative article on the objectives, techniques, options, and best practices of data archiving. Let’s break down what data archiving is and how your organization can benefit from implementing a useful archive solution.

 

Objectives of data archiving
Data archiving goals include effective data management, compliance and regulation policies, preserving digital history, and recovering data from disasters if they occur. Specifically, Bhat says these objectives concern your organization’s solution for long-term storage, cost optimization from being able to decommission old servers, compliance and regulatory requirements, easy-to-use historical reference and analysis, efficient data management, and knowledge management.

 

Data archive vs. data storage vs. data backup
Data archiving may sometimes get confused with data storage or data backup, but they all treat your data differently. Data storage is the immediate data you house on your hard drives – data you are currently using every day. Data backup is exactly what it says – a backup of your current data that you can access should you need to recover lost or compromised data. Lastly, data archiving draws a bit from both storage and backup, with its main job being to preserve older, less-used, or read-only data.

 

Benefits of data archiving
According to Bhat, data archiving goes beyond simply keeping data around; the practice lets enterprises improve productivity, maintain compliance, make educated decisions, and secure their digital assets for long-term retention. Bhat explains that when analyzed properly, historical data can provide significant insights and trends. With archived files stashed away, systems run more efficiently, processing is faster, and expenses are lower. Data archiving also protects digital assets in a secure environment to ensure your historical data is safe from unwanted access or breaches. Moreover, archiving is consistent with successful data governance practices and confirms that data is well-organized, easily accessible, and accurately classified.

 

Challenges of data archiving
With benefits also come challenges to data archiving. As your business continues to grow, your volume of historical data will grow along with it, which makes it a challenge to provide scalable archiving solutions that accommodate rising data quantities. Bhat points out that one of the first tasks to tackle is making the difficult choices about which data to preserve and which to purge; the regulatory and compliance policies your company follows can help here. Additionally, archived data is often less accessible than current data. Finally, as technology advances, data formats and storage methods change, and you will need to stay current to keep your data accessible and secure.

 

Best practices for data archiving
While you may want to simply throw all your historical data into an archiving platform, there are some best practices to consider when building a strong data archival strategy. The first, according to Bhat, is to establish clear archiving policies, such as the purpose of the archival data, how long it should be kept, and who gets regular access to it. She states that sorting and prioritizing data based on compliance, regulations, and purpose is one way to organize your data, as is establishing security and access measures from the get-go for this sensitive information. You should also consider focusing on document archiving by keeping records of your data archiving operations, including the rules, methods, and reasoning behind data archiving choices. The final step is choosing an archiving system that is scalable and efficient for your needs. Consider how often a user will access this data, the security measures you want in place, and any reports you’ll need to run for larger data requests.

 

Data archiving solutions
Once you know what historical data you will be taking with you and the purpose of access, the next step is selecting the right data archiving solution to house your data. Archiving solutions differ in their priorities and functions – whether it’s the amount of storage, security measures, retention requirements, cloud storage, ease of access and use, report running, or a combination of any of these, there are many storage software options for your historical data. Amazon has Simple Storage Service (S3), Google has Cloud Storage, Azure has Archive Storage, IDrive has Online Backup, and Dell has EMC flash storage, to name a few. Consulting firms and software companies also offer storage solutions, such as Infor’s Data Lake and Nogalis’ APIX Cloud Archive Solution. Whatever your organization chooses, make sure it checks off all the boxes for the purposes of your historical archives.

 

For Full Article, Click Here

If you have several files in the LAWDIR/productline/work directory that are taking up a lot of space and need to be cleaned up, most can be deleted. Be aware, however, that files with UPPERCASE file names are often used to transfer data to non-Lawson systems or as ACH bank files, and they may be waiting to be used by a future process that has not run yet.
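
Before deleting anything, it can help to see which files are largest and which have UPPERCASE names that should be left alone. A minimal sketch, assuming GNU ls and a shell where $LAWDIR is set:

cd $LAWDIR/productline/work    # substitute your actual product line for "productline"
ls -lhS | head -20             # list the largest files first
ls | grep -v '[a-z]'           # names with no lowercase letters (the UPPERCASE files) -- review these before deleting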

 


 

Automated Cleanup of Print Files, Work Files, and Other Files

Use this procedure to clean up print files, work files, RQC files, user home directory files, and WebSphere trace and dump files, either by running a program or by defining a recurring job that calls the program. Before running the program or the recurring job, you must set up configuration files. These files enable you to set the cleanup options, exclude specific files from the cleanup process (by file name or by the user name associated with the files), and specify date ranges for the files to be deleted.

The types of files to be deleted include:

  • Completed job entries
  • Completed jobs forms and the associated job logs
  • Batch report files
  • All print files from the print manager
  • Files from $LAWDIR/productline/work directory
  • All user-related files from the $LAWDIR/RQC_work, $LAWDIR/RQC_work/log, and $LAWDIR/RQC_work/cookies directories
  • WebSphere trace and dump (.dmp, .phd, javacore and .trc) files that are in subdirectories of the <WASHOME>/profiles directory.
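
To preview which WebSphere files would be affected, a find command along these lines can be used (a sketch only; substitute your actual WebSphere home for <WASHOME>):

find <WASHOME>/profiles -type f \
  \( -name '*.dmp' -o -name '*.phd' -o -name 'javacore*' -o -name '*.trc' \) -ls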

To clean up print files, work files, RQC files, and other files:

  1. Configure the prtwrkcleanup.cfg file.

You can edit the parameters in the prtwrkcleanup.cfg file in two ways:

    • By using the Lawson for Infor Ming.le Administration Tools. See Configuring Automated File Cleanup in the Lawson for Infor Ming.le Administration Guide, 10.1. (This option is only available in Lawson for Infor Ming.le 10.1.)
    • By directly editing the file in the $LAWDIR/system directory. See Directly Updating the prtwrkcleanup.cfg File.
  2. Configure the prtwrkcln_usrnames.cfg file.

This configuration file is divided into multiple sections:

    • Section 1: usernames list for print and completed job cleanup
    • Section 2: usernames list for RQC cleanup
    • Section 3: usernames list for users home directory cleanup.

The script uses each section for a different cleanup job. Make sure to put usernames in the correct sections to avoid undesired outcomes.

You can enter multiple usernames in either a comma-separated format or a line-break-separated format.

For example:

Username1,Username2,Username3…

Or

Username1

Username2

Username3

Note: Do not remove the section dividers.

  3. Configure the prtwrkcln_exclude.cfg file.

Use this file to specify a list of file names to be excluded from the work file cleanup process.

You can enter multiple file names in either a comma-separated format or a line-break-separated format.

For example:

Filename1,Filename2,Filename3…

Or

Filename1

Filename2

Filename3

  4. If you want to run the cleanup program just once, open a command line session and follow the substeps below. (Note that a recurring job may be more useful in the long term. See the next main step below.)
    • Ensure that the prtwrkcln executable exists in $GENDIR/bin.
    • In a command line session, navigate to $GENDIR/bin.
    • At the command line, enter the following command:

prtwrkcln
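
In other words, the one-time run boils down to (assuming $GENDIR is set in your environment):

cd $GENDIR/bin
./prtwrkcln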

  5. If you want to run the cleanup program via a recurring job, use the following substeps.
    • In Lawson for Infor Ming.le, navigate to Bookmarks > Jobs and Reports > Multi-step Job Definition.
    • Specify this information to define a multi-step job:

Job Name: Specify a name for the multi-step job.

Job Description: Specify a description for the multi-step job.

User Name: Displays the name of the user defining the job.

Form: Specify prtwrkcln. (This assumes the form ID exists. Use the Form ID Definition utility (tokendef) to check whether it exists and to add it as an environment form ID if necessary.)

Step Description: Specify a description for the step.

      • Click Add to save the new job.
      • Navigate to Related Forms > Recurring Job Definition. Define a recurring job according to the instructions in the “Recurring Jobs” topic in Infor Lawson Administration: Jobs and Reports.

Infor recently announced the successful go-live of the State of Idaho’s implementation of Infor’s cloud-based public sector applications. With this project, the State of Idaho will improve service, increase transparency, and streamline key business processes. Per the press release, the transition to a modern enterprise system is transforming the way the State of Idaho does business and is supporting its teams as the government continues to grow in size and complexity. Additionally, Infor’s fully supported solution will remove the state’s administrative burden of software fixes and upgrades, hardware refresh cycles, backups, and disaster recovery. Agencies will also be able to achieve a single source of truth when providing budgetary and financial reporting, increasing data-driven decision making through the integrated functionality, scalability, and flexibility of the Infor CloudSuite solution. Moreover, Infor’s FedRAMP certification is a key differentiator and offers confident, independent confirmation that Infor’s cloud security solutions meet high industry standards. Infor is among a select group of vendors that went through rigorous testing to offer the gold standard of data security certifications to customers. Infor reached FedRAMP authorization after extensive review of the company’s security posture. The State of Idaho’s solution is deployed in the Amazon Web Services (AWS) GovCloud.

 

For Full Article, Click Here

Customizing the Lawson Ribbon can be a good idea for giving users a visual cue about which environment they are working in.

 

Customizing the ribbon is as simple as changing one line of HTML code. To update the ribbon image, open the index.htm file at WEBDIR/lawson/portal/. Next, navigate to the “topBanner” element (you can use your editor’s “find” function to locate it faster) and add a background image, setting the URL to the path where you saved your image, as in the sketch below.
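
For illustration, the edited element might end up looking something like this (a sketch only; the surrounding markup and image path are assumptions, since your index.htm may differ):

<!-- WEBDIR/lawson/portal/index.htm -- hypothetical example -->
<div id="topBanner"
     style="background-image: url('/lawson/portal/images/test-env-ribbon.png');">
  ...
</div>

Using a different image per environment (for example, red for production, green for test) gives users an immediate visual cue.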

 

Save your changes, and your ribbon will now show your custom image!

Infor announced on October 5th, 2023 the launch of its new Developer Program and Developer Portal. These are designed to provide developers with the information and tools they need to build applications on top of Infor cloud ERP (enterprise resource planning) systems. Per the press release, the Infor Developer Portal includes baseline concepts and definitions to get started, a centralized library of APIs (application programming interfaces), and a set of specific tutorials that will help developers assemble the components they need to build next-generation solutions. As the central place for developers, the portal also houses links to product documentation, developer forums (Infor Communities), and best-practice guides. Chris Griffith, CEO of StarPoint Technologies, comments, “One of the most exciting parts of the Developer Portal is the new API library. Having broad accessibility to this information will not only save us time by reducing our development timeframes, it also will accelerate our ability to bring fully integrated solutions to market faster, ultimately driving value and improving cost performance for our shared customers. I’m really excited about the future of our partnership with Infor.” Further, the Infor Developer Portal also will provide resources for two primary Infor application development solutions, Infor Mongoose and Infor OS (Operating Service) App Designer.

 

For Full Article, Click Here

Introduction:

Migrating data from on-premises databases to the cloud is a critical step for organizations seeking to modernize their infrastructure and unlock the full potential of the cloud. Among the various tools available for data migration, the AWS Data Migration Service (DMS) stands out as a powerful and comprehensive solution. In this article, we will explore the benefits of using the AWS Data Migration Service and how it can simplify and streamline your data migration journey.

 

Seamless Data Replication:

One of the key advantages of using AWS DMS is its ability to perform seamless data replication from various source databases to AWS services. Whether you’re migrating from Oracle, Microsoft SQL Server, MySQL, PostgreSQL, or others, DMS supports a wide range of source databases. This flexibility allows you to replicate data in real time or perform one-time full data loads efficiently, minimizing downtime and ensuring data consistency throughout the migration process.
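
As a rough sketch of what this looks like in practice, the AWS CLI can create the two endpoints and the replication task (all identifiers, hostnames, ARNs, and credentials below are placeholders):

# Source endpoint: an on-premises Oracle database
aws dms create-endpoint \
  --endpoint-identifier src-oracle \
  --endpoint-type source \
  --engine-name oracle \
  --server-name onprem-db.example.com --port 1521 \
  --database-name ORCL --username dms_user --password '<secret>'

# Target endpoint: an Amazon RDS for PostgreSQL instance
aws dms create-endpoint \
  --endpoint-identifier tgt-postgres \
  --endpoint-type target \
  --engine-name postgres \
  --server-name mydb.abc123.us-east-1.rds.amazonaws.com --port 5432 \
  --database-name appdb --username dms_user --password '<secret>'

# Replication task: one-time full load plus ongoing replication
aws dms create-replication-task \
  --replication-task-identifier oracle-to-postgres \
  --source-endpoint-arn <source-endpoint-arn> \
  --target-endpoint-arn <target-endpoint-arn> \
  --replication-instance-arn <replication-instance-arn> \
  --migration-type full-load-and-cdc \
  --table-mappings file://table-mappings.json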

 

High Data Transfer Speed:

AWS DMS leverages AWS’s global infrastructure and network backbone, enabling high-speed data transfer between your on-premises databases and AWS services. The service optimizes data transfer by parallelizing data extraction, transformation, and loading operations. This results in faster migration times, reducing the overall migration duration and minimizing the impact on your production environment.
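
The degree of full-load parallelism is one of the task settings you can tune. A minimal sketch of a task-settings file (the values shown are illustrative defaults, not recommendations):

cat > settings.json <<'EOF'
{
  "FullLoadSettings": {
    "MaxFullLoadSubTasks": 8,
    "CommitRate": 10000
  }
}
EOF
# Attach it at task creation with: --replication-task-settings file://settings.json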

 

Minimal Downtime:

Downtime can have a significant impact on businesses, causing disruptions, revenue loss, and user dissatisfaction. AWS DMS minimizes downtime during the data migration process by enabling continuous replication and keeping the source and target databases in sync. This ensures that your applications can remain operational while the migration is ongoing, with minimal interruption to your business operations.

 

Data Consistency and Integrity:

Maintaining data consistency and integrity during migration is paramount to ensure the accuracy and reliability of your data. AWS DMS provides built-in mechanisms to validate and transform data during the replication process. It performs data validation checks, handles schema and data type conversions, and ensures referential integrity, helping you maintain the quality and integrity of your data as it moves to the cloud.
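
Row-level validation is enabled through the same task-settings mechanism; a sketch (note that DMS validation requires the tables to have primary keys, and the thread count shown is just the default):

{
  "ValidationSettings": {
    "EnableValidation": true,
    "ThreadCount": 5
  }
}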

 

Flexible Schema Mapping and Transformation:

Data migrations often involve schema changes and data transformations to align with the target database’s requirements. AWS DMS offers flexible schema mapping and transformation capabilities, allowing you to define and customize the mapping between the source and target databases. This empowers you to harmonize and optimize the data structure, format, and organization during the migration, ensuring a seamless transition to the cloud.
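
These mappings live in the table-mappings file referenced when the task is created. A hedged example that selects every table in a hypothetical HR schema and renames the schema to lowercase on the target:

{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "select-hr",
      "object-locator": { "schema-name": "HR", "table-name": "%" },
      "rule-action": "include"
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "rename-hr",
      "rule-target": "schema",
      "object-locator": { "schema-name": "HR" },
      "rule-action": "rename",
      "value": "hr"
    }
  ]
}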

 

Continuous Data Replication and Change Data Capture (CDC):

AWS DMS supports ongoing replication and Change Data Capture (CDC), enabling real-time synchronization of your databases. CDC captures and replicates data changes as they occur, providing up-to-date data in the target database. This is particularly useful for scenarios where real-time data availability is critical, such as high-volume transactional systems or analytics workloads. With continuous replication, you can maintain a live replica of your on-premises database in the cloud, facilitating data-driven decision-making and minimizing the time gap between data updates.
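
Once a CDC-enabled task exists, starting it (and later resuming it without repeating the full load) is a single CLI call each; the ARN below is a placeholder:

# Start the task for the first time
aws dms start-replication-task \
  --replication-task-arn <replication-task-arn> \
  --start-replication-task-type start-replication

# Resume a previously stopped task without repeating the full load
aws dms start-replication-task \
  --replication-task-arn <replication-task-arn> \
  --start-replication-task-type resume-processing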

 

Easy Integration with AWS Services:

AWS DMS seamlessly integrates with various AWS services, offering a range of options for your migrated data. For relational databases, you can leverage Amazon RDS, Aurora, or Redshift as target databases. For NoSQL workloads, Amazon DynamoDB can be utilized. Additionally, you can take advantage of complementary services like the AWS Schema Conversion Tool (SCT) for automated schema conversion when the source and target engines differ. This tight integration simplifies the migration process and enables you to leverage the full capabilities of the AWS ecosystem.

 

Scalability and Cost-Effectiveness:

By migrating your data to AWS using DMS, you can leverage the scalability and cost-effectiveness of cloud services. AWS provides flexible scaling options, allowing you to scale up or down based on your workload requirements. This scalability eliminates the need for upfront hardware investments and enables you to pay only for the resources you consume, optimizing your cost structure and providing cost savings in the long run.

 

Conclusion:

The AWS Data Migration Service (DMS) empowers organizations to migrate their data to AWS securely, efficiently, and with minimal disruption. From seamless data replication to minimal downtime, data consistency, and easy integration with AWS services, the benefits of using AWS DMS are substantial. By embracing the power of DMS, organizations can unlock the full potential of the cloud, leverage advanced analytics, enhance data-driven decision-making, and embark on their digital transformation journey with confidence.

Enterprise resource planning (ERP) systems deliver an all-in-one experience to organizations, from day-to-day operations to supply chain to human resources and more. With the digital landscape constantly changing, so is the ERP space: on-premises systems are becoming a thing of the past, and cloud-based ERP systems are taking over. Effie Calandra, Director of Cloud ERP Solution Management at SAP, shares an article on Forbes about the advantages that cloud ERP can have for your business. Based on her findings from the Harvard Business Review paper “Driving Professional Services Growth with Cloud ERP,” Calandra shares, “Adopting cloud ERP solves legacy software problems by delivering easy-to-use and easy-to-implement process improvements.” This includes identifying errors in invoices and delivering financial documents successfully to decision makers. With a cloud ERP system, Calandra assures, professional services firms can standardize their data and eliminate the routine manual activities that slow them down. Older systems can be fragmented and dependent on manual intervention, often requiring a dedicated IT team to keep up with updates and maintenance, which is costly to your organization’s bottom line. “These challenges lead to delays, data errors, and a lack of visibility that make survival and growth a herculean effort,” says Calandra. With a cloud ERP system, many of these challenges go away. Cloud ERP systems include many automated processes, giving your organization the option to dedicate more time and resources to other aspects of the business. Calandra concludes, “The outcome of leveraging standardized processes and increased automations means resources can focus on value-adding and differentiating work, collaborate more effectively with each other, and deliver better experiences and care to clients.”

 

For Full Article, Click Here