Problem:

How do I delete or clear out the IBM WebSphere Application Server (WAS) temporary directories and cached files?

Summary:

This guide explains the process of removing or erasing the temporary directories and cached files in IBM WebSphere Application Server (WAS). It includes the appropriate situations for performing this task, the intended users, and the step-by-step instructions.

When is it necessary to perform this action?

If you encounter issues where the deployment manager, nodeagent, or application server fails to start, you can attempt the following steps. However, unless specifically requested by a support analyst, there is no need to carry out these actions.

 

Who should carry out this task?

System Administrators are responsible for executing these steps.

 

How is this done?

Follow the instructions below for each profile located within WAS_HOME/profiles (such as Dmgr01 and AppSrv01, or whatever your application server profiles are named).

 

  1. Stop the Deployment Manager, nodeagent, and application servers.
  2. Create backups of the existing configurations:
    1. cd PROFILE_ROOT/bin
    2. Run backupConfig:
      1. Unix: ./backupConfig.sh backup_file
      2. Windows: backupConfig backup_file
      3. IBM i: ./backupConfig backup_file
    3. Repeat for every profile you have (Dmgr01, AppSrv01, and so on).
  3. Rename the following temp directories (or rename their contents). They will be recreated when you restart the servers:
    1. PROFILE_ROOT/wstemp
    2. PROFILE_ROOT/temp
    3. PROFILE_ROOT/config/temp (*** DO NOT REMOVE THE ENTIRE CONFIG DIRECTORY, JUST TEMP ***)
    4. Repeat for every profile you have (Dmgr01, AppSrv01, and so on).
  4. Delete the javasharedresources directory:
    1. Unix and IBM i: /tmp/javasharedresources
    2. Windows: C:\Windows\System32\config\systemprofile\AppData\Local\javasharedresources
  5. From a command prompt or Qshell prompt, run the following commands to initialize the OSGi configuration and clear the OSGi class cache:
    1. cd PROFILE_ROOT/bin
    2. Unix: ./osgiCfgInit.sh, then ./clearClassCache.sh
    3. IBM i: ./osgiCfgInit, then ./clearClassCache
    4. Windows: osgiCfgInit, then clearClassCache
    5. Repeat this step for the Dmgr01 profile and any other profiles present on your system.
  6. Start the Deployment Manager, nodeagent, and application servers.
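On Unix systems, the rename-rather-than-delete portion of the steps above can be sketched as a small shell helper. This is only a sketch: it assumes you pass the profile root as an argument (it is not a WAS-provided variable), and it must be run once per profile with all servers stopped.

```shell
#!/bin/sh
# Rename (not delete) the three temp directories under one WebSphere
# profile; WAS recreates them the next time the servers start.
clear_was_temp() {
    profile_root="$1"
    stamp=$(date +%Y%m%d%H%M%S)
    for d in wstemp temp config/temp; do
        if [ -d "$profile_root/$d" ]; then
            # keep a renamed copy instead of deleting outright,
            # so the old contents can be recovered if needed
            mv "$profile_root/$d" "$profile_root/$d.old.$stamp"
        fi
    done
}

# Example (path is an assumption -- use your own profile locations):
# clear_was_temp /opt/IBM/WebSphere/AppServer/profiles/AppSrv01
```

Note that only config/temp is touched inside the config directory; the rest of config is left alone, per the warning above.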

 

Good luck!

There are many ways to approach data archiving, and each approach stores data differently. W. Curtis Preston, backup, storage and recovery expert, shares an informative article on Network World covering three main approaches to archiving data: traditional batch archive, real-time archive and hierarchical storage management (HSM) archive.

 

Traditional batch archive
“With a traditional batch archive, data serves its purpose for a certain period before being tucked away in a safe repository, awaiting the possibility of being of some use in the future. The main idea behind this type of archive is to preserve data over an extended timeframe, while keeping costs at a minimum and ensuring that retrieval remains a breeze even years down the line. In this kind of archive system, each collection of data selected for archiving is given one or more identities, stored as metadata alongside the archived data. This metadata plays a pivotal role in locating and retrieving the archived information, with details such as project names, tools used to create the data, the creator’s name, and the creation timeframe all forming part of this digital fingerprint.”

 

Real-time archive
“In this type of archive, data created or stored in the production environment is instantaneously duplicated and sent to a secondary location for archiving purposes. Compliance and auditing are the primary use cases for real-time archives. Take, for instance, the classic example of journal email accounts in the era when on-premises email systems reigned supreme. As an email entered the mail system, an identical copy found its way into the journal mailbox, while the original landed in the recipient’s inbox. This journal mailbox served as a reservoir accessible to auditors and managers seeking information for legal matters or fulfilling freedom of information (FOIA) requests. Access to real-time archives typically occurs through specialized portals equipped with granular search capabilities. It’s important to note that (unlike traditional archive) real-time archives don’t alleviate the pressure on production storage systems – unless, of course, they incorporate the features discussed later in this article regarding hierarchical storage management (HSM).”

 

HSM-style archive
“Among the diverse archive systems, the “HSM-style” archive is a standout. It leverages hierarchical storage management (HSM) to govern data storage – a term that has somewhat gone by the wayside, even though the concept remains. When users no longer require daily access to data, or when data becomes dated but must be retained for compliance, organizations start exploring alternatives like storing this data on scalable object storage systems or dedicated cloud-based cold storage. Additionally, some solutions allow archive data migration to tape for off-site and offline storage, with the notion that tape provides enhanced security by being virtually inaccessible unless explicitly needed. Moreover, tape often offers a lower cost per gigabyte compared to most other storage systems. Tape also excels at long-term data retention. One common implementation of this concept applied HSM to real-time email archives, a prevalent practice in the early 2000s. As user mailboxes swelled with HTML-formatted emails and hefty attachments, organizations were faced with burgeoning storage requirements. HSM-style archives typically relocate data based on age or the last access timestamp. As data migrates from the filesystem to the archive, it often leaves behind pointers or stubs in the source system, facilitating automated retrieval when required.”

 

Your specific needs for accessing your historical data will determine which of the three categories your archive solution option falls into. Whether it’s traditional, real-time or HSM, your historical data will be better stored in an archive system than in simple storage databases.

 

For Full Article, Click Here

In recent news, Vital Concept, a French pioneer in distance selling to professionals in three major markets (agriculture, horses and landscaping), has chosen to deploy Infor CloudSuite Distribution Enterprise to support its international growth. This ERP project is part of Vital Concept’s plan to transform its information systems. Per the press release, the project started with Infor and its partner Hetic3 around the overhaul of the information system in place. The existing technological environment, built around an ecosystem of heterogeneous solutions, required too many interfaces and was very limited in its ability to adapt to the company’s constantly evolving business processes. The project began in September 2020, Infor and Hetic3 were selected in December 2021, the solution was deployed in May 2023 in France and Belgium, and rollout will continue in early 2025 in The Netherlands.

 

For Full Article, Click Here

Description:

To resolve the error message “Restart the Server Express License Manager or License Manager is corrupt” (reported as compile error messages 191, 192, and 197), follow the troubleshooting steps outlined below.

 

Enter the command ps -ef|grep mfl to see if your License Manager is running. If the License Manager isn’t running, start it. If the License Manager is running, kill and re-start it by moving to the mflmf directory and entering the command sh ./mflmman.

 

If the license database is corrupt, go to the License Manager directory. (Note: The License Manager directory is the location where the license was installed.) Remove the following four files from the mflmf directory: mflmfdb, mflmfdb.idx, mflmfdbX, and mflmfdbX.idx. After these files have been removed, run License Administration Services (mflmadm) and re-install the licenses.
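The file removal above can be sketched as a one-pass shell helper. The four file names come from the steps above; the directory argument is an assumption and should point at your mflmf install location.

```shell
#!/bin/sh
# Remove the four License Manager database files so the licenses
# can be re-installed afterwards with mflmadm.
reset_lmf_db() {
    lmf_dir="$1"
    for f in mflmfdb mflmfdb.idx mflmfdbX mflmfdbX.idx; do
        # -f: no error if a file is already missing
        rm -f "$lmf_dir/$f"
    done
}

# Example: reset_lmf_db /opt/microfocus/mflmf
```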

 

Follow these steps if you want to add or re-add developer licenses:

Use the cd command to move to the directory where License Manager was installed.

Execute the mflmadm program by entering the command ./mflmadm.

Press F3 (Install) to install the ServerExpress and/or the MicroFocus Cobol license.

When prompted, enter your key and serial number. (Note: You must hit the slash ( / ) key twice.) Press Enter to save your key and serial number.

Press F3 (Install) to install and F7 (Refresh) to refresh. Press F5 (Browse) to see your ServerExpress license. Press F6 (More) to see both your ServerExpress and MicroFocus Cobol licenses.

Start the License Manager by going to the mflmf directory and entering the command sh ./mflmman. To verify that the License Manager is running, enter the command ps -ef|grep mfl. (If the License Manager is running, a root mflm_manager process should be returned.)

 

If the License Manager is still corrupt, remove the entire mflmf directory and use the cd command to move into the $COBDIR/lmf directory. Run lmfinstall. Select just the ServerExpress install option. You can either enter your developer serial number and license during this ServerExpress install OR you can enter them after the install has completed.

 

 

Follow these steps if you want to enter your developer serial number and license after your ServerExpress install is complete:

Use the cd command to move to the mflmf directory.

Run ./mflmadm.

Press F3 (Install) to install, and add your serial and license number.

Press F3 (Install) again.

Press F7 (Refresh) to refresh.

Verify that the License Manager is running by entering the command ps -ef|grep mfl. If the License Manager is running, a root mflm_manager process should be returned. If the License Manager isn’t running, move to the mflmf directory and run the command sh ./mflmman to start your License Manager.
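The check-and-restart logic above can be sketched in shell. The mflmf path is an assumption to substitute with your own install directory; bracketing the first letter of the grep pattern keeps grep from matching its own command line.

```shell
#!/bin/sh
# Return success (0) if an mflm_manager process is running.
lmf_running() {
    # the [m] bracket stops grep from matching the grep command itself
    ps -ef | grep '[m]flm_manager' > /dev/null
}

# Start the License Manager from its install directory if it is down.
start_lmf_if_down() {
    lmf_dir="$1"
    if lmf_running; then
        echo "License Manager already running"
    else
        # subshell so the cd does not affect the caller
        ( cd "$lmf_dir" && sh ./mflmman )
    fi
}

# Example: start_lmf_if_down /opt/microfocus/mflmf
```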

Infor recently announced that Nacita, a leading third-party logistics (3PL) company in Egypt, has implemented the Infor warehouse management system (WMS), with Infor partner SNS managing the project. Per the press release, Infor WMS allows Nacita to enhance its warehouse and logistics operations. This move aims to bolster key processes like receiving, picking, shipping, and the efficient capture of serial numbers, solidifying Nacita’s standing as an end-to-end logistics solutions powerhouse. Further, Vishal Minocha, Infor VP of product management, commented, “Deep warehousing functionality, ability to handle large volumes and highly experienced consultants, these are a perfect combination to get the maximum throughput from warehouse operations. Infor is proud to be Nacita’s partner on its journey to become end-to-end logistics provider leader.”

 

For Full Article, Click Here

The longer a business runs, the more data it accumulates. Archive storage is a topic to consider even before you reach the point where you need to clear up storage space from dated historical information. Storage solutions are becoming a hot topic, and many organizations don’t realize how many options they have or even the features they need. Samudyata Bhat, Content Marketing Specialist at G2, shares an informative article on the objectives, techniques, options and best practices of data archiving. Let’s break down what data archiving is and how your organization can benefit from implementing a useful archive solution.

 

Objectives of data archiving
Data archiving goals include effective data management, compliance and regulation policies, preserving digital history, and recovering data from disasters if they occur. Specifically, Bhat says these goals concern your organization’s solution for long-term storage, cost optimization from being able to decommission old servers, compliance and regulatory requirements, easy-to-use historical reference and analysis, efficient data management, and knowledge management.

 

Data archive vs. data storage vs. data backup
Data archiving may sometimes get confused with data storage or data backup, but each treats your data differently. Data storage is the immediate data you house on your hard drives, the data you are currently using every day. Data backup is exactly what it says: a backup of your current data that you can access should you need to recover lost or compromised data. Lastly, data archiving draws a bit from both storage and backup, with its main job being to preserve older, less-used or read-only data.

 

Benefits of data archiving
According to Bhat, data archiving goes beyond simply keeping data around; the practice lets enterprises improve productivity, maintain compliance, make educated decisions, and secure their digital assets for long-term retention. Bhat explains that when analyzed properly, historical data can provide significant insights and trends. With archived files stashed away, systems run more efficiently, processing gets faster, and storage expenses drop. Data archiving also protects digital assets in a secure environment to ensure your historical data is safe from unwanted access or breaches. Moreover, archiving is consistent with successful data governance practices and confirms that data is well-organized, easily accessible, and accurately classified.

 

Challenges of data archiving
With benefits, there also come challenges to data archiving. As your business continues to grow, your volume of historical data will grow along with it. This poses a challenge for organizations to provide scalable archiving solutions that accommodate rising data quantities. Bhat points out that one of the first tasks to tackle is making the difficult choices about which data to preserve and which to purge; the regulatory and compliance policies your company must follow can help guide these decisions. Additionally, archived data is often less accessible than current data. Finally, as technology advances, data formats and storage methods change, and you will need to stay updated to keep your data accessible and secure.

 

Best practices for data archiving
While you may want to just throw all your historical data into an archiving platform, there are some best practices to consider in building a strong data archival strategy. The first, according to Bhat, is to establish clear archiving policies, such as the purpose of the archival data, how long it should be kept, and who gets regular access to it. She states that sorting and prioritizing data based on compliance, regulations and purpose is one way to organize your data, as is establishing security and access measures from the get-go for this sensitive information. You should also consider focusing on document archiving by keeping records of your data archiving operations, including the rules, methods, and reasoning behind data archiving choices. The final step is choosing an archiving system that is scalable and efficient for your needs. Consider how often a user will access this data, the security measures you want in place, and any reports you’ll need to run for larger data requests.

 

Data archiving solutions
Once you know what historical data you will be taking with you and the purpose of access, the next step is selecting the right data archiving solution to house your data. Many archiving solutions have different priorities of focus and function. Whether it’s the amount of storage, security measures, retention requirements, cloud storage, ease of access and use, report running, or a combination of any of these, there are many storage software options for your historical data. Amazon has Simple Storage Service (S3), Google has Cloud Storage, Azure has Archive Storage, IDrive has Online Backup, and Dell has EMC Flash Storage, to name a few. Consulting firms and software companies also offer storage solutions, such as Infor’s Data Lake and Nogalis’ APIX Cloud Archive Solution. Whatever your organization chooses, make sure it checks off all the boxes for the purposes of your historical archives.

 

For Full Article, Click Here

If you have several files in the LAWDIR/productline/work directory that are taking up a lot of space and need to be cleaned up, most can be deleted. Be aware, however, that files with UPPERCASE names are often used to transfer data to non-Lawson systems or ACH bank files, and they may be waiting to be used by a future process that has not run yet.
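As a cautious first pass, a hypothetical helper like the following lists only files whose names contain no uppercase letters; anything with an uppercase letter is skipped on the assumption it may still be awaiting pickup. Adjust the pattern to your site's actual naming conventions.

```shell
#!/bin/sh
# List work files that are likely safe to delete: names containing an
# uppercase letter are skipped, since those are often interface or ACH
# files still waiting to be picked up by another process.
list_deletable() {
    for f in "$1"/*; do
        [ -f "$f" ] || continue      # skip directories and empty globs
        base=$(basename "$f")
        case "$base" in
            *[A-Z]*) ;;              # skip: may still be needed
            *) echo "$base" ;;
        esac
    done
}

# Example: list_deletable "$LAWDIR/productline/work"
```

Review the listed names before deleting anything; this sketch only reports candidates, it does not remove them.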

 


 

Automated Cleanup of Print Files, Work Files, and Other Files

Use this procedure to clean up print files, work files, RQC files, user home directory files, and WebSphere trace and dump files by either running a program or by defining a recurring job that calls the program. Before running the program or the recurring job, you must set up configuration files. These files enable you to set the cleanup options, exclude specific files from the cleanup process (by file name or by the user name associated with the files), and to specify date ranges for the files to be deleted.

The types of files to be deleted include:

  • Completed job entries
  • Completed jobs forms and the associated job logs
  • Batch report files
  • All print files from the print manager
  • Files from $LAWDIR/productline/work directory
  • All user-related files from the $LAWDIR/RQC_work, $LAWDIR/RQC_work/log, $LAWDIR/RQC_work/cookies directories
  • WebSphere trace and dump (.dmp, .phd, javacore and .trc) files that are in subdirectories of the <WASHOME>/profiles directory.

To clean up print files, work files, RQC files, and other files:

  1. Configure the prtwrkcleanup.cfg file.

You can edit the parameters in the prtwrkcleanup.cfg file in two ways:

    • By using the Lawson for Infor Ming.le Administration Tools. See Configuring Automated File Cleanup in the Lawson for Infor Ming.le Administration Guide, 10.1. (This option is only available in Lawson for Infor Ming.le 10.1.)
    • By directly editing the file in the $LAWDIR/system directory. See Directly Updating the prtwrkcleanup.cfg File.
  2. Configure the prtwrkcln_usrnames.cfg file.

This configuration file is divided into multiple sections:

    • Section 1: usernames list for print and completed job cleanup
    • Section 2: usernames list for RQC cleanup
    • Section 3: usernames list for user home directory cleanup

The script uses each different section for a different cleanup job. Make sure to put usernames in the right sections to avoid undesired outcomes.

You can enter multiple usernames in either a comma-separated format or a line-break-separated format.

For example:

Username1,Username2,Username3…

Or

Username1

Username2

Username3

Note: Do not remove the section dividers.
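Since both the comma-separated and line-break-separated formats are accepted, a quick way to sanity-check what a section actually contains is to normalize it to one name per line. This is a hypothetical helper, not part of the Lawson tooling:

```shell
#!/bin/sh
# Normalize a comma- or newline-separated name list to one name per
# line, trimming surrounding spaces and dropping blank entries.
normalize_names() {
    tr ',' '\n' | sed 's/^ *//; s/ *$//' | grep -v '^$'
}

# Example: normalize_names < names_section.txt
```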

  3. Configure the prtwrkcln_exclude.cfg file.

Use this file to specify a list of file names to be excluded from the work file cleanup process.

You can enter multiple file names in either a comma-separated format or a line-break-separated format.

For example:

Filename1,Filename2,Filename3…

Or

Filename1

Filename2

Filename3

  4. If you want to run the cleanup program just once, open a command line session and follow the substeps below. (Note that a recurring job may be more useful in the long term. See the next main step below.)
    1. Ensure that the prtwrkcln executable exists in $GENDIR/bin.
    2. In a command line session, navigate to $GENDIR/bin.
    3. At the command line, enter the following command: prtwrkcln

  5. If you want to run the cleanup program via a recurring job, use the following substeps.
    1. In Lawson for Infor Ming.le, navigate to Bookmarks > Jobs and Reports > Multi-step Job Definition.
    2. Specify this information to define a multi-step job.

Job Name

Specify a name for the multi-step job.

Job Description

Specify a description for the multi-step job.

User Name

Displays the name of the user defining the job.

Form

Specify prtwrkcln. (This assumes this form ID exists. Use the Form ID Definition utility (tokendef) to check whether it exists and to add it as an environment form ID if necessary.)

Step Description

Specify a description for the step.

    3. Click Add to save the new job.
    4. Navigate to Related Forms > Recurring Job Definition. Define a recurring job according to the instructions in the “Recurring Jobs” topic in Infor Lawson Administration: Jobs and Reports.