Summary: When setting up a distribution group so other users can view the print manager of another user (typically a system user like lawson), you need to complete an important step before you can add it.

 

  1. In Jobdef, when you try to set the distribution group, you get the “Distribution List Has Not Been Defined” error (not displayed below, but an example of the field being set is shown)
  2. To resolve this, go to the user’s environment information, shown below in ISS (this can be done in LSA or ISS)
  3. Make sure to add the distribution group to the environment information of the lawson user (assuming you’re distributing a lawson job to other users).
  4. Open Jobdef and add the distribution group. NOTE: After adding it in Jobdef, you can clear this field and it should still work.

 

Follow the steps below to configure LSA logging for the passive client:

  1. Right-click the Lawson Security Administrator icon and select Run as Administrator
  2. In the upper-right corner of the Server Connection form, click Settings.
  3. On the Settings form, select the Logging tab, then configure the following:

    Configuration Directory

    Specify where LawsonNlog.Config is located. LawsonNlog.Config is the configuration file for LawsonHttpClient, which handles logging.

    Enable Logging – toggles logging on and off.

    Log level

    Debug – Log the detailed trace

    Error – Log the exception errors

    Log Directory

    Specify the location where the generated logs will be saved; this directory must be on the same computer where LSA is installed.

  4. Select Apply, then Yes, to apply the settings, and relaunch LSA.
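For reference, the file named in the Configuration Directory setting is a standard NLog configuration. A minimal, hypothetical sketch is shown below; the file path, target name, and exact schema are assumptions, and the actual LawsonNlog.Config shipped with LSA may define additional targets and rules:

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Hypothetical file target; point fileName at the configured Log Directory -->
    <target name="logfile" xsi:type="File"
            fileName="C:\LSA\Logs\LawsonHttpClient.log" />
  </targets>
  <rules>
    <!-- minlevel="Debug" captures the detailed trace; change to "Error" to log only exceptions -->
    <logger name="*" minlevel="Debug" writeTo="logfile" />
  </rules>
</nlog>
```

The Log Level choices in the LSA dialog map to the `minlevel` attribute on the rule.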

 

When migrating schemas to MySQL using the AWS Schema Conversion Tool (SCT), you may encounter the error:

“602: Converted tables might exceed the row-size limit in MySQL.”

This warning means that one or more tables being converted could exceed MySQL’s row-size limit. MySQL caps the combined size of all columns in a table at 65,535 bytes regardless of storage engine, and the InnoDB storage engine (the default in MySQL and Amazon Aurora MySQL) additionally limits the data stored in-row to roughly half a database page. If your converted table design goes beyond these limits, MySQL will reject it.
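To see why converted schemas hit this limit, it helps to estimate a table’s worst-case row size by hand. The sketch below uses simplified accounting (utf8mb4 VARCHAR columns at 4 bytes per character plus a 1- or 2-byte length prefix); the table shape is made up for illustration:

```python
# Rough worst-case row-size estimate for a MySQL table definition.
# VARCHAR(n) in utf8mb4 can take up to 4 bytes per character,
# plus 1 length byte (2 if the maximum byte length exceeds 255).

def varchar_bytes(chars, bytes_per_char=4):
    max_bytes = chars * bytes_per_char
    length_prefix = 1 if max_bytes <= 255 else 2
    return max_bytes + length_prefix

# A hypothetical wide table: 60 VARCHAR(255) columns plus 5 VARCHAR(1000) columns.
columns = [varchar_bytes(255)] * 60 + [varchar_bytes(1000)] * 5
row_size = sum(columns)

print(row_size)          # worst-case bytes for one row
print(row_size > 65535)  # True -> MySQL would reject this table
```

A table like this sails through AWS SCT conversion logically, but MySQL refuses to create it, which is exactly what the 602 warning is flagging ahead of time.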


Why This Happens

  • Large VARCHAR or TEXT columns
  • Too many columns in a single table
  • Wide columns that, when combined, exceed the 64KB limit
  • Non-optimized data types during schema conversion

Fixing the Issue

  1. Review and Optimize Column Data Types
  • Reduce overly large VARCHAR lengths. For example:
    • VARCHAR(1000) → VARCHAR(255) if values never exceed that length.
  • Replace TEXT with smaller types such as TINYTEXT where the data allows (note that MEDIUMTEXT and LONGTEXT are larger than TEXT).
  • Use integers (INT, SMALLINT, TINYINT) instead of BIGINT if values fit.
  2. Normalize the Table

If your schema has too many wide columns:

  • Split them into multiple related tables
  • Use foreign keys to maintain relationships
  • Store large blobs of data separately
  3. Drop Unnecessary Columns

Remove unused or redundant columns before migration to shrink the row size.

  4. Consider Storage Engine Options
  • InnoDB enforces the row-size limit but stores large TEXT/BLOB columns off-page (only a 20-byte pointer remains in the row).
  • If you only need full-text lookups, splitting those into reference tables may help.
  5. Change InnoDB Page Size (Advanced)

By default, InnoDB uses a 16KB page size. MySQL supports other page sizes (4KB, 8KB, 16KB, 32KB, and 64KB) via the innodb_page_size option.
⚠️ Important: This cannot be changed dynamically. The page size can only be set when a MySQL instance is first initialized, which means:

  • Setting innodb_page_size in the MySQL configuration file
  • Initializing a fresh data directory
  • Re-importing your schema and data

This approach is rarely used in practice because it complicates upgrades and maintenance, and is not supported on managed services like Amazon RDS or Aurora MySQL.
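The normalization advice above (splitting wide columns into related tables joined by foreign keys) can be sketched in miniature. This example uses SQLite so it runs self-contained; the table and column names are hypothetical, and the same DDL pattern carries over to MySQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Instead of one wide `orders` table carrying large note columns inline,
# move the rarely-read wide columns into a child table linked by foreign key.
conn.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer TEXT NOT NULL
    )
""")
conn.execute("""
    CREATE TABLE order_notes (
        order_id INTEGER PRIMARY KEY REFERENCES orders(order_id),
        notes    TEXT
    )
""")

conn.execute("INSERT INTO orders VALUES (1, 'Acme')")
conn.execute("INSERT INTO order_notes VALUES (1, 'Leave at loading dock B')")

# Joining restores the original wide view only when it is actually needed.
row = conn.execute("""
    SELECT o.customer, n.notes
    FROM orders o JOIN order_notes n ON o.order_id = n.order_id
""").fetchone()
print(row)  # ('Acme', 'Leave at loading dock B')
```

The base table stays narrow (and well under the row-size limit), while the wide data moves to a table that only ever holds one large column per row.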


Best Practice in AWS Context

Since you’re working with AWS SCT and likely using Aurora MySQL or RDS, you cannot change InnoDB page size. Instead, the best fix is to:

  • Normalize your schema
  • Adjust column definitions
  • Split oversized tables

Takeaway

The 602 error in AWS SCT isn’t a hard failure: it’s a warning that your schema design may hit MySQL’s row-size limits. While increasing the InnoDB page size is technically possible on self-managed MySQL installations, on AWS-managed databases the right solution is schema redesign and column optimization.


Rule of thumb: If you’re migrating from Oracle, SQL Server, or DB2 into MySQL, review the largest tables carefully. Wide schemas that were fine in those engines may not fit in MySQL’s stricter row limits.

 

Problem:

You’re trying to release a Journal Entry on GL40.1 but are getting a “Workflow is not enabled” error (see example screenshot below):

 

Solution:

This is a simple fix. First, on GL10.1, inquire on the company; under the Journals tab, set Workflow to No >> Click Change

Go back to GL40.1 and release the journal entry. This should clear the error. You’re done!

 

NOTE: This issue may happen when your GL10.1 form is a custom form and has customizations for the Workflow field.

If you receive this message when working with Lawson Security Administrator:

“The version of this file is not compatible with the version of Windows you’re running. Check your computer’s system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program.”        

Corrective action

The LSA tool should be able to run on both 32-bit and 64-bit systems. Take these steps to determine whether this message is related to a different operating-system issue.

  1. Restart your operating system.
  2. Log in to the computer as an Administrator > Run the LSA installer as an Administrator > Proceed with the LSA installation.
  3. If the error message occurs again, the LSA download file might be corrupted. Download the latest version of the security administration tool and repeat step 2.
  4. If you continue to get the same error, right-click the LSA installation program > Select Troubleshoot Compatibility > Click Try Recommended Setting > Click Start the Program > Proceed with the LSA installation.
  5. If the installation is still not successful, contact Infor Support.

 

Follow these steps to update a user’s DB password via Infor Lawson Cloudsuite

  1. Log in to your Concierge account and go to the Cloudsuite app (this assumes you have access; otherwise, open a ticket with Infor).
  2. Create a Service Request.
  3. Select Deployment and Request Type: SQL Manage User.
  4. Select the Database Server, and in the drop-down select Change Password.
  5. Enter the username and the new password for the user, then click Submit.

 

All done! Validate with the user. If this is something you need assistance managing, Nogalis offers a team of Lawson specialists within a single MSP services contract. Let us know if we can assist you today.

When working with MySQL, you might notice something surprising: a query that runs quickly on its own suddenly becomes slow when you try to save the results into a temporary table.

For example, let’s say you run a SELECT query that completes in about two minutes. But when you try to capture those results with:

CREATE TEMPORARY TABLE table_name

SELECT field1, field2, field3 FROM source_table;

the process hangs, forcing you to kill it after five minutes. What’s going on here?


Why It Happens

When you run CREATE TEMPORARY TABLE … SELECT, MySQL is actually doing two jobs:

  1. Running the SELECT – fetching rows from the source tables, applying filters, joins, and any transformations.
  2. Creating and populating the temporary table – defining a new schema on the fly and writing the rows into it.

That second step adds overhead. Here are the most common reasons for the slowdown:

  • Data Volume: Copying millions of rows into a new table takes time, even if the query itself executes quickly.
  • Indexes: Your source query might benefit from indexes, but inserts into the temporary table don’t—MySQL has to write all rows one by one.
  • Storage Engine: By default, temporary tables may spill to disk if they exceed memory limits, which is much slower.
  • Server Resources: CPU, memory, and I/O pressure can cause inserts into the temp table to crawl.
  • Locking/Concurrency: Other queries on the same data can introduce waits and contention.

Creating a Temporary Table in Memory

If your dataset isn’t too large, you can speed things up by creating the temporary table in memory instead of on disk. MySQL supports this via the MEMORY storage engine:

CREATE TEMPORARY TABLE temp_in_memory (

id INT,

name VARCHAR(255)

) ENGINE=MEMORY;

Then, you can insert data directly into this in-memory structure:

INSERT INTO temp_in_memory

SELECT field1, field2

FROM source_table;

Because the table lives in RAM, inserts and reads are much faster—though the tradeoff is size limits (governed by the max_heap_table_size and tmp_table_size settings).


Best Practices

If you’re seeing big slowdowns when creating temporary tables, try:

  • Reviewing and optimizing your SELECT query with proper indexing.
  • Using ENGINE=MEMORY if your temp dataset fits in RAM.
  • Monitoring resource usage during the query (SHOW PROCESSLIST, or performance schema tables).
  • Splitting large inserts into smaller batches.
  • Making sure the temp table schema only includes the columns and datatypes you really need.
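The batching suggestion above can be sketched as follows; SQLite is used here so the example runs self-contained, and the batch size and table names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source_table (field1 INTEGER, field2 TEXT)")
conn.executemany("INSERT INTO source_table VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(10_000)])
conn.execute("CREATE TEMPORARY TABLE temp_copy (field1 INTEGER, field2 TEXT)")

# Copy in fixed-size batches instead of one giant INSERT ... SELECT,
# keeping each transaction (and its locks and undo log) small.
BATCH = 2500
offset = 0
while True:
    inserted = conn.execute(
        "INSERT INTO temp_copy "
        "SELECT field1, field2 FROM source_table "
        "ORDER BY field1 LIMIT ? OFFSET ?",
        (BATCH, offset),
    ).rowcount
    conn.commit()
    if inserted < BATCH:
        break
    offset += BATCH

count = conn.execute("SELECT COUNT(*) FROM temp_copy").fetchone()[0]
print(count)  # 10000
```

On a real MySQL table you would typically batch on an indexed key range (e.g. `WHERE id BETWEEN ? AND ?`) rather than LIMIT/OFFSET, since OFFSET rescans skipped rows; the loop structure is the same.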

Takeaway

A SELECT query that runs fast doesn’t always translate to a fast CREATE TEMPORARY TABLE. The extra overhead of creating and writing the table can introduce new bottlenecks. If you only need the results for lightweight, session-specific work, an in-memory temporary table (ENGINE=MEMORY) can be a game-changer.

By understanding how MySQL handles temporary tables under the hood, you can choose the right approach and keep your queries moving fast.

Problem:

User does not have access to LBI. See screenshot below.

 

 

Resolution:

First, go to Mingle/InforOS, then go to Manage Users and search for the user.

Next, click on the arrow shown below, which will take you to the user profile.

Click on the Roles tab:

Then click on the + to add roles:

Next, search for Mingle Role for LBI and click on the +Add & Close:

Finally, click the Save icon, and you should be good!