Syufy Enterprises, a leading entertainment and leisure company whose businesses include high-end athletic clubs and spas, restaurants, golf venues, shopping centers, public markets, and drive-in theaters, has decided to deploy Infor Talent Science to build more successful teams. By utilizing Infor solutions at its VillaSport Athletic Club and Spa division, Syufy will gain access to tools that predictively link behavioral data to real business outcomes. Infor Talent Science is a cloud-based application that helps drive better business performance through hiring, developing, and retaining the right people. With it, VillaSport will be better prepared to reduce turnover among its hourly employees, improve the quality of hires, and identify career paths for both applicants and existing employees. By elevating the hiring process with data science to find the right employee for the right position, the organization can improve customer experiences while also driving membership growth across its clubs.

For Full Article, Click Here

If you change the database server that hosts your LBI data, you will need to point your LBI instance to the new server.  This is done in WebSphere.  Log into your LBI WebSphere console, and navigate to Resources > JDBC > Data Sources.  Click on each data source that needs to be updated (LawsonFS, LawsonRS, LawsonSN).  Modify the server name, click OK and Save.

If the user credentials are different for this new data source, from the data source screen go to JAAS – J2C authentication data and update the credentials there.

Save the configuration changes and synchronize the nodes (if applicable).  Go back to the Data Sources screen and test each connection.

The rise of artificial intelligence (AI) has greatly changed the way we do business. One area where AI has made a real impact is customer relationship management (CRM). An article from IT Toolbox shares 5 ways that AI is changing CRM.

  1. Increased Automation – By taking care of administrative tasks like data entry and anomaly detection, AI is making CRM more accessible to employees and less of a chore. Increased automation also helps with lead visibility, streamlining workflows and improving team productivity.
  2. Advanced Data Mining – Whereas older CRM systems gathered data without necessarily making sense of it, AI combined with CRM is finally making that data useful.
  3. Sales Process Optimization – Legacy CRMs lack personalization and aren’t very intuitive as far as understanding what drives customer sales. AI baked into CRM helps businesses optimize sales by assisting in price optimization, forecasting, up-selling and cross-selling.
  4. Improved Personalization – AI helps CRM deliver a far more personalized customer experience. This means that CRM paired with AI is starting to offer next best actions, inform users of potential customer upsells, spot customer care issues before they get out of hand, and help CRMs interface with marketing, sales and support.
  5. Enhanced Collaboration – Creativity and collaboration continue to be predominant forces in determining a company’s ability to adapt and compete. Departments can no longer afford to operate as silos — sales teams included. AI-enabled CRM is helping companies meet that challenge around breaking down silos and fostering inter-departmental collaboration.

CRM has been somewhat static for the past few years in terms of new features and functionality. With the rise of AI, that’s changing. AI-enabled CRM is reducing the pain points that used to hamper CRM use, and at the same time it is greatly improving the intelligence of the system. This translates into more actionable data and systems that are easier to use.

It is an exciting time for CRM and the businesses that rely on it.

For Full Article, Click Here

To update the database server that your Lawson instance is pointing to, you will need to modify the MICROSOFT (or ORACLE) files for each environment that you are updating.  These files can be found at %LAWDIR%/DATAAREA/MICROSOFT.  Simply change the server name for the DBSERVER value and bounce your Lawson services.

NOTE: This article assumes that your new database server utilizes the same credentials/authorization as the original database server.
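As a rough sketch, the swap can also be scripted. Assuming the MICROSOFT file holds plain key=value entries (an assumption; check your file's actual layout before using anything like this), a hypothetical helper might look like the following. The file name, key format, and server names are demo placeholders.

```python
# Hypothetical sketch: swap the DBSERVER value in a Lawson database config file.
# The demo file, key layout, and server names are placeholders; on a real system
# you would target %LAWDIR%/<DATAAREA>/MICROSOFT (or ORACLE) for each data area.
from pathlib import Path

def repoint_db_server(cfg_path: Path, old: str, new: str) -> str:
    text = cfg_path.read_text()
    cfg_path.with_suffix(".bak").write_text(text)  # keep a backup of the original
    updated = text.replace(f"DBSERVER={old}", f"DBSERVER={new}")
    cfg_path.write_text(updated)
    return updated

# Demo against a throwaway file in the current directory:
demo = Path("MICROSOFT_demo")
demo.write_text("DBNAME=PROD\nDBSERVER=OLDSQLHOST\n")
print(repoint_db_server(demo, "OLDSQLHOST", "NEWSQLHOST"))
```

On a real system you would point this at each data area's file and then bounce the Lawson services, exactly as described above.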

To point your Landmark instance to a new database server, you need to update the db.cfg file for each environment.  These files can be found at %RUNDIR%/DATAAREA/db.cfg.  Make sure you update the data source for each data area, including GEN.  Bounce the Landmark services, or reboot your server, and you are done!

An interesting read on the Trello blog recently covered the topic of time management. The article states “How you manage your time can mean the difference between feeling like you’re busy every minute of the day (but not actually getting anything done) and accomplishing everything you want in your job and your life—and still feeling like you have time to spare.” How can you make sure you’re actually managing your time efficiently? Here are some tips on how to master time management:

  • Take a Time Inventory
  • Look For Things To Automate—Both On- And Offline
  • Carve Out Time For A Productivity-Boosting Morning Routine
  • Learn To Say ‘No’ When It Counts
  • Invest Time With The Right Tools
  • Spend Your Time Wisely

We live in a world with so many distractions that it’s easy to let time fly by. With these time management “hacks”, you can take control of your schedule and make efficient use of your time.

Original post by Deanna deBara from Trello.

For Full Article, Click Here

Here at Nogalis, we perform managed services for several dozen enterprise customers. Most of our customers are either using Lawson V10 on premise or Infor CloudSuite products in the cloud. Our customers vary in their level of complexity, but almost all of them have several custom interfaces that support the operation of their businesses. Most of these interfaces are built with IPA (Infor Process Automation) or with ION, both of which are Infor-supported products. Here are a few examples:

  • Positive pay interfaces to banks
  • Invoice import interface
  • Vendor creation interface
  • Automated user provisioning
  • Employee benefits exports and imports
  • COBRA interface
  • Batch job automation
  • Invoice or Purchase Order Approval interfaces
  • Journal entry imports

And many more.

Many of these interfaces were designed and developed years ago and have been modified several times since. Unfortunately, the same cannot be said about the documentation that was once written for them, if any ever existed. In an upcoming webinar and subsequent article, I plan to discuss how to develop accurate, useful, and easy-to-maintain documentation. But in this article, I want to focus on the reasons why we need these documents, because without knowing why we do something, we’re not likely to do it right. The reasons below serve as guidelines for our documentation:

Reason 1 – Supporting the interfaces. This is the primary reason for creating good documentation. The goal of any documentation should be that anyone can read it from start to finish and be able to support the existing process when it breaks. Therefore, one of the first things we do for our new managed service customers is to create detailed documentation of all their interfaces on our DOCR documentation portal and give them access to it. You can see some examples here. What’s nice about storing these documents on a web portal is that there is a central place for keeping them updated that everyone can contribute to. For support reasons, we make sure that we have a troubleshooting section and a recovery section in each of our interface documents.

Reason 2 – Updates and enhancements. We’re routinely asked to update and enhance existing interfaces. While we can dig through the entire code of the interface to find out what every piece does, it is always helpful if some documentation exists that has an overview of the different components of the interface.

Reason 3 – Change Control. As we make changes to interfaces over the years, it is important to document these changes for change control purposes. Having a web-based documentation portal makes it easy to do this as it is the only version of the document that exists, and it is always updated with the latest changes.

Reason 4 – Application upgrades. As we upgrade our large enterprise applications, we must always study the impact of the upgrade on any custom interfaces that we have. The only way to do this quickly is to review the documents, focusing specifically on any sections covering dependencies and general data flow. Having proper documentation during an upgrade process can make the difference between a one-month upgrade and a six-month upgrade.

Reason 5 – Training and new hires. We pride ourselves on our one-to-one client-to-resource ratio in our managed service group. That means that for every new managed service customer, we add at least one new member to our team. This new team member goes through rigorous training that includes reviewing all existing client documentation. This is an indispensable tool for our team as well as any client new hires.

If you need help creating your interface documentation, or to subscribe to our DOCR documentation portal, please contact us here.

On June 17, 2019, Infor launched its newest addition to the hospitality sector: Infor Hospitality Price Optimizer (HPO). Infor HPO is a comprehensive, built-for-the-cloud solution that delivers pricing decisions for hoteliers. It will help hoteliers make better decisions more quickly, confidently price the rooms they have left to sell, and increase bottom-line profits. Infor HPO provides a strategic price along with the distribution channels on which to publish it, factoring in the distribution costs of each channel, which helps boost revenues and profits. Infor HPO also provides simulators to predict the impact that a price change on a given day will have on demand and expected revenues. The new product can also identify potential competitors from a customer’s perspective. Built on the backbone of Infor OS, this launch further strengthens Infor’s global platform of cloud solutions for the hospitality industry.

You can find more information on Infor HPO here.

For Full Article, Click Here

Whether you are refreshing your test LBI environment or moving all your data to a new database server, you may eventually need to migrate your LBI report data. This is a relatively simple process, provided both LBI instances are at the exact same version and service pack level.

First, back up your LBI databases on the source server and restore to the destination server (LawsonFS, LawsonRS, LawsonSN).

If you are migrating data for one LBI instance, you just need to point your WebSphere data sources to the destination server.

If you are migrating data for a new LBI instance, or for your test environment, you’ll need to update all the services and references to the old LBI instance.  In the LawsonFS database’s ENPENTRYATTR table, search the ATTRSTRINGVALUE column for the old server name and replace it with the new one.  For example:

UPDATE ENPENTRYATTR
SET ATTRSTRINGVALUE = REPLACE(ATTRSTRINGVALUE, 'source-server', 'destination-server')
WHERE ATTRSTRINGVALUE LIKE '%source-server%'

After you update those strings, you will need to rerun the EFS, ERS, and Smart Notification (LSN) install validators to set the correct URLs.

  • http(s)://lbiserver.company.com:port/efs/installvalidator.jsp
  • http(s)://lbiserver.company.com:port/ers/installvalidator.jsp
  • http(s)://lbiserver.company.com:port/lsn/admin/installvalidator.jsp

Next, log into LBI and go to Tools > Services.  Click on every service definition to look for the source server name, and update with the destination server name.

Make sure your data sources are pointing to the proper ODBC DSNs, and/or add new ODBC connections.  Test and verify all your reports.

If you’re reading this, you must have already become curious about serverless computing or perhaps you just read our other article on the topic. There is no doubt that the future of computing will not have any traditional servers in it. In fact, I would venture to guess that by 2025, nearly every new development project will be a serverless project. This is not very hard to believe given that we at Nogalis have been developing all our enterprise applications for the past year using this paradigm and have yet to come up with a real reason to spin up a server.

There are a few things you need to understand about “Serverless” before you can start your project.

  1. Firstly, your current server-bound applications cannot run serverless without going back to the drawing board. In an ideal serverless application, each request stands on its own: it processes within a single function, verifies its caller’s authentication and permissions, runs to completion, and sets any state as needed. Your current application doesn’t do that. Your server is likely holding on to authenticated user sessions so that it can handle each user’s next request. Without getting any more technical, the important thing to understand is that a “Serverless” application has to be designed and developed that way; it cannot be magically ported over. At least not today in 2019.
  2. Most, if not all, enterprise applications deal with a database. AWS has made some great strides in the area of “Serverless” databases and at the current time (2019) offers two database options that are serverless:
    • Amazon DynamoDB – Amazon’s own NoSQL database service
    • Amazon Aurora Serverless – an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible edition), where the database will automatically start up, shut down, and scale capacity up or down based on your application’s needs.

It is obvious that if Microsoft, Oracle, and IBM want to remain competitive in the DB space, they will also have to offer serverless versions of their database products or go the way of the mainframe. In the meantime, you can build a serverless application that connects to a server-bound database to store its data. We won’t judge you.

  3. In a serverless environment, you don’t have any local storage. Because you don’t have a, ummm, server. So you will need to figure out storage as part of your design. No more writing log files to d:\temp. This is one of the biggest reasons your current server-bound application can’t just go without a server. Luckily, AWS offers several API-enabled storage solutions to deal with this limitation. Our favorite is still S3 because of its ubiquitous edge availability, its incredible speed, its flexible use cases, and its many other advanced features.
  4. Last on my list is the method for accessing your compute logic. For this, AWS provides its API Gateway, which can trigger any of your functions with ease.
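To make point 1 concrete, here is a minimal sketch of a self-contained, stateless request handler in the AWS Lambda style. The event shape, hard-coded token check, and response format are illustrative assumptions for the sketch, not any particular service’s API.

```python
import json

def handler(event, context=None):
    # Each invocation stands alone: verify the caller, do the work, return.
    # No session state survives between requests.
    token = (event.get("headers") or {}).get("authorization")
    if token != "demo-token":  # stand-in for real token/permission verification
        return {"statusCode": 401, "body": json.dumps({"error": "unauthorized"})}
    payload = json.loads(event.get("body") or "{}")
    name = payload.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello, {name}"})}

# Demo invocation (in a real deployment an API gateway would deliver the event):
print(handler({"headers": {"authorization": "demo-token"}, "body": '{"name": "serverless"}'}))
```

In production, an API gateway routes each request to the function, and real token verification (for example, against an identity provider) replaces the hard-coded check.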

Of course, the items above are not the entire story. For instance, AWS SNS is a fully managed messaging service that you can use for messaging. CloudWatch will help you with event monitoring and debugging. A whole host of applications exist that can help with nearly everything else. In the past two years of developing serverless applications, we have yet to have a need that we couldn’t eventually resolve with available AWS services. I say eventually because our developer brains were accustomed to thinking of every problem in a very server-centric way. Once you stop relying on the server to solve all your development challenges, you’ll think of some innovative ways to develop your applications that will surprise you.

If you’re considering developing a serverless enterprise application, we’d love to help. Please use the contact us page to make an appointment with someone on our team to discuss your serverless project.