On June 17, 2019, Infor launched its newest addition to the hospitality sector – Infor Hospitality Price Optimizer (HPO). Infor HPO is a comprehensive, built-for-the-cloud solution that delivers pricing decisions for hoteliers. It will help hoteliers make better decisions more quickly, confidently price the rooms they have left to sell, and increase bottom-line profits. Infor HPO can recommend a strategic price and the distribution channel on which to publish it, factoring in the distribution costs of each channel, which helps boost revenues and profits. Infor HPO also provides simulators to predict the impact that a price change on a given day will have on demand and expected revenues. The new product also identifies potential competitors from a customer’s perspective. The launch of this new product, built on the backbone of Infor OS, further strengthens Infor’s global platform of cloud solutions for the hospitality industry.

You can find more information on Infor HPO here.

 

For Full Article, Click Here

Whether you are refreshing your test LBI environment or moving all your data to a new database server, you may eventually need to migrate your report data for LBI. This is a relatively simple process, provided the LBI instances using the data are the exact same version and service pack level.

First, back up your LBI databases on the source server and restore to the destination server (LawsonFS, LawsonRS, LawsonSN).

If you are migrating data for one LBI instance, you just need to point your WebSphere data sources to the destination server.

If you are migrating data for a new LBI instance, or for your test environment, you’ll need to update all the services and references to the old LBI instance.  In the LawsonFS database, ENPENTRYATTR table, you’ll need to search the ATTRSTRINGVALUE column for your old server name, and replace it with the new server name.  For example,

UPDATE ENPENTRYATTR
SET ATTRSTRINGVALUE = REPLACE(ATTRSTRINGVALUE, 'source-server', 'destination-server')
WHERE ATTRSTRINGVALUE LIKE '%source-server%'
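
Before running the update, it can help to preview which rows still reference the old server name. Below is a minimal sketch in Python using pyodbc; the DSN, credentials, and server names are placeholders for illustration, so adjust them for your environment.

import pyodbc

# Placeholder connection details for the LawsonFS database on the destination server
conn = pyodbc.connect("DSN=LawsonFS;UID=lbi_user;PWD=secret")
cursor = conn.cursor()

# List every attribute value that still contains the old LBI server name
cursor.execute(
    "SELECT ATTRSTRINGVALUE FROM ENPENTRYATTR WHERE ATTRSTRINGVALUE LIKE ?",
    ("%source-server%",),
)
for row in cursor.fetchall():
    print(row.ATTRSTRINGVALUE)

conn.close()

Running the same query again after the UPDATE should return no rows.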

 

After you update those strings, you will need to redo your EFS and ERS install validators to set the correct URL (a quick reachability check follows the list below):

  • http(s)://lbiserver.company.com:port/efs/installvalidator.jsp
  • http(s)://lbiserver.company.com:port/ers/installvalidator.jsp
  • http(s)://lbiserver.company.com:port/lsn/admin/installvalidator.jsp
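
These validators need to be submitted in a browser, but if you first want to confirm that each page is reachable on the new server, a small script can loop over the URLs. Here is a minimal sketch using Python’s requests library; the host name and port are placeholders.

import requests

# Placeholder base URL for the destination LBI server
base = "https://lbiserver.company.com:9443"

paths = [
    "/efs/installvalidator.jsp",
    "/ers/installvalidator.jsp",
    "/lsn/admin/installvalidator.jsp",
]

for path in paths:
    # verify=False is only needed if the server uses a self-signed certificate
    resp = requests.get(base + path, verify=False, timeout=30)
    print(path, resp.status_code)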

Next, log into LBI and go to Tools > Services.  Click on every service definition to look for the source server name, and update with the destination server name.

Make sure your data sources are pointing to the proper ODBC DSNs, and/or add new ODBC connections.  Test and verify all your reports.
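
If you have several DSNs to check, a quick connectivity test can save time before you start opening reports. Here is a minimal sketch with pyodbc; the DSN names and credentials are placeholders.

import pyodbc

# Placeholder ODBC DSN names used by your LBI reports
dsns = ["LawsonGEN", "LawsonPROD"]

for dsn in dsns:
    try:
        conn = pyodbc.connect(f"DSN={dsn};UID=report_user;PWD=secret", timeout=10)
        conn.cursor().execute("SELECT 1").fetchone()
        print(dsn, "OK")
        conn.close()
    except pyodbc.Error as err:
        print(dsn, "FAILED:", err)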

If you’re reading this, you must have already become curious about serverless computing or perhaps you just read our other article on the topic. There is no doubt that the future of computing will not have any traditional servers in it. In fact, I would venture to guess that by 2025, nearly every new development project will be a serverless project. This is not very hard to believe given that we at Nogalis have been developing all our enterprise applications for the past year using this paradigm and have yet to come up with a real reason to spin up a server.

There are a few things you need to understand about “Serverless” before you can start your project.

  1. Firstly, your current server-bound applications cannot run serverless without going back to the drawing board. In an ideal Serverless application, each request needs to stand on its own. Each request needs to process within a single function, verify itself and its caller’s authentication and permissions, run to completion, and persist any state as needed. Your current application doesn’t do that. Your server is likely holding on to user sessions that have been authenticated and running so that it can handle the user’s next request. Without getting any more technical, the important thing to understand is that a “Serverless” application has to be designed and developed that way; it cannot be ported over magically. At least not today in 2019. (A minimal handler sketch follows this list.)
  2. Most, if not all, enterprise applications deal with a database. AWS has made some great strides in the area of “Serverless” databases and at the current time (2019) offers two database options that are serverless:
    • Amazon DynamoDB – Amazon’s own NoSQL database service
    • Amazon Aurora Serverless – an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible edition), where the database will automatically start up, shut down, and scale capacity up or down based on your application’s needs.

It is obvious that if Microsoft, Oracle, and IBM want to remain competitive in the DB space, they will also have to offer Serverless versions of their database products or go the way of the mainframe. In the meantime, you can build a serverless application that connects to a server-bound database to store its data. We won’t judge you.

  3. In a serverless environment, you don’t have any local storage. Because you don’t have a, ummm, server. So, you will need to figure out your storage as a part of your design. No more writing log files to d:\temp. This is one of the biggest reasons your current server-bound application can’t just go without a server. Luckily, AWS offers several API-enabled storage solutions to deal with this limitation. Our favorite is still S3 because of its ubiquitous edge availability, its incredible speed, its flexible use cases, and its many other advanced features.
  4. Last on my list is the method for accessing your compute logic. For this, AWS provides their API Gateway, which can trigger any of your functions with ease.
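
To make the points above concrete, here is a minimal sketch of a stateless Lambda handler sitting behind API Gateway, written in Python with boto3. It is illustrative only: the token check is a stand-in for real authentication, and the table and bucket names are assumptions, not part of any AWS API.

import json
import boto3

# Placeholder resource names for illustration
dynamodb = boto3.resource("dynamodb")
orders_table = dynamodb.Table("orders")      # hypothetical DynamoDB table
s3 = boto3.client("s3")
LOG_BUCKET = "my-app-logs"                   # hypothetical S3 bucket, standing in for d:\temp

def handler(event, context):
    # 1. Each request stands on its own: verify the caller on every invocation
    token = (event.get("headers") or {}).get("Authorization")
    if not _is_valid(token):                 # placeholder check, not a real auth scheme
        return {"statusCode": 401, "body": "Unauthorized"}

    # 2. Persist state in a serverless database, not in server memory
    order = json.loads(event.get("body") or "{}")
    orders_table.put_item(Item=order)

    # 3. No local disk: write the "log file" to S3 instead
    s3.put_object(
        Bucket=LOG_BUCKET,
        Key=f"logs/{context.aws_request_id}.json",
        Body=json.dumps(order),
    )

    # 4. Run to completion and return; API Gateway relays the response
    return {"statusCode": 200, "body": json.dumps({"ok": True})}

def _is_valid(token):
    # Stand-in for a real check (for example, validating a JWT)
    return bool(token)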

Of course, the items above are not the entire story. For instance, AWS SNS is a fully managed pub/sub messaging service. CloudWatch will help you with event monitoring and debugging. A whole host of services exist that can help with nearly everything else. In the past two years of developing serverless applications, we have yet to hit a need that we couldn’t eventually resolve with the services AWS makes available. I say eventually because our developer brains were accustomed to thinking of every problem in a very server-centric way. Once you stop relying on the server to solve all your development challenges, you’ll think of some innovative ways to develop your applications that will surprise you.
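
As one small example, publishing a notification from a function to an SNS topic is a single boto3 call; the topic ARN below is a placeholder you would replace with your own.

import boto3

sns = boto3.client("sns")

# Placeholder topic ARN; create the topic in SNS first
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:app-events",
    Subject="Order received",
    Message="Order 42 was stored successfully.",
)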

If you’re considering developing a serverless enterprise application, we’d love to help. Please use the contact us page to make an appointment with someone on our team to discuss your serverless project.

Until recent years, we treated our servers like pets. We gave them names and assigned a high value to their health and uptime. If a server went down, we did everything in our power to get it back up and running. With the advent of virtualization, the term server became more synonymous with VM (Virtual Machine), and the fact that it was running didn’t carry as much significance, simply because we could spin up many more like it within minutes. But that was still a problem: we had to spin up many more just like it. We had gone from treating servers like pets to treating them more like cattle. It appeared the dream of virtualization was realized, as we didn’t need to worry about one specific server anymore. If web-007 failed, web-001 through web-006 were still around to handle the traffic, and no one would even notice the difference while a new instance was generated. But even in this new virtual reality, the virtual environment (our cattle) had to be up and running all the time, consuming energy and demanding attention.

This was a problem that didn’t seem to have a solution. It seemed that if you needed to compute a bit of logic, you would have to pass that request to a service that could process it for you and give you a result. So, the cattle were as efficient as we could get for a long while. But realistically, each bit of computing request is just a tiny little request. Surely, we don’t need an entire virtual farm always on standby to fulfill requests that are not even being called on all the time. What if we moved from the cattle model to the bacteria model? Imagine an infinite ocean of tiny little computing units (our bacteria) that would instantly rush to our call whenever we needed them. Now imagine if that ocean was shared by all our applications.

This is the serverless dream. In the cloud ecosystem, we call these container services, and major cloud providers like Microsoft Azure and Amazon Web Services offer them as an all-you-can-eat ocean of compute that you can tap into whenever, and as much as, you desire. Imagine never having to scale your server infrastructure. Imagine only getting charged for the infinitesimally small amounts of time that the CPU is processing your request. That is what services like AWS Lambda provide. AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. You simply write all the logic of your code within a Lambda function, link the function to an API gateway, and you’re ready to invoke your logic from anywhere, on any device, at any time, and as many times as you like. It is easy to see that with this new model of computing, your data center will soon become a thing of the past, and with it the skillset you have developed around the data center model.
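
From the caller’s side, invoking that logic is nothing more than an HTTPS request to the gateway URL. Here is a minimal sketch using Python’s requests library; the endpoint URL is a placeholder for whatever API Gateway assigns to your function.

import requests

# Placeholder API Gateway endpoint that triggers the Lambda function
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/hello"

resp = requests.post(url, json={"name": "world"}, timeout=10)
print(resp.status_code, resp.text)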

So, what will it take to go fully serverless? Can you run your applications on this new computing platform? How do you develop applications in a way that can take advantage of these new technologies? Subscribe to our newsletter to be notified of our upcoming articles that will address all these questions. If you have a serverless development project that you would like to talk to us about, you can contact us directly here. Nogalis has been developing serverless applications using AWS Lambda since 2017 and we’d love to discuss your upcoming projects.

The Nogalis team is always busy with multiple projects, and one way we stay on top of our productivity is by using Trello. Trello is a web-based list-making application shared by everyone on your team to keep them up to date on all tasks, progress, and deadlines. This helps our team stay current with all our projects. Trello also has a blog that features tips on increasing productivity, workflows, and current events and news. We’ll be sharing some of those tips on our own blog from time to time. Take a look at the first tip below!

 

Overcoming Defense Mechanisms

Defense mechanisms were first described by the noted psychologist Sigmund Freud. At their core, defense mechanisms are self-serving. We subconsciously use them to protect ourselves from negative thoughts or feelings such as anxiety or guilt. How do you stop your own defense mechanisms from becoming a bigger problem, especially in the workplace? There are a few things you can do. The first step in making any change is to recognize the problem. Analyze your thoughts, emotions, reactions, and exchanges at work to figure out which defense mechanisms you’re using as a crutch. Following this, you can practice compartmentalization by segregating different thoughts or portions of your life (i.e. shutting out any personal problems while you’re at work). Next comes projection: assigning your own thoughts and emotions to others. Then practice undoing by attempting to counteract a negative behavior with positive ones. For example, if your first line of defense is to say something negative to a colleague, then backpedal and instead compliment or say something positive about the given situation. We all have defense mechanisms, and we all have the capability to change (or at least not use them as often), especially in the workplace. By eliminating such defensive habits, we can get more done at the office and increase productivity.

Original post by Kat Boogaard from Trello.

 

For Full Article, Click Here

 

Stay tuned for more tips from Trello!

 

You might come across the need to update a service or identity configuration in ssoconfig (LSF), especially if you have implemented AD FS and need to update your usernames or service URLs. A quick way to update an ssoconfig service is to load an XML file containing the updates. Create and save your XML file, set your environment variables on LSF, then run the command ssoconfig -l <password> <filepath>.

Here is the file format for a service and an identity (make sure OVERRIDE is set to true if you are doing an update):

 

<?xml version="1.0" encoding="ISO-8859-1"?>
<BATCH_LOAD FORMAT="" OVERRIDE="true">
  <SERVICE>
    <HasCredential>TRUE</HasCredential>
    <LoginProcedure>Form based</LoginProcedure>
    <ID>DSSOIBITEST</ID>
    <SvcEntryAttrList>user,password</SvcEntryAttrList>
    <LOGINSCHEME NAME="Form">
      <PROTOASSERT>Use HTTPS always</PROTOASSERT>
      <HTTPURL>https://server.company.com:80/sso/SSOServlet</HTTPURL>
      <HTTPSURL>https://server.company.com:443/sso/SSOServlet</HTTPSURL>
      <PRIMARYTARGETLOOKUP>Verify passwords in Lawson Security</PRIMARYTARGETLOOKUP>
      <LOGIN_RDN/>
      <NAMING_ATTR>cn</NAMING_ATTR>
      <USERNAMEFIELD>_ssoUser</USERNAMEFIELD>
      <PASSWDFIELD>_ssoPass</PASSWDFIELD>
      <SERVICEURL>https://server.company.com:443/sso/SSOServlet</SERVICEURL>
      <LOGIN_SUBMIT_METHOD>POST</LOGIN_SUBMIT_METHOD>
    </LOGINSCHEME>
    <IdentityAttrList>user</IdentityAttrList>
    <CredentialAttrList>PASSWORD</CredentialAttrList>
  </SERVICE>
</BATCH_LOAD>

 

<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<BATCH_LOAD FORMAT="Opaque" OVERRIDE="TRUE">
  <IDENTITY SERVICENAME="SSOP">
    <RDID>lawson</RDID>
    <USER><![CDATA[[email protected]]]></USER>
  </IDENTITY>
</BATCH_LOAD>
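
If you have many identities to load, you can generate the batch file instead of writing it by hand. Here is a minimal sketch that builds the same IDENTITY structure from a list of users; the RDID, user values, and output file name are placeholders, so review the generated XML before loading it with ssoconfig.

from xml.sax.saxutils import escape

# Placeholder list of (RDID, UPN) pairs to load against the SSOP service
users = [
    ("lawson", "[email protected]"),
    ("lawson", "[email protected]"),
]

entries = []
for rdid, upn in users:
    entries.append(
        '  <IDENTITY SERVICENAME="SSOP">\n'
        f"    <RDID>{escape(rdid)}</RDID>\n"
        f"    <USER><![CDATA[{upn}]]></USER>\n"
        "  </IDENTITY>"
    )

batch = (
    '<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>\n'
    '<BATCH_LOAD FORMAT="Opaque" OVERRIDE="TRUE">\n'
    + "\n".join(entries)
    + "\n</BATCH_LOAD>\n"
)

# Placeholder output path; load with: ssoconfig -l <password> identity_load.xml
with open("identity_load.xml", "w", encoding="ISO-8859-1") as f:
    f.write(batch)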

 

To convert LBI to use https, the first step is to make sure that you have valid PKCS 12 certificates installed in the Personal and Trusted Root stores on your LBI server. Export your certificate (or have your system admin do it for you) with the public key and private key, and with the full certificate chain.  During the export, provide a password for the certs.

In WebSphere on your LBI server, go to Security > SSL certificate and key management.  Select Key stores and certificates > NodeDefaultKeyStore > Personal certificates. Replace the default certificate with the cert that you just exported. Do the same for the CellDefaultKeyStore (if applicable). Next, under Key stores and certificates again, select the KeyStore and TrustStore, and select “Exchange Signers…”

Add your new certificate from the KeyStore to the TrustStore and Apply. Save the changes. There is no need to restart your application server yet; we will do that in a bit.

Make sure that your Virtual Hosts contain an alias for the secure port you plan to use. Note that this port must be the WC_defaulthost_secure port under Ports on your Application Server.

In LSF, update your DSP service for LBI to use the new service URL. The service should be set to “Use HTTPS always” and the new service URL should be “https://lbiserver.company.com:port/sso/SSOServlet”.

Restart your LSF application server and your LBI application server.

Open your LBI install validator with https://lbiserver.company.com:secureport/efs/InstallValidator and make sure the system URL is set to the new secure URL. Submit the new URL. If the certificates are not valid, you will receive an error message saying so. Otherwise, there should be no failed tests.
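
You can also confirm from the command line that the secure port is presenting the certificate you expect. Here is a minimal sketch using Python’s standard ssl module; the host and port are placeholders for your LBI server and its WC_defaulthost_secure port.

import socket
import ssl

# Placeholder LBI host and secure port
host, port = "lbiserver.company.com", 9443

context = ssl.create_default_context()
with socket.create_connection((host, port), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("Subject:", cert.get("subject"))
        print("Issuer:", cert.get("issuer"))
        print("Expires:", cert.get("notAfter"))

If the certificate chain is not trusted by the machine running the script, the handshake itself will fail, which is also useful information.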

As you might have heard, we’ve been working on a fully cloud-based vendor self service solution for about a year now. We have worked with several customers to create a solution that is simple, intuitive, and brings instant value to any organization dealing with vendors and suppliers. Join us on Thursday, June 6th (9am PST) as we do a walk-through and a public Q&A. The webinar will feature the following modules and functionality:

  • Vendor OnBoarding
  • Vendor Check Requests and Invoice Submissions
  • Purchase Requests
  • Routing, Approvals, Document Management, Audit, Integration, and much more

Infor’s newest partnership is with Destination XL Group (DXLG), the industry leader in men’s big and tall apparel. This partnership allows DXLG to increase their market share and top-line sales, improve customer segmentation, and drive state-of-the-art marketing activities. Infor Alliance partner Three Deep Marketing will work alongside DXLG to leverage Infor’s rich breadth of customer engagement solutions to deepen relationships with existing and new customers by better understanding preferences around promotions, pricing and assortment. DXLG will implement Infor CloudSuite CRM, Infor Marketing Resource Management (MRM), Infor Omni-channel Campaign Management (OCM), and Infor Loyalty powered by CrowdTwist to provide relevant communications across all channels. “With Infor’s end-to-end customer engagement solutions, we can understand our customers better, produce rich and deep customer profiles and drive incredibly smart segmentation to connect and engage with our customers in a more meaningful and targeted way,” said Jim Davey, CMO of Destination XL Group. Infor’s Retail division now supports more than 2,500 global fashion, retail, and grocery brands that work to modernize operations by taking advantage of the latest consumer and business technologies – mobile, social, science and cloud.

 

For Full Article, Click Here

When you configure LSF for ADFS, you will need to make some changes to your LBI configuration so that users will be able to access LBI with the userPrincipalName.

The first thing you need to do is ensure that you have a user in Lawson security where RMID = SSOP = UPN (userPrincipalName).  The RM User that is used to search LSF for LBI users must have an account where RMID and SSOP match.  It is recommended that you have a new AD user created for this purpose (such as lbirmadmin).

Add the new user to Lawson, ensuring that their ID and SSOP values both use UPN.  Also make sure the new user is in the appropriate LBI groups for LBI access.

The next change will take place in the sysconfig.xml file located in <LBI install directory>/FrameworkServices/conf.  The ssoRMUserid should be the UPN of your LBI user mentioned above.  After you make these changes, restart the application server, clear the IOS cache in Lawson, and try logging into LBI.
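
If you prefer to script the sysconfig.xml change, a plain text substitution avoids making assumptions about the file’s exact layout. Here is a minimal sketch that swaps the old ssoRMUserid value for the new UPN and keeps a backup copy; the path and both values are placeholders, so verify the result before restarting the application server.

import shutil

# Placeholder path and values
conf = r"D:\LBI\FrameworkServices\conf\sysconfig.xml"
old_value = "oldrmuser"                      # current ssoRMUserid value (placeholder)
new_value = "[email protected]"        # UPN of the new RM user (placeholder)

shutil.copy2(conf, conf + ".bak")            # keep a backup copy

with open(conf, "rb") as f:
    data = f.read()

with open(conf, "wb") as f:
    f.write(data.replace(old_value.encode(), new_value.encode()))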