Warehouse Management Systems (WMS) have become increasingly sophisticated in recent years and now cloud computing is set to take them to the next level.
A great WMS gives you a centralised view of the movement of your goods and cloud computing is ideal for this because it allows all the elements of your warehouse operation to be seamlessly integrated. Holding all your information on the internet, rather than at a physical location, means you’re less likely to be held back by a lack of computing power at a particular site. You don’t need to worry about on-site IT support either because you’ll have experts you can call on, as part of your service level agreement.
But the main reason cloud computing is gaining traction is that it gives businesses the kind of control essential to staying relevant in the digital age. Here’s how:
Happier customers
One of the biggest sources of pressure for warehouses nowadays is rising customer expectations. Lead times for deliveries are getting shorter, to the extent that some retailers are even offering same-day delivery. Customers also expect to be able to receive their deliveries from a range of locations that are convenient for them. Meeting such demands with traditional warehouse management systems can be stressful, but with cloud computing you can rise to the challenge.
Because an online-based WMS allows you to get an overview of your entire system, it’s easier to find the stock you need. This is particularly useful if you have multiple warehouses. The sooner you can pick the products you need, the sooner you can send your customer a dispatch confirmation email, which helps build your reputation for reliability and efficiency.
A cloud-based WMS also allows you to track your inventory more easily and make sure you have the stock necessary to fulfil your customers’ orders. If you do experience unforeseen demand for certain products, cloud computing makes it easier to respond because you can quickly communicate with your suppliers and staff to get the stock you need. Even when staff are on the warehouse floor, they can use a mobile device to access the same information you’re seeing in the cloud.
When everyone can see the same information, you are also less likely to make errors such as duplicating deliveries.
A competitive edge
If you can’t meet your customers’ expectations, chances are they’ll simply move on to someone who can. That’s one reason why a web-based WMS can be critical to giving you an edge over your competitors.
Many businesses now operate on a multi-channel level, which requires making complex decisions about the most economical and efficient way to fulfil orders. A cloud-based WMS makes it possible to base such decisions on evidence because it not only brings together all the data you need, it also presents it in a way that makes it easier to interpret. Running an efficient multi-channel business will make you a true player in the digital age.
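To make that concrete, here’s a rough Python sketch (with made-up SKUs, sites and shipping costs) of the kind of evidence-based decision a cloud-based WMS lets you automate: routing an order to the cheapest warehouse that can fulfil every line in full.

```python
from dataclasses import dataclass

@dataclass
class Warehouse:
    name: str
    stock: dict            # SKU -> units on hand (hypothetical data)
    shipping_cost: float   # cost to ship this order from the site

def choose_warehouse(order, warehouses):
    """Return the cheapest warehouse holding enough stock for every order line."""
    candidates = [
        w for w in warehouses
        if all(w.stock.get(sku, 0) >= qty for sku, qty in order.items())
    ]
    return min(candidates, key=lambda w: w.shipping_cost, default=None)

order = {"SKU-1001": 2, "SKU-2002": 1}
warehouses = [
    Warehouse("North", {"SKU-1001": 5, "SKU-2002": 0}, 4.50),
    Warehouse("South", {"SKU-1001": 9, "SKU-2002": 3}, 6.20),
]
best = choose_warehouse(order, warehouses)
print(best.name if best else "No single site can fulfil this order")  # -> South
```

In a real deployment the stock and cost figures would come from the WMS itself; the point is simply that having all the data in one place makes the routing decision a calculation rather than a guess.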
Cloud computing can help make you more efficient in every aspect of your warehouse management, from ordering to storing and shipping. This can make a huge difference when you’re running a start-up, for instance, because it helps free up time for you to concentrate on the numerous other tasks you’re likely to be juggling.
Long-term sustainable growth
If long-term sustainable growth is your goal, then investing in a cloud-based WMS is likely to pay dividends.
You don’t have to worry about the cost of switching over to cloud computing because there are no upfront costs; you pay via a monthly subscription instead.
You’ll also save money because having more control over your inventory means you’ll be less likely to order surplus stock. This means you’ll have money to spend on other areas of the business, like infrastructure.
If you’re looking to grow your business, then having access to the latest and best technology can make a real difference. If you switch to a cloud-based WMS you’ll always have the latest technology because your subscription will pay for automatic system upgrades too.
Growing your business is also much easier when you have the support of a motivated and happy workforce. A cloud-based WMS is likely to help your warehouse employees enjoy their jobs more because it makes them less stressful. The system will take some of the pressure off them by automatically dealing with back orders, for instance.
Automating painstaking or repetitive tasks, like manually counting stock, will allow them to concentrate on the more satisfying aspects of their jobs. It also means errors will occur less frequently and you’ll be able to operate more efficiently.
Conclusion
Switching to a cloud-based WMS can transform the way your entire business operates, enabling you to stay competitive and ultimately driving up the profitability of your business.
Andy Richley

Tags: Cloud Computing, Industries, Manufacturing
In the past few months, two major issues have brought into question the relationship between in-house IT staff and external technology consultants. The WannaCry attack, which impacted much of the NHS, and the outage at BA could both have been avoided with more effective partnerships that set a clear division of labour and play to the respective strengths of external consultants and in-house staff.
Traditionally, many firms have taken one of two approaches to the provision of IT and technology: either a ‘keep everything inside’ approach or an ‘outsource the lot’ approach.
In the former, internal IT staff take complete responsibility for the tech provision of a company and manage everything on their own. In the latter, the firm completely outsources its IT provision to an external service provider and, in effect, forgets about it (until something goes wrong).
Both these approaches are inherently flawed.
Keeping everything in-house tends to isolate a firm’s tech staff from the latest thinking, developments and approaches to technology. It is next to impossible for a modern company to keep at the cutting edge of technological development without external support. Keeping all provision in-house simply means your competitors will outstrip your tech offering.
Equally, the total outsource model will not deliver for a firm. Technology service providers and consultants are experts in their field; they bring together the best of new developments and cross-sector expertise, with experience of developing solutions for firms in multiple sectors. However, what outside consultants are weaker at is understanding the unique requirements and business strategy of a given firm. External tech consultants are experts in the world of technology, the kings of ‘planet tech’; they are not experts in a firm’s business strategy and only visitors to ‘planet company’.
So instead, the most efficient model, which provides the best results, is a partnership between a strong, but cost efficient, internal IT department and external consultants and service providers with best-in-the-business technological knowledge.
The role of an in-house IT department should be to bridge technology and business strategy, to integrate a company’s tech into the business objectives of that firm. It is here that a strong relationship between in-house IT and external contractors can provide the most value to a company. The cross-sector expertise and world-leading staff that external technology consultancies can bring to bear on any given project will help ensure that the end result of a tech project is world class. At the same time, in-house IT professionals can provide the necessary oversight to ensure that the technology being developed has a real use case for that company.
In the modern enterprise environment, technology needs to be agile and move quickly. Challenges appear and need to be solved with speed. To do so firms have to utilise the best available solutions and have deep expertise in multiple tech areas.
Scaling up to face challenges is often not achievable with only in-house staff. Firms’ tech requirements fluctuate, and it is next to impossible to garner the required expertise at a moment’s notice by hiring staff. Rather, choosing the best technology partner or partners from external companies allows firms to ensure they have the best people from around the world working on their technology for as long as needed, and not a single day more.
External firms, often with offices in multiple global locations, are able to scale a team up or down depending on a client’s requirements. It is even possible, though not desirable for the external technology partner, for the relationship to start and stop as required.
An effective relationship between an internal IT department that is well versed in modern technology but totally focused on business objectives, and an external, reliable technology partner that brings world-leading technology provision to bear, is the most effective model for a firm’s technology. Working in partnership allows a firm to keep its tech focused completely on helping achieve business objectives, without sacrificing the deep and rich expertise in cutting-edge tech that is necessary.
In-house tech leaders know what their company needs; external consultants know the tech to get there. Together, they can ensure that technology is successful for a firm.
Eugene Veselov

Tags: IT Transformation, Security, Technology
Cybercriminals are increasingly targeting enterprise through email scams that pretend to be from senior executives.
These and similar phishing exploits are estimated to have cost U.S. companies $3.1 billion in less than two and a half years.
But the way we work is changing.
Last month worldwide internet usage on mobile devices surpassed the desktop for the first time.
This is reflected in the fact that group collaboration and messaging apps for mobile now rival email as the dominant form of communication in the workplace. A study by the Nielsen group found that 97% of workers use some form of team messaging.
Taking this into account, it should come as little surprise that fraudsters are aiming their latest phishing campaigns at mobile devices.
Since the start of 2016, Apple iMessage, Facebook Messenger and WhatsApp users have all been targeted.
In April Apple iPhone and iPad users were sent fake messages with the aim of tricking them into handing over their iCloud login details. Armed with this information the scammers would proceed to access all personal information stored in the cloud.
More recently Facebook Messenger users were warned of a scam that could steal passwords and hijack accounts.
But it is brand leader WhatsApp that has been repeatedly under fire.
The attacks started in January with a fake message about a missed audio memo; WhatsApp’s relatively poor security and popularity with millions of users make it a favourite with fraudsters.
Other scams have included fake invitations – such as WhatsApp Gold and video calling – to download new versions of the app and bogus promotions like the Emirates competition.
With 75% of workers (Nielsen) using their mobile to send important and work-related documents, it all adds up to a severe risk to corporations.
Here are the top three threats to enterprise from mobile phishing scams:
Reputational damage
With scams like the Facebook Messenger one (see above), the extent of the threat is not confined to a single device. The program first captures a victim’s online banking login details and other personally identifiable information (PII). It then spreads by sending a rogue link to the victim’s contacts.
It is highly likely that some of these will turn out to be professional contacts such as clients or business partners.
Needless to say, spamming business contacts is regarded as highly unprofessional. Consequences can range from being blacklisted to long-term damage to an enterprise’s reputation.
Malware threat
Many of the scams use links that infect mobile devices with malware.
Once on the device, a malicious program can pave the way for a host of Trojans and other malware. In a work situation these could potentially be transferred from a mobile device on to the corporate network.
One of the programs believed to have been spread in recent scams is the Locky ransomware virus.
If opened on a networked PC, Locky will encrypt and rename all the important files on the system and demand a ransom for the decryption keys.
Once this happens it can cause untold harm to the organization since there can be no guarantee that files will be decrypted even after a payment has been made.
CEO fraud
Between October 2013 and February of this year, the FBI received reports from 17,642 victims of what it calls “business email compromise” scams, where employees are tricked into transferring large sums of money to people posing as the CEO of the company.
Already it has cost at least one CEO their job.
As email scams spread to mobile messaging apps, CEOs need to prepare themselves for the first collaborative group chat data breach.
Unless they take positive steps to secure and regain control of group messaging, employees may inadvertently expose companies to confidentiality or compliance risks.
In this event the buck stops at the CEO. It will be their job on the line if mobile phishing causes a major data breach on their watch.
Secure group messaging and collaboration
As business becomes ever more mobile-oriented, group messaging and collaboration platforms like NURO are one sure way to prevent mobile phishing campaigns from gaining a foothold.
NURO is purpose-built for the enterprise, including built-in protection against rogue links by restricting use solely to authenticated users. Just as important is centralised IT administration for complete control.
This ensures that when you see a message from the CEO you can be sure it’s real and not from an imposter.
In summary, the rise of mobile phishing scams should serve as a warning to enterprise not to allow group chat in the workplace to carry on unchecked.
It is a ticking time bomb and employers need to act quickly to take back control.
Mike Foreman

Tags: Mobility, Security, Technology
The more digitally connected we become, the more data we have to find ways to manage.
As Mary Meeker, one of the world’s most influential technology watchers, points out, we generate 2.5 quintillion bytes of new data every day. That means data is increasing at a 50% compound annual rate.
Where is all that data going? Into the cloud. After all, while data volumes are increasing, cloud storage costs are coming down by 20% per year. However, while cloud might appear an inexpensive and simple solution to the question of what to do with all this data, in reality, if we let our grip on our digital assets slip, we lose visibility, accountability and control.
That’s especially true when it comes to deletion policies. The classical enterprise discipline of ILM, Information Lifecycle Management, may not be in fashion currently, but its principles of classifying, managing and storing information over time remain completely relevant. Analyst firm Gartner defines it as follows: “An approach to data and storage management that recognises that the value of information changes over time and that it must be managed accordingly… ILM seeks to classify data according to its business value and establish policies to migrate and store data on the appropriate storage tier and, ultimately, remove it altogether.”
Criticality – a forgotten, but core, data management concept
I think that’s a timely reminder that the value of data is a function of time. Not all data needs to be retained permanently – and in fact for good governance, it needs to be deleted at mandated times. Hence the term ‘criticality’ – the value of your data to you and your business.
Cloud and software-defined storage may have taken the spotlight off the need for ILM, but such criticality should still be on your radar: what data needs to be kept, what needs to be deleted, and how can you prove that to the regulator? Obviously, different sorts of data have different levels of criticality; some data will be far more sensitive than others. But if you don’t know where your data is, how can you manage it and track what has actually been deleted?
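To put the ILM principle into something tangible, here is a small Python sketch of time-based criticality: classify records by data class, then let a policy decide whether each record is still within its mandated retention period or is due for deletion. The classes, retention periods and storage tiers below are illustrative only; a real deployment would take them from your own governance policies.

```python
from datetime import date, timedelta

# Illustrative policies only: retention period and storage tier per data class.
POLICIES = {
    "financial":   {"retain_days": 7 * 365, "tier": "archive"},
    "customer":    {"retain_days": 3 * 365, "tier": "standard"},
    "system_logs": {"retain_days": 90,      "tier": "cold"},
}

def lifecycle_action(data_class, created, today=None):
    """Return the ILM action for a record: keep it on its tier, or delete it."""
    today = today or date.today()
    policy = POLICIES[data_class]
    expiry = created + timedelta(days=policy["retain_days"])
    return "delete" if today >= expiry else f"retain on {policy['tier']} tier"

print(lifecycle_action("system_logs", date(2016, 1, 1), date(2016, 6, 1)))  # delete
print(lifecycle_action("financial",   date(2016, 1, 1), date(2016, 6, 1)))  # retain on archive tier
```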
The problem with public cloud is that it obscures the issue, because by its very nature cloud data resides in different geographies on a host of different media. Of course the cloud is convenient, but the concern is: how do you know what data has been deleted permanently?
Public cloud multi-tenancy is a particular issue here in controlling data. Any responsible company will want full visibility and security over the management of data residing in the cloud, especially around archiving and retention of information. This means ensuring that your cloud provider has the capabilities to deliver it.
Meanwhile, new legislation – the European General Data Protection Regulation (GDPR) – is something else you need to address. Organisations will need to start determining the risks to be managed, including having a full view of what data they have, where it is stored and what needs to be protected and secured, before it comes into force in 2018.
GDPR has serious teeth: organisations that fall foul of it can be fined heavily. ILM should be a major lens through which to view this task, especially if you are a major cloud user.
The importance of the SLA
Any organization that has information at scale would be advised to consider a private cloud or a hybrid solution, and should be focused on smart Service Level Agreements (SLAs) to manage their relationship with business and service partners.
Last but not least, with cloud, ILM and compliance, the longevity of the supplier is an important consideration; what happens if they go out of business a few years down the line? If you are a financial institution with a big pensions database, this would create enormous problems. While no-one can predict the future, you need to try and mitigate your ILM and cloud risk as much as you can. A clear statement in your contract stating that your data remains your property and that you can have it returned in any format at any time is important.
To sum up, if data needs to be deleted in your business process, you must be aware of where and how it is stored. This will undoubtedly become a more complex issue the more digitally connected we get – and the more demanding the regulatory environment becomes, whatever new trading relationships and therefore new legal jurisdictions we end up having.
Howard Frear

Tags: Cloud Computing, Data Center, Technology
It’s easy to forget why we started to examine the deeper channels and architectural workings of the software-defined datacentre in the first place. As much as we focus on explaining how and why data storage, compute processing and network intelligence resources all now exist in a more controllable, definable space — just why did we decide to undertake this shift to a more virtualised and abstracted world?
It’s all about the application
Ultimately, of course, it’s all about the application. That is to say, it’s all about serving the applications around which we hinge our business models so that they can operate at an unrestrained capacity and, at lower levels, at their most efficient capacity.
Fundamentally, it comes down to creating an environment where the application can control its resources. In practical terms we should point out that the network engineers oversee the actual throughput of resources, but the application has a potentially new prominence and efficiency in the software-defined world.
A new dynamic agility
The software-defined datacentre provides us with a computing infrastructure layer that facilitates a new dynamic agility. The dynamism here comes about as a result of our ability to programmatically define and form both the layers of the network and its mechanical component parts including switches and areas connected to automation intelligence and management.
An inherently connected elastic dynamism also exists in terms of the software-defined datacentre’s ability to manage connectivity, aspects of security and overall performance management. These are features that we would previously have expected to find existing as embedded hardware components — today they can be provided through software so they can be delivered in a more defined and controlled manner.
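To make the idea of defining these features through software a little more concrete, here is a purely illustrative Python sketch (the switch names, attributes and controller are hypothetical) in which the desired state of the network is declared as data and a reconciliation step works out what a controller would need to change in the running fabric.

```python
# Purely illustrative: declare the desired network state as data, then compute
# the changes a (hypothetical) controller would push to the running fabric.

desired_state = {
    "virtual_switches": [
        {"name": "vs-app",  "vlan": 100, "qos": "gold"},
        {"name": "vs-data", "vlan": 200, "qos": "silver"},
    ],
}

def reconcile(current, desired):
    """Return the switch definitions that need (re)applying to reach the desired state."""
    current_by_name = {s["name"]: s for s in current.get("virtual_switches", [])}
    return [
        switch for switch in desired["virtual_switches"]
        if current_by_name.get(switch["name"]) != switch
    ]

running_state = {"virtual_switches": [{"name": "vs-app", "vlan": 100, "qos": "bronze"}]}

for switch in reconcile(running_state, desired_state):
    # a real software-defined controller would push these via its management API
    print("apply:", switch)
```

The design point is that the network’s shape lives in versionable, programmable data rather than in the configuration of individual boxes.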
But step back again and ask why we need to be more defined and controlled. Exactly why is this? Because a more modular, more specialised, more efficient and more programmatically configurable network allows us to create a platform where applications are the first-class citizens of the total IT stack. Once again, it’s all about the applications.
Simple economics
It’s a simple question of economics: better-performing applications equal better customer service, which equals better profits.
As we bring the benefits of the software-defined datacentre online, the DevOps network engineer and system administrator (sysadmin) functions will be of key importance as they control and shape the computing fabric our business models depend upon. In practice this will mean engineering a variety of elastically provisioned and intelligently partitioned resources into a dynamically manageable pool of power, with a defined degree of remote programmability for onward management.
There’s a lot of hardware in software-defined
Of course the software-defined datacentre still runs on hardware. Indeed, the hardware vendors who have identified this trend are now building hardware appliances that ship with the required pre-installed, pre-certified, pre-configured and pre-tested software on board to fulfil the needs of the software-defined cloud computing landscape currently forming.
What we can surmise from all of this discussion is that we are building a better computing world for our applications if we engineer these technologies correctly. There is still much complexity here, but the signs point to application dividends in the long run.
Adrian Bridgwater

Tags: Data Center, Technology
Internet of Things (IoT) devices are excellent for adding functionality to our lives. Sadly, however, a kettle that you can boil on the way home from work can also add to a hacker’s repertoire.
Late in August 2016, the source code for the Mirai worm was dumped on the internet. That malware allows hackers to create a botnet from insecure IoT devices located in homes all over the world. The result is that hackers have a vast array of devices under their control, with which to launch Distributed Denial of Service (DDoS) attacks. DDoS attacks have increased in power, frequency, and effectiveness since the Mirai malware was leaked to the web.
IoT products
IoT products often ship with weak default passwords such as Admin or 12345. These passwords are supposed to be updated when consumers buy the item. Sadly, this isn’t always the case. With smart home devices left with insecure passwords and default port settings – despite the best efforts of firms to educate people – IoT products are vulnerable to the Mirai malware.
In November, for example, news emerged that Loxone smart homes had been falling victim to hackers looking to extend the Mirai botnet. When I spoke to Loxone’s Managing Director, Philipp Schuster, he told me that the documentation that Loxone smart home devices ship with clearly informs people about the need to secure those products.
Specifically, Loxone user guides specify that consumers must update passwords and are recommended to use “a non-standard port, for instance, 7777 or even better one greater than 50000.” Sadly, the message just isn’t getting through to people, despite the firm going to great lengths to probe people’s devices and send out reminders.
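As an illustration of the kind of check that could catch these issues early, here is a small Python sketch that audits a hypothetical inventory of home devices for the two weaknesses described above – factory-default passwords and well-known default ports. The device list, default values and port thresholds are made up for the example.

```python
# Hypothetical audit of a home/office device inventory for Mirai-style weaknesses:
# factory-default passwords and services left on well-known default ports.

FACTORY_DEFAULTS = {"admin", "Admin", "12345", "password"}   # example defaults only
WELL_KNOWN_PORTS = {23, 80, 8080}                            # ports commonly left as shipped

devices = [
    {"name": "smart-kettle", "password": "12345",      "port": 80},
    {"name": "door-camera",  "password": "V&ry-l0ng!", "port": 51820},
]

for device in devices:
    issues = []
    if device["password"] in FACTORY_DEFAULTS:
        issues.append("factory-default password")
    if device["port"] in WELL_KNOWN_PORTS:
        issues.append(f"default port {device['port']} (prefer e.g. 7777 or one above 50000)")
    print(f"{device['name']}: {'; '.join(issues) if issues else 'OK'}")
```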
The Loxone case is just one example of people not securing their IoT devices. Incredibly, stats show that one in every six IoT devices has a security issue. According to Gartner, that means that in 2016 over one billion connected devices were insecure and could be harnessed by the Mirai worm for launching DDoS attacks. This is where it gets scary.
Surge in Attacks
Following the Mirai malware’s appearance online, security experts noticed a sudden surge of DDoS attacks around the world. On 26 September it was reported that the French web host OVH was attacked at rates of between 1Tbps and 1.5Tbps. Those are massive attack rates that had never been experienced before.
Then, in October, the Domain Name System (DNS) provider Dyn was also DDoS attacked with the Mirai botnet. On that occasion, the websites of Twitter, Pinterest, GitHub, PayPal, Spotify, Amazon, Reddit, and Netflix were all brought down for a number of hours due to the attack on vital web infrastructure provider Dyn.
That is a concerning blow, and it has left security experts worried that 2017 could see much more widespread problems. If, for instance, Mirai was used to launch coordinated attacks on various DNS providers – and perhaps the Internet Corporation for Assigned Names and Numbers (ICANN) too – it is possible that we could see large-scale internet blackouts.
In the case of especially well-coordinated attacks, it is possible we could see internet blackouts of up to 24 hours at a time. This could hit firms hard and wreak havoc on the world’s financial markets. Consider, for example, the possibility of self-learning, AI-controlled attack interfaces working alongside the Mirai botnet to launch carefully timed and targeted attacks. Imagine it happening on Black Friday and you get the picture.
This might sound more like something out of the movie Terminator than reality. Sadly, that isn’t the case. Companies like BT Americas are working hard to create hacking tools based on human neural networks. Furthermore, the Defense Advanced Research Projects Agency (DARPA)’s Cyber Grand Challenge last year proved that automated and self-learning systems are effective at doing things quickly that it would take a human a long time to do.
Those particular efforts are known to us. The efforts of clandestine hacking groups and state-sponsored actors are not. As such, it is hard to tell what hackers have ‘out in the wild.’ It seems reasonable to assume, though, that the attack on Dyn was only the beginning and that we can expect to see much worse internet blackouts in the coming months and years – at ever increasing and alarming rates.
Conclusion
With more and more IoT products being purchased every day, the number of misconfigured devices available to hackers is only going to increase. Sadly, because the threat is removed from consumers directly – who don’t think DDoS attacks on the likes of Dyn affect them – it is hard to convince them to take responsibility.
Perhaps when we actually see a 24-hour blackout, and people lose access to their precious Facebook or Instagram, we will finally see them wake up to their position of responsibility within the world’s growing internet security nightmare.
Ray Walsh

Tags: IT Transformation, Security, Technology
The new Dell Wyse 3040 thin client is here, and with it an assurance that the VDI (Virtual Desktop Infrastructure) world has matured significantly. So, it’s time to sit up and take notice.
A triple entente of factors is behind this: virtualized graphics now satisfy the most demanding of graphics users, including designers and engineers; hyper-converged technology has dramatically reduced the cost and complexity of deployment; and the virtual desktop has been redefined thanks to the latest thin client endpoints. And as these thin clients have become more capable – even as they get smaller – virtualizing the desktop has become a more compelling option when replacing old PCs.
However, nothing stands still in technology and while the VDI world has been maturing, our demands of desktop computing have increased – and will continue to in the future. The Dell Wyse 3040 recognises this and builds on Dell’s thin client leadership to present an entry-level (subtext: highly affordable) thin client with capabilities designed for today and tomorrow’s typical desktop computer user.
The reasons for the fanfare surrounding the Dell Wyse 3040 are many. It’s the lightest, smallest and most powerful entry-level thin client ever. Including an Intel x86-based quad-core processor in a thin client at this level is an industry first, and its small physique belies its capabilities for today’s worker, who multitasks across applications while communicating with colleagues over Skype for Business.
Very low power consumption, silent operation, and a size less than half that of a paperback book make it a very friendly partner for your desk; in fact, it will almost disappear if you use its ability to drive two large, high resolution (2560 x 1600) displays. With these capabilities, it’s not difficult to understand why some commentators claim the 3040 blurs the line between entry PCs and thin clients.
The Wyse 3040’s heritage is a long line of the brand’s best-selling thin clients which have delivered applications for customers around the world for more than 20 years. It’s not unusual to be standing at a check-in desk for a hotel or car hire company and realize that mounted on the back of the agents’ display is a Wyse thin client. And these are just the ones you see. Offices, hospitals, schools, manufacturing plants, trading floors and many, many more locations are home to Wyse thin clients. With its small size, you will have to look a little harder to see the Wyse 3040, but you can be sure that, as virtualization of our applications and desktops continues to accelerate, it will be there.
David Angwin

Tags: Technology, Virtualization