Wednesday 12 November 2014

Tech News On Electronic Products | Components & Design

Electronica 2014 – Latest News Roundup

Toshiba launches ADAS image recognition processors


At Electronica, Toshiba announced the expansion of its line-up of image recognition processors with the launch of the TMPV760 series.

Supported by 14 hardware-based image recognition accelerators, the first device in this new series will support the implementation of next generation Advanced Driver Assistance Systems (ADASs).

The TMPV7608XBG supports standard ADAS features such as AEB (Autonomous Emergency Braking), TSR (Traffic Sign Recognition), LDW (Lane Departure Warning) / LKA (Lane Keeping Assist), HBA (High Beam Assistance) and FCW (Forward Collision Warning). It also supports a number of new applications that will become part of the Euro NCAP testing program in 2018, including TLR (Traffic Light Recognition) and pedestrian detection at night.


ADAS applications are processed concurrently within a typical time window of 50ms inside the image recognition processor and with relatively low power consumption due to the purpose-built hardware accelerators and media processing units.


The TMPV7608XBG integrates two new Enhanced CoHOG accelerators[1] that provide far higher image recognition accuracy, especially in low-light and night-time conditions. The device greatly improves night-time pedestrian detection rates using colour-based gradient analysis of images supplied by connected Full HD (2 megapixel) cameras.

In addition, the TMPV7608XBG supports an SfM (Structure from Motion) accelerator[2] that detects objects that are not part of a pre-defined library.

The processor can handle multiple applications simultaneously in real time using its Heterogeneous Multi-Core architecture. The device features newly-integrated image processing accelerators and eight MPEs (Media Processing Engines) supported by FPUs that perform double precision floating point arithmetic calculations.

The chip is housed in a P-FBGA 796 ball package measuring 27mm x 27mm. Ball pitch is 0.8mm.

To ensure optimal power management, Toshiba is also launching the TC9580FTG power management IC, designed specifically for the TMPV760 series. The IC provides all the voltages necessary for the system components and contributes to achieving the functional safety level required for ADAS.

Source: http://www.electronicsweekly.com/news/business/toshiba-launches-adas-image-recognition-processors-2014-11/

Tuesday 11 November 2014

UK Skills Programme Tackles Skills Deficit | Training With Sofcon

The universities of Leeds and Sheffield are the latest to join the UK’s electronics engineering skills initiative run by the UK Electronics Skills Foundation (UKESF).

The UKESF programme is an industry initiative to address the overall decline in the number of British students applying and registering for Electrical and Electronic Engineering degrees.
By 2020 it is anticipated that the UK electronics systems industry will generate an additional 150,000 highly skilled jobs to support the industry.

“We need more young people to aspire to careers in this sector, but of great concern at the moment is the decline in UK university applicants for electronics,” said Indro Mukerjee, chairman of the UKESF strategic advisory board.

Whilst there has been a rise in demand for engineering and technology courses since 2002, there has been a 26% drop in UK-based applicants to electrical engineering courses between 2002 and 2013.
“The UK electronics systems industry is estimated to contribute £78bn to the economy with the potential to grow and generate 150,000 new and highly-skilled jobs by 2020,” said Mukerjee.
To address this, the programme runs a scholarship scheme designed to help students find work placements with up to 25 member companies, including ARM, AWE, CSR, Dialog Semiconductor, Imagination Technologies, Thales and XMOS.

“The work the UKESF does to encourage more young people into electronic engineering degrees through its schools programme is just one of the reasons we are very pleased to have joined them,” commented Professor Ian Robertson, Head of the School of Electronic and Electrical Engineering at the University of Leeds.

Since 2010, employers have awarded 174 UKESF scholarships to students at its university partners.
Students can now apply for summer vacation and one-year industrial training placements through UKESF.

“We are able to offer more industry-sponsored scholarships, showing prospective students there is a demand for electronics graduates and that, in the current climate of high tuition fees, an engineering education will prove a rewarding investment,” said Robertson.

Professor Geraint Jewell, head of the University of Sheffield’s Electronic and Electrical Engineering department, believes that being able to draw on the knowledge and expertise of the industry will allow the university to better prepare students for the commercial world.

There are now 13 universities supporting the skills initiative: Leeds and Sheffield join Bristol, Cardiff, Edinburgh, Glasgow, Imperial College, Manchester, Newcastle, Nottingham, Southampton, Surrey and York.

Source: http://www.electronicsweekly.com/news/general/uk-skills-programme-tackles-skills-deficit-2014-09/


Top tips for making your Embedded Linux device secure

The internet of things (IoT) offers endless possibilities for smart devices and their applications. So it’s no wonder that the IoT is just as tempting to hackers as it is to developers keen to showcase their latest developments.

A lack of security issues doesn’t mean you’re OK – you’re probably just not being targeted yet.

This paper is designed to help anyone who is developing an internet-enabled Linux device for personal or business use. It highlights the main areas to consider and provides a practical checklist for developing applications for Embedded Linux.

Linux-based systems are increasingly used in networked devices: Linux offers a solid POSIX base for APIs and other conventions, supports a permissions model conducive to a secure system, and has industry-wide support.

The ability to create and remotely manage smart devices for utility services, traffic control, or reading meters can have very positive application benefits for business and personal use – however, there are some drawbacks.

High cost of development

From a business perspective, smart devices come at a cost and are much more expensive than their ‘dumb’ counterparts. For example, the price of a Wi-Fi LED light bulb is almost 50 times the price of a standard LED equivalent (and 500 times the price of a non-LED bulb).
To make these smart products attractive despite the wide cost differential, they need to provide either substantial unique consumer benefits (such as unrivalled convenience or even the kudos of being an early adopter) or significant operational cost savings (such as removing the need to take personal meter readings).

Security compromises

The next logical step towards success in the mass market will be to narrow the price gap. Lower prices will lead to increasing competition, with product designers and manufacturers looking for ways to lower the cost of development or improve economies of scale. In some cases, the desire to get to market quickly or cut costs may result in product de-scoping – either of which may adversely affect the attention paid to device security.

Hackers have already proven that Wi-Fi light bulbs, baby monitors and even pacemakers can be vulnerable to attack. Whilst the roll out of smart meters will enable energy companies to make significant operational cost savings, it is not unthinkable that hackers could find a way to switch all of the meters off – leaving thousands of homeowners and businesses without energy in an all too literal Denial of Service.

Not only would the damage to reputation be enormous, but the costs of addressing the issue would be even more significant than the savings that had been generated. The security breach would need to be identified, solutions determined (an immediate fix and, if required, a more permanent one) and customers reconnected – as safety standards require each meter to be switched on manually (necessitating an engineer’s visit)!

At best this may cause a few customers minor issues; at worst, it could cut energy to millions of customers and jeopardise the business.

But it’s not just organisations designing and developing devices with embedded systems; many thousands of enthusiasts and students are looking to put their own Linux-based devices online – and they can be just as vulnerable!

Every program is a potential target. Vulnerabilities can be found and used to:
- Crash your software
- Learn your secrets
- Gain control – whether that’s to show off or to use your product maliciously
Therefore, it makes sense to build in security from the start.

The Linux security onion
There are various layers that need to be considered in the ‘Linux security onion’.
- The network layer – the connected environment such as the internet or IoT
- The environment layer – the Linux operating system
- The application layer – the device’s physical system, code and application scripted onto the device by the developer

Securing a device means understanding how and why problems occur and how to address each of these specific layers. For example, C and C++ are not secure languages – they can be subject to format string attacks, buffer overflows or stack and heap overflows – but they are the de facto choice for development on Linux.
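
To see why format strings matter, here is a minimal sketch (the helper names are invented for illustration) of untrusted input passed directly as a printf format, alongside the fix:

    #include <stdio.h>

    /* Hypothetical logging helpers; 'msg' arrives from an untrusted source. */
    static void log_message_unsafe(const char *msg) {
        printf(msg);        /* format string attack: "%x" or "%n" is interpreted */
    }

    static void log_message_safe(const char *msg) {
        printf("%s", msg);  /* user data is treated as data, never as a format */
    }

    int main(void) {
        const char *attacker = "%x %x %x";  /* crafted input */
        log_message_unsafe(attacker);       /* may leak stack contents */
        putchar('\n');
        log_message_safe(attacker);         /* prints the literal string */
        putchar('\n');
        return 0;
    }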

Even using a high level language such as Python does not mean that developers can be complacent and assume they are safe from malicious actors. Developers need to take more effective action to secure their devices online.

The Zen of Hacking

There are a few simple ways that a device can be hacked.

Firstly, it is possible to trick the device into consuming more input than it has allocated memory for, causing a buffer overflow. Once a buffer overflows, either:

- the stack is ‘smashed’. This allows an attacker to overwrite another stack variable, which can be used to take control of the device – often by aiming the CPU at memory you don’t control (a minimal sketch of this case follows below); or
- the heap is corrupted by fooling the system about how it tracks memory. Once corrupted, it is possible to trick the program into writing to arbitrary places in memory.
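
A minimal sketch of the stack case, assuming a hypothetical parser with a fixed 16-byte buffer, shows how little it takes – one unchecked strcpy:

    #include <stdio.h>
    #include <string.h>

    #define BUF_LEN 16

    /* Vulnerable: copies attacker-controlled input into a fixed stack buffer. */
    void parse_unsafe(const char *input) {
        char buf[BUF_LEN];
        strcpy(buf, input);   /* no length check: >15 chars smashes the stack */
        printf("parsed: %s\n", buf);
    }

    /* Safer: refuse input that exceeds the buffer instead of overflowing it. */
    int parse_safe(const char *input) {
        char buf[BUF_LEN];
        if (strlen(input) >= sizeof buf)
            return -1;        /* reject over-long input */
        strcpy(buf, input);   /* now provably within bounds */
        printf("parsed: %s\n", buf);
        return 0;
    }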
Security checklist

The following tips provide a useful checklist for developers wanting to secure the application layer in the Linux security onion.

Authentication and some best practice suggestions

1. Use an authentication mechanism that cannot be bypassed or tampered with. When implementing authentication, ensure that it cannot be bypassed trivially – hardcoded passwords cause issues all the time, as do “secret” admin pages for web-enabled devices where you only need to know the URI.

2. Make sure you authorise after you authenticate. Understand the difference between authorisation and authentication and what that means on your platform: authorisation is what the user can do (discretionary/mandatory access controls, read/write access to files etc.); authentication is who the user is (username + SSH keys/password).

3. Strictly separate data and control instructions, and never process control instructions received from untrusted sources. If you require privileged status to perform functionality, separate the reading/writing of the raw data from the parsing/logic. This prevents bugs and exploits in the data processing side (think XML, JPEG etc.) from interfering with the control logic. This is paramount when handling data from unverified sources.

4. Define an approach that ensures all data are explicitly validated, and identify sensitive data and how they should be handled. Following on from the previous point, always validate and verify the files entering your system. If you’re processing an XML file which consists of a million nested elements, what will happen to your parser? Consider a verification/fuzzing strategy and assume hostile intent!

5. Understand how integrating external components changes your attack surface. The more components added to your system, the larger your attack surface. Think about what happens if you add USB support: do bugs in the USB stack open you up to unexpected strategies? What about userspace applications? Consider these effects when competing in the features race.

6. Be flexible when considering future changes to objects and actors. Take the view that some of the software on your platform will have flaws and may not always be in the controlled conditions it was originally designed for – always consider an upgrade/patch strategy for your embedded devices.

7. Use “safe” string functions. For example, avoid ‘strtok’ and use ‘strtok_r’ (or C11’s optional ‘strtok_s’) instead, in order to prevent buffers being modified or behaving ‘out of character’ (see the sketch after tip 11)

8. Always know the size of the string and allocate a buffer large enough to hold the output, including the terminating NUL

9. Be wary of NULL and control characters in data you’re handling

10. Know the memory model – who allocates, who frees – the caller or the callee?

11. Always allocate enough memory for the expected input and watch out for magic numbers or out of range values!
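
As a short illustration of tips 7, 8 and 11, the following C sketch assumes a POSIX toolchain (for strtok_r) and uses invented field names:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Tip 7: strtok() keeps hidden static state between calls;
           strtok_r() makes that state explicit and re-entrant. */
        char line[] = "flow=42;ph=7.1;temp=19";
        char *saveptr;
        for (char *tok = strtok_r(line, ";", &saveptr);
             tok != NULL;
             tok = strtok_r(NULL, ";", &saveptr))
            printf("field: %s\n", tok);

        /* Tips 8 and 11: size the output buffer for the worst case plus the
           terminating NUL, use a length-bounded formatter, and treat an
           out-of-range result as an error rather than ignoring it. */
        char out[32];
        int n = snprintf(out, sizeof out, "sensor:%s", "ph");
        if (n < 0 || (size_t)n >= sizeof out) {
            fprintf(stderr, "output truncated\n");
            return 1;
        }
        printf("%s\n", out);
        return 0;
    }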

Architecture and data tips
12. Knowing how your architecture works is fundamental to understanding how it can be used against you – sometimes it can be fun to have a “breakdown session” to see how secure your product is.
13. Shellcode isn’t that hard to write… when you know how. Take some time to learn how to at least read it and how it works
14. GDB and objdump are free and highly powerful tools – learn how to use them to understand not what your code should do but what it can do.
15. New exploit techniques are always being developed – stay on top of them by tracking the CVE lists and ensure you have an update strategy.
16. Always check what data you’re being given – e.g. GIF, JPEG, MP3, WAV etc. Do you trust the values given to you? What does your code do when it opens a JPEG that’s -100000 by -1000000? (See the sketch below.)
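
Tip 16 might look something like the following sketch, which assumes a hypothetical decoder that has just read signed width and height fields from an untrusted header:

    #include <stdint.h>
    #include <stdlib.h>

    #define MAX_DIM 8192  /* hypothetical sanity limit for this device */

    /* Validate untrusted image dimensions before using them in an
       allocation: negative or oversized values are rejected, and the
       width * height * 3 product is computed in 64 bits so it cannot
       wrap around before the cap is checked. */
    uint8_t *alloc_pixels(int32_t width, int32_t height) {
        if (width <= 0 || height <= 0 || width > MAX_DIM || height > MAX_DIM)
            return NULL;  /* e.g. a -100000 x -1000000 JPEG stops here */
        uint64_t bytes = (uint64_t)width * (uint64_t)height * 3;  /* RGB */
        if (bytes > 64u * 1024 * 1024)
            return NULL;  /* refuse anything over a 64 MiB budget */
        return malloc((size_t)bytes);
    }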

Language, file paths and other coding tips
17. C and C++ are not secure languages so remember to do formal verification when using them – bonus points if it’s part of your continuous integration strategy
18. Even understanding how a binary gets into memory in the first place will give you an advantage over other programmers
19. Try not to hard code values – what if you update in one location and not the other?
20. Remember that all command line arguments are in control of the user launching it – are you using getopt or have you rolled your own? Is it secure?
21. Be careful about working with shared files – Who else can read/write to the file?
22. File paths can contain ‘..’, so be wary of directory traversal attacks
23. Think about file operations. For example, try to avoid API calls that take a path name and prefer those that take a file descriptor instead – this will help mitigate race conditions (see the sketch after this list). And watch out for hard/soft links
24. Don’t be afraid to use open-source libraries – most are under the LGPL, which allows dynamic linking without requiring you to open-source your code.
25. Learn what tools are available for your environment – if you aren’t willing to discover them, there’s a hacker or saboteur who will!
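
And for tip 23, a sketch assuming standard POSIX file APIs: open the file first and interrogate the descriptor, which closes the stat()-then-open() race window:

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Open first, then fstat() the descriptor. A stat() on the path followed
       by open() leaves a window in which an attacker can swap the file for a
       symlink; fstat() on the already-open descriptor cannot be raced. */
    int open_config(const char *path) {
        int fd = open(path, O_RDONLY | O_NOFOLLOW);  /* refuse symlinks */
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0 || !S_ISREG(st.st_mode)) {
            close(fd);   /* not a regular file: bail out */
            return -1;
        }
        return fd;  /* pass the descriptor, not the path, to later operations */
    }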

Source: http://www.electronicsweekly.com/news/design/embedded-systems/top-tips-making-embedded-linux-device-secure-2014-11/


Six Decisions You Must Get Right Before Upgrading Your Automation System

There are six decisions you must get right before upgrading your automation system.
It doesn’t matter whether you are just upgrading equipment – like PLCs, RTUs or HMIs – or upgrading larger automation systems – like compressor stations, pumping stations or master station SCADA systems. We have found that automation upgrades often fail because buyers neglect a few critical decisions early in the upgrade process.

This report identifies the critical decisions you must make early and correctly in order for your upgrade project to be cost effective, achieve your goals, and reduce the risk of incorrect startup and operation.

1. Decide on a clear purchasing and evaluation plan
Make sure you have a plan for identifying your needs, for documenting those needs, and for identifying the best vendors to approach for proposals and/or bids. Whether you hire a consultant or do this yourself, the purchasing and evaluation plan needs to include a time line for at least the following major tasks:
 
  • Requirements development
  • Procurement process
  • Bidder selection process
  • Bidder evaluation process
  • Vendor award process
  • Configuration
  • Factory acceptance test
  • Start-up
The plan should also indicate what percentage of each stage will be performed in-house and what percentage will be performed by the consultant, vendors, or others who will be involved.
This is your road map. As the philosopher once said, if you don’t know where you’re going, how will you know when you’ve arrived?

2. Decide early which stakeholders will be on the project team, and put only one person in charge

If your project’s scope warrants it, put in place a project team that participates in the upgrade process from the very beginning stages. This team should include all of the major stakeholders. Depending on the scope of the project, this might include field engineers, technicians, analysts, operations personnel, IT people, and management.
The field and operations people know what will and won’t work in the field. For a larger project, the IT people know the requirements of the back office or host system that might need to be interfaced with the field operations. And, management is best suited to keeping an eagle eye on the business goals and bottom line. Don’t wait to get these people on board. Get them involved from the beginning.

Once you have a project team, make sure that a single person is in charge. That person must have the authority to make decisions and to ensure that the system meets both the company’s technical and business goals – especially the company’s business goals. When one person has his or her reputation on the line, the project is more likely to succeed.

3. Decide why you want to upgrade (your goals)

Why do you want to upgrade? If you can’t identify compelling reasons, don’t do it.
The absolute worst answer is “because it’s time.” Maybe your system’s components have reached end-of-life and are hard to repair, but they still work. Maybe they no longer represent cutting-edge technology. Maybe you’re tired of your competition whipping out photos of their shiny new system while you are embarrassed to show worn-out photos of your ancient, toothless system.

These are not usually compelling reasons to upgrade.
In fact, the only good reason to upgrade is because it achieves one or more tangible business goals. The exact goals will vary from company to company. What’s important is that those goals be identified and quantified, and that they become the most significant criteria in selecting a replacement system.

4. Decide what it will take to achieve your upgrade goals

Will a newer version of your current system achieve your stated goals? Do you need to replace all your hardware to achieve those goals? What new technologies can be applied to achieve the company’s business goals?
Typically, the answers to these and other questions end up in a specification that is ultimately put out for bid. And, just as typically, these documents are either too vague or too detailed for their own good.

Overly vague bid documents lead to overly vague bids. Telling vendors that you want to improve usability, maximize flexibility, and provide a platform for future expansion could mean just about anything. Vendors want to know exactly what you are trying to achieve so that they can provide you with a bid that will best meet your goals.

However, overly detailed bid documents can lead to overly costly projects – not bids, projects. If you tell vendors exactly how to do their job, they’ll give you low-ball bids that put all the risk back on you. Unless you know exactly what you want and how to go about getting it, you can’t think of everything this early in the upgrade process. If you try and you’re wrong, you’ll likely end up with low bids and lots of change orders. And you know what that means – project cost overruns.

What’s an engineer to do?

If you know exactly what you want, then you can write what we call a procedural specification that states exactly how you expect to accomplish your goals. Such a document is often loaded with lots of technical specifications for networks, timeouts, software, RTUs, PLCs, protocols, radios, etc. This will lock you into very narrow options. But if you’ve done your homework and know that this is the best way to go (i.e. no fear of change orders), then go for it.

But you may find that it’s best to write what we call a performance specification. The performance specification or request for proposal states exactly what you want to accomplish and asks vendors to propose ways to achieve those goals. You’ll get lots of ideas, and you may discover an approach worth considering that you would never have thought of otherwise. We have also found that taking this approach usually results in minimal (or no) change orders.

5. Decide what you want from your vendor before you start looking

Evaluating vendors and their responses to your questions should be a combination of art and science – a combination of corporate chemistry and bottom-line common sense.
Of course, cost is an easy factor to quantify. But it’s not so easy to compare less quantifiable factors like customer references and project methodology. It’s especially difficult calculating the effectiveness of a long-term relationship with the vendor.

But if your project is to succeed, those factors must be at the top of your list for consideration.
For example, if your project is large, you should consider examining each potential vendor’s project execution methodology. A documented project execution methodology can help ensure that all members of the team understand their role, understand company procedures, and understand the best way to accomplish major milestones.

How can you test that methodology short of actually hiring the vendor? First, ask each vendor staff member that you meet where they fit into the project lifecycle. Do they seem to act more like independent contractors or more like a team with a common sense of mission?
Next, talk to the vendor’s customers. Do their customers see tangible evidence that the vendor has a project execution plan? 

Does the vendor actually follow the plan, or is it just to impress prospects? Does the vendor provide written reports on a regular schedule? Are key vendor contacts available when needed? Can you talk to them and get straight answers? How is the after-project support? Go on site visits and see the installed systems first hand.
Similar questions can be applied to other intangible factors.

6. Decide what criteria will be used to judge the vendor and system

Very often we see very dense and complex vendor selection criteria. It seems that scoring bids will take a lot more work and time than it took to create the bid. You have to wonder how consistent scoring can be with such complex criteria. Most of it still boils down to a judgment call, and those judgments can’t possibly be consistent with so many criteria to consider.
If your selection criteria seem complex, it’s probably not the fault of the selection criteria, but rather of the specification and/or bid documents.
Make sure that all the information you ask from a vendor will be useful to you. Ask yourself how you plan to evaluate it, its significance to your goals, and whether or not it contributes any tangible value to your evaluation. You should also determine whether or not the information can be evaluated, and if so, whether or not you can easily and fairly compare vendor answers.

Source: http://www.automation.com/library/articles-white-papers/general-automation-articles/six-decisions-you-must-get-right-before-upgrading-your-automation-system


Monday 10 November 2014

Databases – The Perfect Complement to PLCs

By Steve Hechtman, President, Inductive Automation



PLCs? Okay, you’ve tackled PLCs and now you can program ‘em with one hand behind your back. So what’s next? What’s the next logical challenge? Think SQL and relational databases. Why? You’d be amazed at the similarity. It’s the next logical progression.

You might ask how it is they’re even related. For one thing, relational databases can sort of be an extension of PLC memory. Live values can be mirrored there bi-directionally. Historical values and events can be recorded there as well. But operators and managers can interact with them too.

It’s been over twenty years of working, living, breathing and thinking PLCs, but over the last six years I’ve delved heavily into SQL and learned a lot about relational databases. I’ve discovered that working with SQL is remarkably similar to working with PLCs and ladder logic.

SQL has four basic commands and about a hundred different modifiers that can be applied to each. These can be applied in various ways to achieve all types of results. Here’s an example. Imagine effluent from a wastewater plant with its flow, pH and other things being monitored and logged. That’s what you typically see.

But now let’s associate other things with these, such as discrete lab results, the names of the persons who did the lab work, the lab equipment IDs and calibration expiration dates, who was on shift at the time and the shift just prior, what their certification levels were, what chemicals were added and when, who the chemical suppliers were, how long the chemicals sat before use, and so forth ad infinitum. All of this becomes relational data, meaning that if it’s arranged properly in tables you can run SQL queries to obtain all types of interesting results. You might get insight into the most likely conditions which could result in an improper discharge so it can be prevented in the future.
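
To make that concrete, here is a minimal sketch of such a query issued from C through the MySQL client library; the schema (effluent_readings, lab_results and their columns) is invented for illustration:

    #include <mysql/mysql.h>
    #include <stdio.h>

    int main(void) {
        MYSQL *conn = mysql_init(NULL);
        if (conn == NULL ||
            !mysql_real_connect(conn, "localhost", "user", "password",
                                "plant", 0, NULL, 0)) {
            fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
            return 1;
        }

        /* Join the logged effluent readings to the lab results taken for
           the same sample, keeping readings even when no lab work exists. */
        const char *query =
            "SELECT r.sample_time, r.flow, r.ph, l.result, l.technician "
            "FROM effluent_readings AS r "
            "LEFT JOIN lab_results AS l ON l.sample_id = r.sample_id "
            "WHERE r.ph < 6.0 "
            "ORDER BY r.sample_time";

        if (mysql_query(conn, query)) {
            fprintf(stderr, "query failed: %s\n", mysql_error(conn));
            mysql_close(conn);
            return 1;
        }

        MYSQL_RES *res = mysql_store_result(conn);
        if (res != NULL) {
            for (MYSQL_ROW row = mysql_fetch_row(res); row;
                 row = mysql_fetch_row(res))
                printf("%s flow=%s pH=%s lab=%s by %s\n",
                       row[0], row[1], row[2],
                       row[3] ? row[3] : "-",   /* NULL column: no lab yet */
                       row[4] ? row[4] : "-");
            mysql_free_result(res);
        }
        mysql_close(conn);
        return 0;
    }

The LEFT JOIN keeps every logged reading even when no lab record exists for it yet – exactly the kind of question that takes one statement in SQL.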

In my explorations of SQL, I found myself looking at the layout of my tables and evaluating the pros and cons of each layout. I massaged them, turned them on their side, upside-down, and finally ended up with the most appropriate arrangement for my application. And similar to PLC programming, I explored innumerable what-if scenarios. I was struck by the amazing similarity in my approach to developing solutions for PLCs. This has been a lot of fun – in fact exhilarating – just like PLCs used to be. It’s the next logical progression you know.

SQL is a high level language that isn’t very hard to learn and you can be very clever with it. I prefer to think of it as a natural extension to my PLC programming skills. Now that you have the machinery running, what did it do? Furthermore, relational databases and SQL pull people and processes together. Machines don’t run alone.

They’re merely part of a containing process and that process was devised by people. SQL and relational databases form the bridge to integrate processes, machinery and people together. I don’t believe a COTS (commercial-off-the-shelf) package can do it any more than you could offer a COTS palletizer program and have it be of any use. It just doesn’t work that way. Every machine is different. And every business process is different.

That’s where the SQL comes in. It has to duplicate or augment existing process flows and these are intimately connected to the machinery. And that’s why the PLC programmer is best suited to implement solutions involving PLCs and relational databases.

So where do you start? I would suggest picking up a book at the bookstore like one of those dummies books. Then download and install the open-source MySQL database server along with the MySQL Administrator and Query Browser.

It only takes a few minutes to install and then start playing. You can read about a LEFT JOIN or INNER JOIN, but typing one in and observing the results is worth about 1000 words. At the end of an evening you’ll probably be very excited with all of your newfound knowledge and be thinking of endless ways to employ it in your own field of practice. Happy SQLing! 

Coder’s Corner: PLCopen Standards Architecture & Data Typing | Sofcon Training India Pvt Ltd

By Dr. Ken Ryan, Alexandria Technical College

Dr. Ken Ryan is a PLCopen board member and an instructor in the Center for Automation and Motion Control at Alexandria Technical College. He is the founder and director of the Manufacturing Automation Research Laboratory and directs the Automation Systems Integration program at the center.
This is the first in a series of articles focused on writing code using the IEC 61131-3 programming standard. The first few articles will focus on orientation to the architecture of the standard and the data typing conventions. After covering these, this series will explore code writing for a diverse field of application situations.
THE IEC 61131-3 SOFTWARE MODEL
[Figure 1: The IEC 61131-3 software model]
The IEC 61131-3 standard takes a hierarchical approach to programming structure. The software model in Figure 1 depicts the block diagram of this structure. Let’s decompose this structure from the top down.
Configuration:
At the top level of the software structure for any control application is the configuration. This is the “configuration” or the control architecture of the software defining the function of a particular PLC in a specific application. This PLC may have many processors and may be one of several used in an overall application such as a processing plant. We generally discuss one configuration as encompassing only one PLC but with PC-based control this may be extended to include one PC that may have the capability of several PLCs. A configuration may need to communicate with other configurations in the overall process using defined interfaces which provide access paths for communication functions. These must be formally specified using standard language elements.
Resource:
Beneath each configuration reside one or more resources. The resource supplies the support for program execution. This is defined by the standard as:
‘A resource corresponds to a “signal processing function” and its “man-machine interface” and “sensor and actuator interface” functions (if any) as defined in IEC 61131-3’.
An IEC program cannot execute unless loaded on a resource. A resource may be a runtime application existing in a controller that may exist in a PLC or on a PC. In fact, in many integrated development environments today, the runtime system can be used to simulate control program execution for development and debug purposes. In most cases a single configuration will contain a single resource but the standard provides for multiple resources in a single configuration. Figure 1 shows 2 resources under one configuration.
Task:
Tasks are the execution control mechanism for the resource. There may be no specifically defined task, or multiple tasks defined, for any given resource. If no task is declared, the runtime software needs to have a specific program it recognizes for default execution. As you can see from Figure 1, tasks are able to call programs and function blocks. However, some implementations of the IEC 61131-3 standard limit tasks to calling programs only and not function blocks. Tasks have 3 attributes:
1.  Name
2.  Type – Continuous, Cyclic or Event-based
3.  Priority – 0 = Highest priority
The next article in this series will focus exclusively on tasks and their configuration and associations to programs and function blocks. For now we will continue our decomposition of the software model.
Program Organization Units:
The lower three levels of the software model are referred to collectively as Program Organization Units (POUs).
  • Programs
  • Function Blocks
  • Functions
Programs:
A program, when used as a noun, refers to a software object that can incorporate or ‘invoke’ a number of function blocks or functions to perform the signal processing necessary to accomplish partial or complete control of a machine or process by a programmable controller system. This is usually done through the linking of several function blocks and the exchange of data through software connections created using variables. Instances (copies) of a program can only be created at the resource level. Programs can read and write I/O data using global and directly represented variables. Programs can invoke and exchange data with other programs using resource-level global variables. Programs can exchange data with programs in other configurations using communication function blocks and via access paths.
Function Blocks:
The real workhorses of this hierarchical software structure are the function blocks. It is common to link function blocks both vertically (one function block extends another) and horizontally (one function block invokes another) in order to create a well-structured control architecture. Function blocks encapsulate both data (internal variables, plus the input and output variables that interface the function block to other software objects) and an encoded algorithm that determines the value of internal and output variables based on the current value of input and internal variables. The key differentiator between function blocks and functions is the retention of values in memory, which is unique to function blocks and is not an attribute of functions. Since a function block can have a defined state by virtue of its memory, its class description can be copied (instantiated) multiple times. One of the simplest examples of a function block is a timer. Once the class object “timer” is described, multiple copies of the class can be instantiated (timer1, timer2, timer3… etc.), each having a unique state based on the value of its variables.
Functions:
The ‘lowest’ level of program organization unit is the function. A function is a software object which, when invoked and supplied with a unique set of input variables, will return a single value with the same name and of the same data type as those of the function. The sine qua non of a function is that it returns the same value any time the same input values are supplied. The best example of a function is the ADD function. Any time I supply 2 and 2 to the ADD function inputs I will receive 4 as the return value. Since there is no other solution for 2+2, there is no need to store information about the previous invocation of the ADD function (no instantiation) and thus no need for internal memory.
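The standard’s own languages express this directly, but the distinction can be sketched in C terms: a function is a stateless routine, while a function block behaves like a struct instance that carries its own memory between calls. The timer below is a rough analogy, not the IEC TON itself:

    #include <stdbool.h>

    /* Function: stateless. The same inputs always yield the same return
       value (the ADD example), so no instance data is needed. */
    int add(int a, int b) { return a + b; }

    /* Function block: stateful. Each instance keeps its own memory between
       invocations, so it can be instantiated many times (timer1, timer2...). */
    typedef struct {
        bool in;          /* input: run the timer */
        long preset_ms;   /* input: delay before q becomes true */
        long elapsed_ms;  /* internal: retained between calls */
        bool q;           /* output: true once the delay has expired */
    } Timer;

    void timer_update(Timer *t, long dt_ms) {
        if (t->in) {
            if (t->elapsed_ms < t->preset_ms)
                t->elapsed_ms += dt_ms;
            t->q = (t->elapsed_ms >= t->preset_ms);
        } else {
            t->elapsed_ms = 0;   /* input dropped: reset internal state */
            t->q = false;
        }
    }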
Access paths:
The method provided for exchange of data between different configurations is access paths. Access paths supply a named variable through which a configuration can transfer data values to/from other remote configurations. The standard does not define the lower-layer protocol to be used for this transfer but rather defines the creation of a construct (‘container’) in which the data can travel.
Global Variable:
Finally we come to the variables which are declared to be “visible” to all members of a specific level on the hierarchy. If a global variable is declared at the program level then all programs, function blocks and functions that are members of this program have access to this data. We say that the data is within their scope. Likewise, a global variable declared at the resource level will be available to ALL programs located on this resource.
Conclusion:


The IEC 61131-3 software model is modular and hierarchical. We have outlined its major components in this first tutorial. Next time we will look at details of the task mechanism. Later tutorials will focus on specifics of the POUs with emphasis on the differences between programs, function blocks and functions. Access paths will be the focus of another tutorial along with the concepts of data typing.

Source: http://www.automation.com/library/articles-white-papers/programmable-control-plc-pac/coder146s-corner-plcopen-standards-architecture--data-typing

The PLC: New Technology | Greater Data Sharing | Training With Sofcon


What is a PLC?

Programmable Logic Controllers (PLCs) continue to evolve as new technologies are added to their capabilities. The PLC started out as a replacement for banks of relays. Gradually, various math and logic manipulation functions were added. Today they are the brains of the vast majority of automation, processes and special machines. PLCs now incorporate smaller cases, faster CPUs, networking and various internet technologies.

You can think of PLC technology as a small industrialized computer that has been highly specialized for reliability in the factory environment. At its core, a PLC looks at digital and analog sensors and switches (inputs), reads its control program, makes mathematical calculations and as a result controls various hardware (outputs) such as valves, lights, relays, servo motors, etc. in a time frame of milliseconds.
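
A rough sketch of that scan cycle in C – the I/O routines are stubs standing in for a real runtime, and the single ‘rung’ of logic is invented for illustration:

    #include <stdbool.h>
    #include <unistd.h>

    #define N_INPUTS  8
    #define N_OUTPUTS 8
    static bool inputs[N_INPUTS];
    static bool outputs[N_OUTPUTS];

    /* Stubs standing in for the I/O drivers a PLC runtime would supply. */
    static void read_physical_inputs(void)   { /* sample sensors and switches */ }
    static void write_physical_outputs(void) { /* drive valves, lights, relays */ }

    /* One scan: read inputs, solve the logic, write outputs, repeat.
       The control program here is a single rung: run the pump (output 0)
       when the start switch (input 0) is on and the tank-full sensor
       (input 1) is off. */
    int main(void) {
        for (;;) {
            read_physical_inputs();
            outputs[0] = inputs[0] && !inputs[1];
            write_physical_outputs();
            usleep(10 * 1000);   /* hold a fixed 10 ms scan period */
        }
    }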

While PLCs were very good at quickly controlling automation, they did not share data easily. At best, PLCs would exchange information with operator interfaces (HMI) and Supervisory Control and Data Acquisition (SCADA) software packages on the factory floor. Any data exchange with the Business Level of the company (information services, scheduling, accounting and analysis systems) had to be collected, converted and relayed through a SCADA package.

Typical of most PLCs, the communication networks were unique to the brand and limited in speed. With the acceptance of Ethernet, communication network speeds have increased but are still sometimes using proprietary protocols.

Trends: More Power, Wider Data Sharing

Overall, PLCs are getting faster, smaller, cheaper and more powerful. As a result, they are gaining capabilities that used to be the exclusive domain of the Personal Computer (PC) and workstation arena. This translates into critical data quickly and cheaply being shared directly between the PLCs on the Factory Floor and the Business Level of the company. These are not your father’s PLCs.

Some of the features that a PLC can bring to your automation projects are Web Servers, FTP Servers, sending E-mail and Internal Relational Databases. The following is a brief overview of these features and some of their uses.

Web Server

[Figure: Example web-based HMI screen]
PLCs can host a web site on the internet or your company intranet. So what’s that going to do for you? How about giving you access to real-time data and data logging for starters. Do you need a backup Human Machine Interface (HMI) for a machine (or machines) or work cell? How about as a tool for your Maintenance group? Did you know that with some PLCs you can store documentation with the web server that lets you view machine drawings, schematics, maintenance and operator manuals and short video clips? They are all just a mouse click away with your web browser.
Web servers in PLCs are probably the most varied and widely used of the newer technologies. PLC web server capabilities vary, depending on the manufacturer and model, from a single “canned” page to full-blown sites using XML- and Java-based technology.

Java web servers can provide a high degree of versatility for interacting with a PLC. Three of the Java classes of small programs that enhance web server technology are: Applets, Modlets and Servlets. In general terms, the “lets” let you view, manipulate and transfer data faster.
Applets are small Java application programs that are sent from the web server to your web browser the first time you open a web page; they speed up the transfer of data values.

Modlets are Java modules that run independently of the PLC control program, servicing non-process, event-driven tasks such as data handling or updating calculations. Modlets are very useful for parallel processing functions that interact with the PLC database.

Servlets run in the web server after they are requested by your web browser. They are very useful for displaying live data and dynamically creating data log files (CSV).

Send E-mail

A send e-mail function automates and simplifies exporting critical and production data out of a PLC. Production data and material usage reports, status changes, alarms, internal PLC data and maintenance requests can be issued from within a PLC control program. With a little time and imagination you can send your alarm messages to the maintenance personnel who carry alphanumeric pagers or cell phones.

FTP Server

File Transfer Protocol (FTP) is your tool for easily and quickly moving or copying files in and out of a computer through a TCP/IP Ethernet connection. Now it is available in some PLCs. While on the surface it does not sound like a big deal, this handy tool can be a major time saver. Why walk out to the PLC to copy files when you can access it through a network from your desk? How much time would you save by dialing into the Ethernet network the PLC is on (or a stand-alone modem/router) if the PLC is in another city, state or country?

Internal Relational Database

One of the most exciting and useful new features that just showed up in the market from SoftPLC Corporation is the “Internal Relational Database” embedded in a PLC. As an internal database it allows crucial data to be accessed in one program scan (milliseconds) rather than having to wait for it from an external source (another computer or PLC) sending the data through a communications port.

This feature opens the door for a whole host of cost savings. For example, in a sorting conveyor with a bar code reader, the bar code reader usually connects to a PC. The PC looks up the bar code in the database that it hosts and then sends the resulting information to the PLC. Only then can the PLC use the data to control a diverter, gate or bin. There are usually a minimum of six steps to get the information to the PLC.
  1. Scan the bar code.
  2. Send the data to the PC.
  3. PC decodes the bar code.
  4. PC looks up the resulting information from the database.
  5. Move the data to the PC communications port.
  6. Send the information to the PLC across a serial communication connection.
  7. Move the data from the PLC communications buffer to the PLC program memory and use it (in some cases).
Using a PLC with an internal relational database reduces this to four steps:
  1. Scan the bar code.
  2. Send the data to the PLC.
  3. PLC decodes the bar code.
  4. PLC looks up the resulting information from the internal relational database and uses it.
Using a PLC with an internal relational database eliminates the weak link of the communications from the PC to the PLC. A PC will usually perform the database lookup faster than a PLC. However, moving the data from the PC database across a serial network connection (usually limited to 19.2k baud) is much slower than a PLC retrieving usable data in one scan of the control program.

With this sorting conveyor example, the first cost saving is reduced computer hardware: eliminating the PC with the database and the database software. A second saving is realized by eliminating the integration time required to get the PC and the PLC communicating. Another saving comes from no longer needing the Information Technology department to continuously maintain, back up and upgrade the PC.

There are many different applications that a PLC with an internal relational database can control in a more cost-effective way for both the integrator and the end user. Manufacturing machines and processes that assemble product based on recipes, or build multiple products on the same machine, are all good candidates for this technology. Also, projects that require setting “environment variables” for machine configuration, custom-written instructions and drivers, or fast keyed lookups for control values would find the power of an internal relational database especially useful.

Summary

The technologies discussed are available in as many flavors and options as there are PLC manufacturers. This overview only scratches the surface of the ongoing improvements in PLCs. It can’t begin to cover all the features and uses available among these four technologies, let alone the important related subjects of networking, security and the continuing shift to Open Architecture.


The significance of these technologies is that control systems are rapidly being simplified to allow greater company-wide information exchange, so there no longer need to be “islands of automation”.

Source: http://www.automation.com/library/articles-white-papers/programmable-control-plc-pac/the-plc-new-technology-greater-data-sharing