Saturday, October 19, 2013

Blog Move...

We've started a new blog over here: Intelli-Community.com/blog

Monday, February 7, 2011

Want Info More Often?

We would like to direct you to our main blog, or you can follow us on our much more active Facebook page. See you there! Thanks!

Thursday, September 24, 2009

Intellect 3.0 Security

Intellect 3.0, and the server in particular, is a critical tool for creating value in many applications. Recently we put together a proposal for a system that will model, predict and optimize production for entire oil fields in real time. Many of Intellect's operations don't have an impact on Health, Safety and the Environment (HSE), but some do have a significant impact on businesses' bottom lines... high quality, high volume production (yes, high enough to impact the bottom lines of some countries). As such, the Intellect 3.0 Server, the "nerve center" of this capability, needs to be secured to prevent unauthorized persons from tampering with settings, and to provide an audit trail of who did what, and when.

Enter Intellect 3.0's multi-tiered security model. It is based in part on Windows security, either NTLM (NT LAN Manager) or Kerberos, and in part on Intellect's own application-level security model. Users need local or domain accounts established on the Intellect 3.0 system's computer(s), in defined roles or groups (in Windows terms), to be able to access the Intellect 3.0 server and its sub-components. These authenticated users are then allowed to participate in Intellect's application security system.

Intellect 3.0's application-level security model consists of three elements... Users, Roles and Rights. When a user authenticates with an Intellect 3.0 server, they are granted certain Rights based on the Roles they have been assigned by Administrator(s). These Rights enable or disable certain capabilities, such as the ability to create tasks, start/stop operations, make changes to settings, or merely to view. These Rights also auto-configure applications to provide or hide capabilities. Separating Rights from Roles enables administrators to create all sorts of Roles with varying Rights. Roles typically include administrators, solution designers, engineers, supervisors, operators and the like, but with a few mouse clicks could include office staff for reporting, plant management, etc.
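To make the Users/Roles/Rights separation concrete, here is a minimal sketch in Python. Intellect 3.0 itself is .NET-based, so this is purely illustrative: the class names and Right names ("StartStopTasks", "ViewSettings", etc.) are hypothetical, not Intellect's actual API.

```python
# Hypothetical sketch of a Users/Roles/Rights model. A Role bundles Rights;
# a User holds Roles; a Right check succeeds if any assigned Role grants it.

class Role:
    def __init__(self, name, rights):
        self.name = name
        self.rights = set(rights)

class User:
    def __init__(self, name, roles):
        self.name = name
        self.roles = roles

    def has_right(self, right):
        # Granted if any of the user's Roles includes this Right.
        return any(right in role.rights for role in self.roles)

operator = Role("Operator", {"ViewSettings", "StartStopTasks"})
viewer = Role("Viewer", {"ViewSettings"})

alice = User("alice", [operator])
bob = User("bob", [viewer])

print(alice.has_right("StartStopTasks"))  # True
print(bob.has_right("StartStopTasks"))    # False
```

Note how an administrator can invent a new Role (say, "Reporting") just by bundling a different set of Rights, without touching the Users or the Rights themselves.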

The reason we don't just use Windows security for all of this is that some customers would rather not have custom roles/groups set up in Windows itself just for a particular application, and we prefer to grant Rights at a finer level than Roles (Windows only allows Users and Roles, aka Groups), giving us more control and isolation/abstraction between the user and the system. Also, user switching (log out, then log in) can be done at the application level, avoiding having to log out of Windows and back in under a different account just to get different Rights.

Intellect 3.0 is a critical tool for creating significant value, sometimes in the hundreds of millions of dollars per year, and because of this, it is flexibly secured to protect that value.

Sunday, August 23, 2009

Intellect 3.0 Architecture: The Server

The Intellect 3.0 Server is the core Intellect production run-time system that hosts and manages Intellect Tasks and various important Intellect capabilities. It is most often launched by the Intellect 3.0 Service, an automatic Windows Service, which enables the Intellect Server to automatically start, load and resume execution on a computer reboot, and to run in the background 24 hours a day, 7 days a week, converting raw data into intelligent results. The Intellect 3.0 Server also has a Console application, which can be used for purely technical administration purposes.

The Intellect 3.0 Server automatically creates and starts not only its assigned Tasks, but also some key internal services, including an email server for sending and receiving email messages, logging services to keep logs of activities, the Historian which archives data or any object sent to it, and even an NLP (Natural Language Processing) sub-system so in the future we can have interactive discussions with Intellect to solve problems and get answers to questions ("Why was the last batch of product bad?"). This list of "infrastructure" services will grow over time. The Intellect 3.0 Server also manages Tasks that have been assigned to it and can start/stop/pause/resume/save and load their state. Additionally, the Intellect 3.0 Server establishes connections to "Clients" out on the network (other computing nodes), to which it can assign work. These "Clients" can run as dedicated computing resources or on a "voluntary" basis, through the use of Intellect 3.0 screen savers. This way, if people leave their computers, either for a while or overnight, Intellect 3.0 can make use of the otherwise idle compute power to do sophisticated analysis.
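The start/stop/pause/resume management described above is essentially a small state machine per Task. The following Python sketch is purely illustrative (Intellect 3.0 is .NET-based, and the state and action names here are assumptions, not its real API):

```python
# Illustrative task-lifecycle sketch: a table of valid (state, action)
# transitions, so invalid requests (e.g. resuming a stopped task) fail fast.

VALID = {
    ("stopped", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "resume"): "running",
    ("running", "stop"): "stopped",
    ("paused", "stop"): "stopped",
}

class Task:
    def __init__(self, name):
        self.name = name
        self.state = "stopped"

    def transition(self, action):
        key = (self.state, action)
        if key not in VALID:
            raise ValueError(f"Cannot {action} a {self.state} task")
        self.state = VALID[key]

t = Task("PredictProduction")
t.transition("start")
t.transition("pause")
t.transition("resume")
print(t.state)  # running
```

A server managing many such Tasks just holds a collection of them and applies transitions on request, persisting each task's state so it can resume after a reboot.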

Applications based on Intellect 3.0 can connect to the Intellect 3.0 Server from anywhere on the network to "chat", ask questions, define work, or receive results (information). These results might include predictions, or recommendations to improve performance, or perhaps optimization results, or maybe alerts to abnormal conditions it has detected, or any other types of results produced.

The Intellect 3.0 Server concept is not new. It was first created in Intellect 2.0 in the early 2000s, and we've extended its power much further based on that success.

More later...

Monday, July 27, 2009

Intellect 3.0 Architecture: An Introduction

Intellect 3.0 is based on a sophisticated information technology architecture, and explaining it will take numerous posts. So let's get started!

In brief, the Intellect 3.0 architecture is an object-based, task-oriented, extensible, bi-directional, pipe-lined, message-passing, synchronous and asynchronous, multi-threaded, distributed, multi-processing information manufacturing architecture. That is certainly a brain-numbing mouthful, so let's break it down...

The "object-based" part is pretty much expected these days, but rather than describing a "Customer" or an "Order", these objects are data crunchers. They take in data in any agreed form (objects, events, arrays, etc.), perform processing on the data and broadcast the results on to other objects.

"task-oriented" means that the architecture is designed to do tasks... discrete, independent units of work, such as getting data, doing math calculations, making a prediction, optimizing something, writing data, etc.

"extensible" means that at various points, 3rd parties can create their own tasks with capabilities that integrate into the Intellect architecture to solve specific unique needs and challenges.

Through the "bi-directional pipe-lined message-passing" capabilities, tasks can be linked together using subscriptions to one another in any structure: chains, loops, branching trees, converging reverse trees, in all sorts of pathways. The "bi-directional" part means that in the normal direction an object gets, processes and sends information, while in the reverse direction it receives information requests from downstream objects and, if it does not have the information needed to fulfill a request, passes its own request back up the chain. This somewhat novel back-linking is a "Just In Time" (JIT) concept applied to information processing. The "pipe-lined" aspect means there is more than one subscription "channel", presently one for data and another for "command and control" of the tasks. That way, an application can message an object to start/stop/pause/resume without stepping into whatever overwhelming torrent of data may be flowing.
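The subscription idea can be sketched in a few lines of Python. This is a toy model, not Intellect's implementation (which is .NET-based): the class and task names are made up, and it shows only one data channel with a simple upstream back-request.

```python
# Toy subscription pipeline: data flows downstream via publish(); a
# downstream task missing a result can ask upstream via request() (the
# "JIT" back-link). All names here are illustrative.

class TaskNode:
    def __init__(self, name, compute):
        self.name = name
        self.compute = compute    # function transforming incoming data
        self.subscribers = []     # downstream tasks
        self.upstream = None      # for reverse-direction requests
        self.cache = None         # last result produced

    def subscribe(self, other):
        self.subscribers.append(other)
        other.upstream = self

    def publish(self, data):
        # Normal direction: process, then push to all subscribers.
        self.cache = self.compute(data)
        for sub in self.subscribers:
            sub.publish(self.cache)

    def request(self):
        # Reverse direction: if we have no result yet, ask upstream.
        if self.cache is None and self.upstream is not None:
            self.upstream.request()
        return self.cache

source = TaskNode("source", lambda d: d)
doubler = TaskNode("doubler", lambda d: [x * 2 for x in d])
source.subscribe(doubler)

source.publish([1, 2, 3])
print(doubler.request())  # [2, 4, 6]
```

Branching trees fall out naturally: subscribing two tasks to `source` makes both receive every publish, and each keeps its own downstream chain.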

"synchronous and asynchronous" means that tasks can run independently or wait for other tasks to complete their work.

"multi-threaded distributed multi-processing" enables Intellect 3.0 to make use of not just the threads in a multi-core CPU, but also multiple CPUs in the same computer, as well as multiple computers on the network. This gives Intellect 3.0 the ability to draw upon vast compute power if needed. It is also architected to have no single point of messaging or processing coordination, which eliminates bottlenecks in the design that might limit performance.
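At its simplest, this means independent tasks can be farmed out to a pool of workers. Here is a minimal Python sketch of that idea (Intellect 3.0 itself is .NET-based; the `crunch` task and chunk sizes are just stand-ins):

```python
# Spread independent data-crunching tasks across worker threads.
from concurrent.futures import ThreadPoolExecutor

def crunch(chunk):
    # Stand-in for a real data-crunching task: sum of squares.
    return sum(x * x for x in chunk)

chunks = [range(0, 100), range(100, 200), range(200, 300)]

with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(crunch, chunks))

print(sum(results))  # 8955050 (sum of squares 0..299)
```

Distributing across machines rather than threads is the same shape, with the pool replaced by network connections to the "Client" nodes described in the Server post.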

"information manufacturing" is one key purpose of Intellect... to convert data through various tasks ("workcenters" in manufacturing terms), into more valuable information, whether this is to perform cause and effect analysis, make predictions, estimate probabilities, recognize interesting or abnormal situations, do optimizations, raise alerts, take actions and so forth.

The features, advantages, benefits, not to mention the future potential, are outstanding. Architected in this way, Intellect 3.0 is capable of implementing very simple solutions (get data, compute a result, write it somewhere) to solutions of near infinite complexity through creating chains, loops, branching trees of any size or length, up to the limit of the machine(s) employed.

How did we come up with this? Well, it's been an evolution, starting back in 1997 when we created our first distributed multi-processing application... not easy... but a highly informative learning experience. Then, in Intellect 2.0, one of our bright engineers came up with a message-passing, task-oriented architecture for linking tasks in the Intellect 2.0 server, and we expanded it from there. This historical tidbit is to suggest that this is not new, but based on a lot of experience gained over more than a decade.

More later... There are such things as servers, clients, historians, metadata management and other aspects of the Intellect 3.0 architecture you should know about.

Sunday, April 19, 2009

Data Typing

Data comes in all forms. Names, ages, birth dates, addresses, temperatures, day of the week, time, price, pressure, categories (such as M/F for gender), scores, zip codes, rankings ("on a scale of 1-10, how much do you like X?"), flow rates, keywords in news, size of a package, tracking number, lot number, and on and on and on.

Sometimes data is a unique identifier for other data, like tracking information by Employee Number or Social Security Number, or date/time, or sequence number, or lot number, etc. In these cases, the data is an identifier for a row or block of data, could be in most any form, and can have sub-meanings as well (employee numbers or lot numbers may be sequential... or maybe not).

Generally, when processing data, it is good policy to keep the data together. That is to say, if you are computing something based on a person's age and zip code, it is smart to not allow these to get separated from the person's identifier or allow a stock's price to be separated from its date. The data might look like this:

Name,Age,Zip Code,Yearly Salary
Tom Jones,38,55439,34504.44
Betty Mabel,23,97404,45505.00
...

Name is an identifier, but not necessarily unique, and could be a gender proxy (most of the time, but not always... Sam... man or woman?); age is clearly a number (or maybe age is really a date difference: years since birth, truncated); zip code is numeric, but not really a scalar number; and salary is a floating point or "money" type. The point is, even in this simple case we have a hard time telling the purpose, use and meaning of the data, and even though it seems so obvious, really it is not. It depends on what the data is and how we intend to use it.

Let's say we wanted to pass along an array of this data for processing by various tasks. We would need to "cast" the data array to a type (string, integer, long, money, date, ...). Hmmm... things get sticky, because most arrays are of a single type, so the data might have to be split into multiple arrays of different types, and each type depends on the meaning assigned to the data at that time (and the presumptions of the person doing the data typing). So "Tom" might be stuck into a string array, while his age might be put into an array of what... ints? doubles? floats? decimals? monies? Hmmm... Just create an array of "object", you say? Possible, but then every consumer must cast each element back to the right concrete type at every use, so no, it can easily get messed up. In just these four very common variables, there are many interpretations and data types that could be used!

Sooo... In Intellect 3.0, we take the easy, but very effective, way out... In those cases where we are sending arrays of data around the system amongst data processing tasks, we keep it as a string array. The Intellect 3.0 tasks convert the data into the type suitable for what each task is doing, whether that be decimal, or double, or float, or int, or category, or enumeration, or date, or keep it as a string, or whatever makes sense at that moment. This allows us to keep "Tom" in the same array with his age, zip and income, and stock prices with their dates and times.
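The approach can be sketched in a few lines. This is an illustration in Python, not Intellect's .NET code, and the task functions (`average_salary`, `zip_codes`) are hypothetical; the point is that each task converts only the columns it cares about, while the rows stay together as strings.

```python
# Rows travel as string arrays; each task interprets columns as it needs.
rows = [
    ["Tom Jones", "38", "55439", "34504.44"],
    ["Betty Mabel", "23", "97404", "45505.00"],
]

def average_salary(rows):
    # This task reads column 3 as a floating-point "money" value.
    return sum(float(r[3]) for r in rows) / len(rows)

def zip_codes(rows):
    # Another task treats column 2 as a categorical string, not a number.
    return {r[2] for r in rows}

print(average_salary(rows))  # about 40004.72
print(zip_codes(rows))       # {'55439', '97404'}
```

Notice that the same column can be a number to one task and a category to another, and "Tom Jones" never gets separated from his age, zip and salary.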

Downside: Lots of conversions going on from string to a multitude of data types.

Upside: We keep data together and the data is properly used, in different ways, for different purposes at any time in the process. We can be flexible.

Blessing: In .NET, Microsoft has done a super job with string processing, conversion and memory allocation, which makes this lightning fast, even on millions of rows. In Visual Studio 6, if you even had a 50-column by 150,000-row string array in RAM, the application would likely crash. Not a problem in .NET.

Now, handing array data around as strings is not required in Intellect 3.0; in fact, tasks shuttle messages containing objects of a declared type, but that is a topic for another post, another day.

Thursday, April 2, 2009

Welcome!

This blog will cover the technical aspects of the IntelliDynamics Intellect 3.0 software architecture and tools. For general information and news, please head over to our home site at IntelliDynamics.net or the IntelliDynamics blog which looks at things from a management perspective.