

NAV process flow

Primary data such as sales orders, purchase orders, production orders, financial transactions, job transactions, and so on flow through the NAV system as follows:

  • Initial Setup: Entry of essential Master data, reference data, control and setup data. Much of this preparation is done when the system (or a new application) is first set up for production use.
  • Transaction Entry: Transactions are entered into a Journal table; data is preliminarily validated as it is entered, master and auxiliary data tables are referenced as appropriate. Entry can be manual keying, an automated transaction generation process, or an import function which brings transaction data in from another system.
  • Validate: Provide for additional test validations of data prior to submitting the batch to Posting.
  • Post: Post the Journal Batch, completing transaction data validation, adding entries as appropriate to one or more Ledgers, including perhaps a register and a document history.
  • Utilize: Access the data via Forms and/or Reports of various types as appropriate. At this point, total flexibility exists. Whatever tools are available and are appropriate for users’ needs should be used. There are some very good tools built into NAV for data manipulation, extraction, and presentation. In the past, these capabilities were considered good enough to be widely accepted as full Online Analytical Processing (OLAP) tools.
  • Maintenance: Continue maintenance of Master data, reference data, and setup and control data, as appropriate. The loop returns to the beginning of this data flow sequence.
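The journal-to-ledger cycle in the steps above can be sketched in miniature. This is an illustrative model in plain Python, not NAV code; the names (`JournalLine`, `Ledger`, `post`) and the validation rules are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class JournalLine:
    account: str
    amount: float

class Ledger:
    """Append-only store: posted entries are never edited in place."""
    def __init__(self):
        self._entries = []

    def append(self, line):
        self._entries.append(line)

    @property
    def entries(self):
        return tuple(self._entries)   # read-only view of the permanent record

def post(journal, ledger, valid_accounts):
    """Fully validate the batch, then move every line into the ledger."""
    for line in journal:
        if line.account not in valid_accounts:
            raise ValueError(f"unknown account {line.account!r}")
        if line.amount == 0:
            raise ValueError("zero-amount journal line")
    for line in journal:
        ledger.append(line)
    journal.clear()   # the journal batch is consumed by posting

valid_accounts = {"1000", "2000"}
journal = [JournalLine("1000", 250.0), JournalLine("2000", -250.0)]
ledger = Ledger()
post(journal, ledger, valid_accounts)
# journal is now empty; ledger permanently holds the two posted entries
```

The essential point the sketch captures is that validation happens before any line reaches the ledger, and that once posted, entries live in an append-only structure.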

Microsoft Dynamics NAV 2009 Development Tools

The preceding steps provide a simplified picture of the flow of application data through a NAV system. Many of the transaction types have additional reporting, more ledgers to update, or even auxiliary processing. However, this is the basic data flow followed whenever a Journal and Ledger table are involved.

Data preparation

Prepare all the Master data, reference data, and control and setup data. Much of this preparation is done initially, when an application is first set up for production usage.

Naturally, this data must be maintained as new Master data becomes available, as various system operating parameters change, and so on. The standard approach for NAV data entry allows records to be entered that have just enough information to define the primary key fields, but not necessarily enough to support processing. This allows a great deal of flexibility in the timing and responsibility for entry and completeness of new data.

This system design philosophy allows initial, incomplete data entry by one person, with validation and completion handled later by someone else. For example, a sales person might initialize a new customer entry with name, address, and phone number, saving the record with only the data they are able to enter. At this point, there is not enough information recorded to process orders for this new customer.

At a later time, someone in the accounting department can set up posting groups, payment terms, and other control data that should not be controlled by the sales department. This additional data may make the new customer record ready for production use. Because data often comes into an organization on a piecemeal basis, the NAV approach allows the system to be updated on an equally piecemeal basis, providing a flexible user-friendliness that many accounting-oriented systems lack.
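The two-stage customer setup described above can be modeled simply. This is a conceptual Python sketch, not NAV's actual Customer table; the field names and the `ready_for_orders` rule are assumptions made for the example:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Customer:
    no: str                              # primary key: the only field required at creation
    name: str = ""
    phone: str = ""
    posting_group: Optional[str] = None  # filled in later by accounting
    payment_terms: Optional[str] = None  # filled in later by accounting

    def ready_for_orders(self) -> bool:
        """A record can exist long before it is complete enough to process."""
        return bool(self.posting_group and self.payment_terms)

# A sales person creates a partial record with just what they know...
cust = Customer(no="C00042", name="Acme Ltd", phone="555-0100")
assert not cust.ready_for_orders()

# ...and accounting completes it later, making it production-ready.
cust.posting_group = "DOMESTIC"
cust.payment_terms = "NET30"
assert cust.ready_for_orders()
```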

Transaction entry

Transactions are entered into a Journal table; data is preliminarily validated as it is entered, master and auxiliary data tables are referenced as appropriate.

NAV uses a relational database design approach that could be referred to as a “rational normalization”. NAV resists being constrained by the concept of a normalized data structure, where any data element appears only once. The NAV data structure is normalized so long as that principle doesn’t get in the way of processing speed. Where processing speed or ease of use for the user is improved by duplicating data across tables, NAV does so.

At the point where Journal transactions are entered, a considerable amount of data validation takes place. Most, if not all, of the validation that can be done is done when a Journal entry is made. These validations are based on the combination of the individual transaction data plus the related Master records and associated reference tables (for example, lookups, application or system setup parameters, and so on). Here, too, you find the philosophy of allowing entries that are incomplete and not yet ready for processing.
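The field-level checks described above, run as each value is keyed in against master and reference tables, can be illustrated as follows. In C/AL this work is done by table triggers such as OnValidate; the Python below is only a conceptual stand-in, and the table names and data are invented:

```python
# Hypothetical master/reference data that field validation consults.
MASTER = {
    "customer": {"C00042", "C00099"},
    "currency": {"USD", "EUR"},
}

def validate_field(table: str, value: str) -> None:
    """Reject a keyed-in value with no match in the referenced master table.

    A blank value is allowed through: the journal line may legitimately
    be incomplete at entry time and finished later.
    """
    if value and value not in MASTER[table]:
        raise ValueError(f"{value!r} not found in {table}")

validate_field("customer", "C00042")   # accepted: exists in master data
validate_field("currency", "")         # accepted: blank, line still incomplete
```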

Testing and Posting the Journal batch

Any additional validations needed to ensure the integrity and completeness of the transaction data are performed either in pre-Post routines or directly in the course of the Posting process. The actual Posting of the Journal batch occurs only when the transaction data has been completely validated. Depending on the specific application function, when Journal transactions don't pass muster during this final validation stage, either the individual transaction is bypassed while acceptable transactions are Posted, or the entire Journal Batch is rejected until the identified problem is resolved.

The Posting process adds entries as appropriate to one or more Ledgers and sometimes a document history. When a Journal Entry is Posted to a Ledger, it becomes a part of the permanent accounting record. Most data cannot be changed or deleted once it is resident in a Ledger except by a subsequent Posting process.

During the Posting process, Register tables are also updated showing what transaction entries (by ID number) were posted when and in what batches. This adds to the transparency of the NAV application system for audits and analysis.

In general, NAV follows the standard accounting practice of requiring Ledger corrections to be made by Posting reversing entries, rather than deletion of problem entries. The overall result is that NAV is a very auditable system, a key requirement for a variety of government, legal, and certification requirements for information systems.
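The reversing-entry convention can be shown in a few lines. This is a generic accounting sketch in Python, not NAV's reversal routine; the dictionary layout and the `reverses` field are invented for illustration:

```python
def reverse(ledger_entries, entry_index):
    """Correct a posted entry by appending an offsetting entry,
    leaving the original in place for the audit trail."""
    original = ledger_entries[entry_index]
    ledger_entries.append({
        "account": original["account"],
        "amount": -original["amount"],   # exact offset of the original
        "reverses": entry_index,         # cross-reference for auditability
    })

ledger = [{"account": "1000", "amount": 250.0}]
reverse(ledger, 0)

# The net effect is zero, but both entries remain visible to an auditor.
assert len(ledger) == 2
assert sum(e["amount"] for e in ledger) == 0.0
```

Nothing is deleted: the correction is itself a posted, traceable transaction, which is what makes the ledger auditable.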

Accessing the data

The data in a NAV system can be accessed via Pages and/or Reports of various types as appropriate, providing total flexibility. Whatever tools are available to the developer or the user, and are appropriate, should be used. There are some very good tools in NAV for data manipulation, extraction, and presentation. Among other things, these include the SIFT/FlowField functionality, the pervasive filtering capability (including the ability to apply filters to subordinate data structures), and the Navigate function. NAV 2009 added the ability to create page parts for graphing, with a wide variety of predefined chart page parts included as part of the standard distribution. You can create your own chart parts as well, but that discussion is outside the scope of this book. There is an extended discussion and some tools available in the NAV Blog community.
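Conceptually, a FlowField is a total computed over filtered ledger entries (NAV's SIFT maintains such totals incrementally rather than rescanning). The naive Python below shows only the idea; the entry layout and `flow_sum` helper are made up for the example:

```python
# Sample ledger entries; in NAV these would live in a Ledger Entry table.
entries = [
    {"customer": "C00042", "date": "2010-01-15", "amount": 100.0},
    {"customer": "C00042", "date": "2010-02-10", "amount": -40.0},
    {"customer": "C00099", "date": "2010-01-20", "amount": 75.0},
]

def flow_sum(entries, **filters):
    """Sum 'amount' over entries matching every filter: the idea behind
    a SIFT-backed FlowField, computed here by a simple rescan."""
    return sum(
        e["amount"]
        for e in entries
        if all(e[key] == value for key, value in filters.items())
    )

balance = flow_sum(entries, customer="C00042")   # 100.0 - 40.0 = 60.0
```

Changing the filters changes the total without touching the underlying entries, which is exactly how a filtered FlowField behaves on a page.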

There are a number of methods by which data can be pushed or pulled from a NAV database for processing and presentation outside NAV. These allow the use of more sophisticated graphical displays, or the use of other specialized data analysis tools such as Microsoft Excel or various Business Intelligence (BI) tools.

Ongoing maintenance

As with any database-oriented application software, ongoing maintenance of Master data, reference data, and setup and control data is required, as appropriate. Of course at this point, the cycle of processing returns to the first step of the data flow sequence, Data Preparation.


