Working In Uncertainty

Controls for e-business processes

First published 23 December 2002.

e-biz controls – the big picture

E-business applications, such as retail sales and supply chain integration over the Internet, have special characteristics with big implications for the techniques needed to control them.

Whether we are designing, reviewing, or promoting the idea of controls around a specific e-business process, we often need to work out a tailored architecture of controls of all types and justify it to decision makers and others. We must be able to explain our grand design – our big picture – of controls for the e-business process and convince others of our view.

This means asking searching questions about the characteristics commonly associated with e-business that we know drive controls design, and using the answers to rapidly sketch out the scheme of controls we would recommend or expect. Initially this means just identifying what types of control mechanism to emphasise and where, rather than specifying every individual control. For example, it might be obvious immediately that many edit checks will be needed, even though listing them all could take weeks of work.

From the high level controls scheme will flow more detailed design or review work.

This paper considers the questions we should ask, and what the answers imply. It provides a starting point for designing a cost-effective, culturally and strategically appropriate set of controls.

New knowledge

As this paper explains, the controls that often come to the fore when designing for e-business processes include some that have not been so important in the past.

Controls designers and auditors need to be comfortable with these new controls and develop greater knowledge of them to meet this new demand.

e-business processes

e-selling

The extent to which e-business is likely to transform a market tends to depend on the nature of the product or service being bought and sold. Retailing using a web site is more appropriate for products or services that are well defined, widely known, of relatively low unit value, and especially those which can be delivered electronically.

Extent of e-business/the process used, with examples:

  • Initial research of products/suppliers only – expensive, tailored consulting.

  • Detailed investigation of products and services too – computer and network equipment.

  • And goods also ordered across the net – food, clothes.

  • And goods also delivered across the net – music, video, news.


Internal controls are important in all these applications, but particularly when a sale is made. The following analysis usually assumes a sale is being made.

e-buying

Purchasing via a web site is more common in business-to-business applications. Control is again vital. The control priorities flow from the factors listed below rather than from the fact that the context is business-to-business.

Thinking about the risk issues

Perhaps not surprisingly, research sponsored by companies offering trust seals, security software, and payments systems has consistently pointed to security fears being the one factor above all else that is holding back commerce over the net.

Yet, when consumers complain it is rarely about security failings. The picture is much more complex and the range of problems much wider.

A friend of mine had a very bad introduction to buying over the Internet. Several purchases of greetings cards were made in one session. The cards were to be sent direct to the recipients with appropriate messages within. None of the items was delivered correctly. The most upsetting and confusing was a card that should have read ‘Get well soon’ but actually said ‘Congratulations!’

We need to think widely about the issues if we are to design and implement comprehensive control solutions.

Thinking about the control issues

The starting point for effective controls design is to think about what is special, or distinctive, about the company involved, the project, and the business process to be controlled. These characteristics are the drivers of controls design and their implications for risks, economics, and culture need to be drawn out.

From this it is possible to see how a generic, un-tailored scheme of controls should be modified to meet the special needs and opportunities of the business process to be implemented.

At the initial stage it is not necessary to identify every control. Some controls might be obvious but generally it is enough to identify the type of control mechanism to emphasise at different points in the process. The key requirement is to get beyond control objectives and specify the type of mechanism.

The following sections suggest some e-business characteristics and their implications.  I hope you find them thought provoking and helpful in practice.

e-business characteristics

Observation 1: Many ‘dot.com’ businesses are new and rapidly growing.

Observation 2: Historically, poor reliability has been accepted from Internet services.

Implication 1: For new e-business companies, as opposed to established companies implementing an e-business process, the control environment is often weak or non-existent and strict controls are unlikely to be accepted by management or staff. The strategic imperative for ‘dot.com’ companies is usually rapid expansion and growth, often with diversification. Their share price soon becomes a major factor in decision making. Consequently, risks of inefficiency or slightly irritating customer service are not particularly important to management, but reputation-damaging risks, and risks that could seriously impede growth, are a high priority.

Initially, management believe they know what is going on in their company. As the company grows this soon becomes a fantasy and the lack of formal controls during a period of incredible growth becomes a major risk factor.

Implication 2: There will be conflict between the desire to launch new services quickly and the desire to launch reliable, well supported services. Normally, the conclusion will be that speed matters more, but over time, depending on management, this may change. Here's Theresa Gattung, CEO of Telecom New Zealand (TNZ), writing in PricewaterhouseCoopers Infocom Review about TNZ's introduction of Web-based services:

‘How could we, as a large, lumbering telco, become Internet-capable and extend our market position as an innovator? We knew at the outset that we couldn't have a telco manager running our ISP. Instead, we hired an entrepreneur who was Internet-savvy, and he got it going for us and well on the way to critical mass.

‘The culture change also was vital. Those of us who've come from the public-switched telephony world believe fervently in ubiquity and total reliability: the network must never go down. Of course, this isn't the Internet model, in which problems are common and help desks are often so congested that consumers can't always get quick fixes and answers when they have problems.

‘In addition, speed to market is a critical aspect of this business, and, traditionally, telcos have been slow moving and risk averse, going for the grand sweep rather than incremental change. We've tried to be more like the IT industry, getting modifications out quickly. We've concentrated on bite-sized chunks starting with product information, then onto purchasing products, moving through to online provisioning of services, getting them to market quickly, cheaply, and effectively, dropping those that don't work, and then moving onto the next phase.

‘Eighteen months after launch we decided to put in charge an executive with a telecom background in order to cope with the issues brought on by rapid growth. Slowness in answering customer inquiries and a lack of reliability were potentially damaging to our core brand.’

It is not clear how much longer customers will tolerate poor reliability, so for many ‘dot.com’ companies it may be necessary to achieve rapid improvements from a poor starting point. Attitude as well as procedures and technology will have to be changed.

Implication 3:  The prevailing culture in a new ‘dot.com’ company may be strongly opposed to ‘old fashioned’ control techniques. There may be a high proportion of influential young people in the company, with idealistic notions of freedom, equality, and trust. Controls based around tight supervision, segregation of duties, sign offs, and analysis of reports by ‘management’ may not be acceptable.

Where management are trying to empower their people it is crucial to avoid creating the impression that people are not trusted. Many traditional controls, explained and justified in the traditional way, give a clear message: ‘You are not trusted.’

These are some ideas for introducing controls within this culture:

  • Make sure that managers know that fraud, theft, internal politics, and errors will happen, even in their company, but that these can be tackled without making people feel like criminals.

  • Avoid using the word ‘control’. Try alternatives like ‘quality check’, ‘coaching review’, ‘written guidance’, ‘memory jogger/checklist’, and ‘team performance report’.

  • Talk most about the effect of controls on errors, even when there is also an effect on fraud risk (but ensure that adequate coverage of fraud risk is achieved).

  • Avoid obstructive controls directed against customers – they send the wrong messages to everyone.

  • Distribute monitoring information to whole teams rather than to team leaders only. Call them ‘team performance reports’, ‘service level reports’, etc.

  • Choose sensible sign off limits and consider post hoc review rather than pre-commitment authorisation as a way of raising the limits further.

Implication 4: Customer complaints will be a very important feedback channel. Although the company may not be concerned about service failings that are only a minor irritant to customers, really serious failings do need to be eliminated. In the absence of other quality information the customer complaints channel is just about all there is. (See also Observation 12, below, regarding lack of face-to-face contact with customers.)

Implication 5: Monitoring controls based on comparing actual results with ‘expectations’ derived from past results or forecasts are likely to be weak. This is not just because of the control environment. Expectations, inevitably, are based on past results, and the more quickly the business is growing and changing the less precise expectations will be. Crucially, with many new income types being introduced rapidly there is a high risk of system and reference data errors being missed because they were there from the outset. Also, smaller companies typically have more variable results.

To avoid this problem it is important to use monitoring controls that look for consistency between measures for the same period, including both financial and non-financial measures.
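To illustrate, here is a minimal sketch of such a cross-check in Python. The measures (orders taken, average order value, reported revenue) and the 5% tolerance are hypothetical illustrations, not prescriptions:

    # Minimal sketch of a cross-measure consistency check. Measure names
    # and the tolerance are hypothetical illustrations.

    def revenue_inconsistent(orders: int, avg_order_value: float,
                             reported_revenue: float,
                             tolerance: float = 0.05) -> bool:
        """True if reported revenue differs from orders x average order
        value by more than the tolerance, i.e. worth investigating."""
        expected = orders * avg_order_value
        if expected == 0:
            return reported_revenue != 0
        return abs(reported_revenue - expected) / expected > tolerance

    # 1,200 orders at an average of 25.00 suggest revenue near 30,000,
    # so a reported figure of 27,000 breaches the 5% tolerance.
    print(revenue_inconsistent(1200, 25.00, 27000.00))  # True

The value of a check like this is that an error present from the outset shows up as an internal inconsistency even when there is no reliable history to compare against.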

Observation 3: It is relatively easy to gather statistics about web site use.

Implication 6:  Monitoring control based on these statistics is likely to be attractive because the numbers are easy to obtain and there is a great deal of interest in them anyway.

Ideally, monitoring controls should look for consistency between these non-financial measures, and for consistency between them and financial results.

However, at present there is a problem: web site statistics are not always correct. In future this may improve as more packages become available with good management information. Consequently, a system of controls over the metrics themselves is needed, as discussed under Implication 28 below.

Observation 4: Where an established company is introducing an e-business process it is often to replace one based on other technology and there is a desire to achieve rapid acceptance of the new process.

Implication 7:  This may be associated with a better control environment, but rapid acceptance implies rapid take up as people switch from the old process to the new one. This means that within a very short time the new process is dealing with large volumes. This is an extremely powerful risk factor.

New processes are almost always less reliable initially than was intended and, with effort, can usually be made much more reliable over a few months, with further gains over a period of years.

Provided the volumes grow gradually this initial tendency towards errors is easily managed. However, if high volume is expected almost immediately the result is often huge backlogs and the need for punishing overtime and many temporary staff. The process gets caught in a system of vicious circles around error and delay that can lead to an accelerating build up of data errors. A full blown data quality meltdown can occur if monitoring is not in place or the response to problems is inadequate. Once this occurs the only recourse will be a costly data cleanse operation. In some cases data cleansing has cost more than the initial implementation.

This risk means companies must push for maximum reliability at go-live and ensure that controls and monitoring are rehearsed in advance and properly staffed from the beginning. Monitoring controls looking at process health (e.g. load, errors, backlogs) are vital and there should be frequent reporting of end-to-end statistics, with analysis and commentary to a group which owns the whole process. The focus should be on increasing the inherent reliability of the process to eliminate the reasons for errors and backlogs.
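For example, the end-to-end statistics might be summarised along the lines of this minimal sketch, in which the stage names, volumes, and alert thresholds are hypothetical illustrations:

    # Minimal sketch of an end-to-end process health report. Stage names,
    # volumes, and alert thresholds are hypothetical illustrations.

    stages = [
        # (stage, items in, items failed, backlog)
        ("order capture", 10000, 120, 300),
        ("payment",        9880,  60, 150),
        ("fulfilment",     9820, 190, 2400),
    ]

    for name, items_in, failed, backlog in stages:
        error_rate = failed / items_in
        flag = "  <-- investigate" if error_rate > 0.01 or backlog > 1000 else ""
        print(f"{name:14} errors {error_rate:6.2%}  backlog {backlog:5}{flag}")

Reports like this are only useful if a group owning the whole process responds to the flags; the reporting itself removes no errors.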

In general, if rapid volume increases are expected for a business process this raises just about every other risk.

Observation 5: The business process is often entirely computer supported.

Implication 8:  Obviously, this means that general computer controls, of all types, will be particularly important. The risks are magnified by the fact that some of the technology is still immature, with controls lagging behind functionality.

General computer controls include controls over computer operations, security, changes to software and hardware, and control of projects to implement new systems. The goal is to maximise the inherent reliability of the process.

More specific implications arise below.

Implication 9:  The business impact of systems being down is usually immediate and significant, with turnover freezing instantly. The ability to recover computer systems or keep business going in the event of some serious failure is important, but it is often better value to focus on avoiding the failures.

Many factors contribute to achieving operational resilience. One particularly important control is pro-active monitoring that looks ahead at possible future events that may create risks, or demands that the existing systems and processes are not capable of dealing with.

Operational resilience aims to maximise system availability, and achieve a consistently high level of performance. Performance is vital since, obviously, the faster the system works the more sales it can take. Also, for a web site, the slower the response the less attractive the site. Research into computer response times indicates that:

  • 0.1 seconds is about the limit for having the user feel that the system is reacting instantaneously.

  • 1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay.

  • 10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish.

Survey results published in Computer Weekly during the winter of 1999/2000 show that some companies were failing to provide acceptable levels of operational resilience. The surveys used two tests. The first test was how long it took to download the home page of the site. Results included: Safeway 25.6s, Harrods 18.9s, and National Express 9.73s. The second test was how many of 1,000 attempts to connect to the site failed. Results included: BP Amoco 256, Hyder 129, Harrods 126, and Safeway 62.

People used to much worse performance than this should bear in mind that the limitations of the site are not the only source of delay. Only BT can explain why their site's search facility sometimes takes so long to appear that it is timed out. I have asked for an explanation but not received any response at all so far.

Observation 6: The computer systems used are highly connected to others, and are usually connected directly, or indirectly, to the public Internet.

Observation 7: The web browsers commonly used are extensible, making them inherently difficult to secure.

Implication 10: The systems are physically connected to well over 200 million people, some of them criminals, some of them just warped. The public Internet is the most hostile computer environment that has ever existed. Ironically, though it was originally designed with war in mind, it was a conventional, physical war, not the cyber-war the Internet is now so vulnerable to. Culturally, many people involved with the Internet in its early days deliberately ignored security. Consequently, computer security techniques such as encryption and digital signatures are vital, but so is the security of web servers.

Gene Spafford, Director of the Computer Operations, Audit, and Security Technology (COAST) project at Purdue University in the USA, and author of one of the best books on Internet security, said:

‘Using encryption on the Internet is the equivalent of arranging an armoured car to deliver credit-card information from someone living in a cardboard box to someone living on a park bench.’

Furthermore, the typical operating system is so complex that a proficient cracker, once in, can quickly leave doors open for future attacks. Spafford again:

‘Basically, once you're in a network, you're in. It only takes one mistake, and then you've let them in, and then you're more or less dead, because if they know what they're doing you'll never get them out.’

Another factor, one that has been growing in importance over the last twenty years, is the enormous difference in security knowledge between the most and least sophisticated people connected to the net. Keeping up to date with all the latest security warnings, installing the latest security patches, and carrying out all recommended security procedures is enormously time consuming and most people, even in the largest organisations, do not have time to do it. Where possible, the job has been delegated to specialist companies such as those that provide anti-virus software and firewalls. However, there are still many aspects of security that most companies do not understand or attend to.

Despite this, one of the main causes of computer security weakness has not changed in twenty years. Although new attacks and defences are being developed all the time, the problem of requiring large populations of users to maintain passwords has not been solved in most cases.

In the book ‘@ large’, David Freedman and Charles Mann described the activities of one of the most prolific crackers ever documented. Using the names ‘Phantom Dialler’, ‘phantomd’, and ‘Infomaster’ an obsessive teenager obtained tens of thousands of user names and passwords and cracked every system to which he tried to gain access. Yet ‘Infomaster’ couldn't program, typed poorly (due to a viral problem), and never used an original attack in his cracking career. His success was based largely on patiently checking for well known, careless security slips, particularly passwords that were easy to guess.  He found them at NASA, Sun Microsystems, and defence establishments, to name but a few.

Nor is penetration the only worry. ‘Denial of service’ attacks involve flooding a site with traffic to either bring it down or degrade its performance. Attacks in 2000 succeeded against well known sites with powerful computer systems. The attacks used batteries of cracked computers at various innocent sites to generate traffic and trick the victim into responding to messages apparently from itself.

Clearly, controls over computer security have to be given high priority.

Implication 11:  Not only is the web server in danger, but so too is the web client. This is not just because of data sent to the server. Browser software is inherently insecure in the sense that browsers cannot screen out all potentially dangerous content in web pages and downloads without screening out too much of the content that users want. A server that has been attacked successfully can be used to attack clients in turn.

This risk has increased along with the increased volume of downloading, and the emergence of ‘push’ information services that send information to clients without receiving specific requests for it.

Observation 8: Personal information is often gathered and stored.

Observation 9: That personal information has usually been gathered across the public Internet.

Implication 12:  Privacy laws apply and are becoming more stringent over time. A strong system of controls around personal data is now essential.

Implication 13: Personal data has to be kept confidential across the net, and that requires encryption.
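As a minimal sketch of the idea, using the modern Python ‘cryptography’ package (key management, which is the hard part in practice, is reduced to one line here):

    # Minimal sketch: encrypting a personal-data record so that it is
    # unreadable without the key. In practice the key must be managed
    # carefully, and data in transit is normally protected by SSL/TLS.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # in reality, held in a key store
    f = Fernet(key)
    token = f.encrypt(b"name=A Customer; address=1 High Street")
    print(f.decrypt(token))          # recoverable only with the key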

Observation 10: Increasingly, payment over the net may use electronic forms of money.

Implication 14: Digital money, particularly digital cash, is an obvious target for crime. Counterfeiting and theft can be attempted with digital money just as with paper. The technical security techniques used with these systems are extremely complex, as they need to be.

Observation 11: The e-business system may include programmed decision making, including making contracts.

Implication 15:  Where decisions are made by software as opposed to people the pattern of errors is likely to be different. When decisions are taken by people the results are generally patchy and inconsistent. Different individuals may have different habits and levels of skill, while their motivation and energy may vary from moment to moment.

Decisions taken by computer will usually be consistent, but could be consistently wrong, though perhaps for reasons that are difficult to see when the software is being designed. For example, it can be very difficult to identify all the factors that influence a human decision maker, and to get the program to use even the factors identified. Also, programmed decision making is more often based on mathematical models, and mistakes are often made in the mathematics. During development these errors might be overlooked by others on the project who are put off by the ‘rocket science’ involved and impressed by the apparent mastery of the people doing the work.

Examples of bad decision making by computer include:

  • Daft business rules, enforced by computer (e.g. credit decisions).

  • Foolish use of very sophisticated technology e.g. neural networks, ‘intelligent’ agents, adaptive systems, case-based reasoning.

  • Subtle or perhaps even dramatic failure because decision making excludes some relevant factors.

  • Problems because a mathematical model is used whose assumptions are inappropriate (e.g. ignoring transaction costs or tax, assuming constancy when something is variable, assuming a normal distribution when the real distribution is skewed, or ignoring rounding errors – the LTCM hedge fund fiasco is a famous example).

  • Implementing flawed algebra, though perhaps based on a reasonable model (e.g. in an Internal Rate of Return or APR calculation; a correct calculation is sketched after this list).

  • Unforeseen interactions between computers reacting to each other's decisions (e.g. deadlock, positive feedback loops or unstable equilibrium leading to extreme behaviour such as the 1987 London stock market crash).

  • A web server could easily be capable of taking orders much faster than the business can deliver them, unless suitable safeguards are applied.

  • Another related error can occur where the software assumes that a person is supervising it, but does not check that this is so. A London newspaper reported in 1999 that a city trader left work without switching his PC off. Half an hour later colleagues realised what had happened but not before the PC had entered into trades that lost over £300,000. The report does not indicate if human supervision would have reduced the losses.
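To make the Internal Rate of Return example from the list above concrete, here is a minimal sketch of a correct calculation by bisection (the cash flows are hypothetical). Even this small amount of algebra offers several chances to go wrong: sign conventions, period indexing, and convergence handling among them.

    # Minimal sketch: Internal Rate of Return by bisection. Cash flows
    # are hypothetical; period 0 is the initial outflow.

    def npv(rate, cashflows):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    def irr(cashflows, lo=-0.99, hi=10.0):
        for _ in range(100):              # bisect until the bracket is tiny
            mid = (lo + hi) / 2
            if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
                hi = mid                  # root lies between lo and mid
            else:
                lo = mid
        return mid

    # An outlay of 1,000 returning 500 a year for three years:
    print(round(irr([-1000, 500, 500, 500]), 4))  # about 0.2338, i.e. 23.4%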

Observation 12: There is no face-to-face contact between buyer and seller.

Observation 13: Buyer and seller can more easily be in different countries.

Implication 16: If a user of a web site gets confused or has a question or objection not anticipated by the site designer there is usually nobody to help them. This implication relates to the usability implication covered in more detail below.

A research study by NFO Interactive found that retailers should consider implementing technologies that provide surrogates for the ‘in store’ level of customer service, such as Internet telephony or chat, to enable a live conversation with a company representative directly from the web site. The study indicated that just over 20% of online buyers would buy more if this type of help were available. Also, 15% of the frequent buyers stated that they would become a loyal shopper of the site if it provided this live connection.

Implication 17:  The company is insulated from a powerful source of feedback and learning, leading to an increased risk of bad decisions by the human management. Good management decisions tend to be based on a good understanding of what is going on. Making decisions on the basis of a few personal experiences is folly, since they may not be representative, but decisions based purely on statistics are also dangerous. Many people draw false conclusions from statistics. The sickening hype surrounding e-business in the early days increased the risk of losing touch with reality.

Even in companies where there is face-to-face contact it is common to find senior management running the business on reports, insulated from any feedback that might show them they are wrong.

To make sense of numbers people usually need something that gives them insight into what the numbers really mean, and this is the value of direct, personal contact.

Where this risk is present an effort is needed to create the personal, face-to-face experiences that management need. For example, they could visit their customers and have a coffee with them while the customer tries to use the company's web site.

In addition, feedback from customer complaints should be attended to very closely.

Implication 18:  It is easier for fraudsters to impersonate others. Controls that establish the authenticity of customers and suppliers are needed. These usually involve using certification authorities to link verified names and geographical addresses with public keys, but may rely on passwords or other techniques. This is a complex area.

Implication 19: It is easier for fraudsters to present themselves in a false light. They may pretend to be more credit worthy, or more capable than they really are. Considerable thought may be needed to find ways to verify claims they make about themselves if these present a risk.

Implication 20:  Where the parties are in different countries it may be more difficult to resort to law in the event of a dispute. Fraudsters may be beyond the reach of law for all practical purposes.

Implication 21:  Where another party is in a different country the laws of that country may apply, creating unexpected liabilities and other risks.

Implication 22:  Where the other party is in a different country the correct tax treatment (e.g. VAT rate) may be different.  This is a common problem area. If sales contracts are to be made it is necessary to check the country in which the other party is located.
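In code, the check might reduce to something like this minimal sketch, where the country codes and rates are illustrative placeholders (roughly right for 2002), not tax advice:

    # Minimal sketch: choosing a VAT treatment from the customer's
    # country. Rates are illustrative placeholders, not tax advice.

    VAT_RATES = {"GB": 0.175, "DE": 0.16, "FR": 0.196}

    def vat_due(country_code, net_amount):
        if country_code not in VAT_RATES:
            raise ValueError("no tax rule for " + country_code +
                             " - refer to a person")
        return net_amount * VAT_RATES[country_code]

    print(vat_due("GB", 100.0))   # 17.5 under the illustrative rate

The hard part is not the lookup but establishing the country reliably in the first place.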

Observation 14: Buyers often cannot see the product before purchase.

Implication 23: The level of returns may be higher than if goods can be seen in advance. This factor is shared with mail order. Some published material can be experienced over the net before purchase.

Observation 15: Data capture/sales order entry is done by the customer/supplier.

Implication 24:  Research shows that many people find computer interfaces difficult to use and that claims of intuitive, user friendly interfaces are largely false. Many users get lost in hypertext systems (e.g. the Web), while interacting with databases can be even more difficult. If users get confused, there is nobody to recognise the signs. If a customer's situation is unusual they can struggle to relate it to the screens and fields presented by the web site.

E-business strongly favours users who are intellectually suited to interacting with computers, especially where searching databases is done rather than just following hypertext links. The more likely it is that visitors to the web site will be ordinary people, the more important usability will be.

Although there are great differences in aptitude even between people with equal levels of education (though not necessarily the same type of education) it may also be that more educated people do better with computers. For example, according to Alan Cooper of Cooper Interaction Design: ‘We're in the process of creating a divided society: those who can use technology on one side, and those who can't on the other. And it happens to divide neatly along economic lines.’

Usability problems are extremely common, so much so that we accept them as a normal part of computer use. However, the effects of mediocre usability include missed sales, incorrect sales details, accidental purchases, and a general feeling of tension for the people involved. This results in more complaints and costs of reversing sales and correcting errors.

One reason that usability problems are so common is that people who develop and promote computer systems tend to be intellectually suited to the common user interface techniques and find them easy. They rarely appreciate that others do not think the same way and might have difficulty.

Another reason is that many people believe the clever graphics of modern software make the software easier to use. In reality, comparative experiments show that the style of a user interface is not important to usability. Software needs a consistent and natural underlying logic and clear language. This is often lacking in even the prettiest and most impressive interfaces.

Usability engineering, including usability testing, has developed to provide a rigorous but cost effective way of ensuring that user interfaces can be used reliably by the intended users.

The following information comes from Thomas K Landauer's book ‘The trouble with computers’ and is derived from a series of studies of usability testing in practice:

  • User centred design typically cuts errors in user-system interactions from 5% down to 1%, and reduces training time by 25%.

  • The average interface has around 40 usability defects in need of repair. (About 50% of flaws found get fixed successfully, typically.)

  • Two usability evaluations or user tests will usually find half the flaws; six will find almost 90% (the arithmetic behind these figures is sketched below). This work will only take a day or two.

  • After six tests, one can estimate accurately the number of remaining flaws and the rate at which they are being found.

  • Usability assessment has very large benefits relative to cost. The work efficiency effect of a software system can be expected to improve by around 25% as a result of a single day of usability testing. Intensive user-centred design efforts have typically improved efficiency effects by about 50%. (However, fundamentally flawed system specifications can lead to minimal gains from user-centred design.)

  • While specialists are better at usability design and at finding flaws, both systematic inspections and user tests can be done effectively by people with modest training.

These results, and experience as well, indicate that usability testing can reduce the difficulty and time for development while contributing dramatically to quality.
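The flaw-finding figures fit a simple model in which each test independently finds a fixed proportion of the flaws. Here is a minimal sketch, assuming a per-test detection rate of about 31% (an assumption chosen to be consistent with the numbers above):

    # Minimal sketch: expected proportion of usability flaws found after
    # n tests, assuming each test independently finds about 31% of them
    # (the 31% is an assumption consistent with the figures above).

    def proportion_found(n_tests, per_test_rate=0.31):
        return 1 - (1 - per_test_rate) ** n_tests

    for n in (1, 2, 6):
        print(n, round(proportion_found(n), 2))
    # prints 1 0.31, 2 0.52, 6 0.89 - two tests find about half the
    # flaws and six find almost 90%, matching the figures above.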

In ‘Usability engineering’, Jakob Nielsen surveys a wide range of usability testing techniques. These do not include releasing a beta test version and going ahead if nobody complains bitterly enough! The most important techniques include:

  • Thinking aloud – a representative user is asked to perform representative tasks using the software and says aloud what they are thinking as they do so. This can give insights into confusions that did not lead to an error for that person but would lead some people to make errors at least some of the time.

  • Retrospective testing – after the user has finished a task the experimenter asks them to go back over the experience and report the problems and confusions they experienced.

  • Coaching approach – the user performs the tasks as usual, but can ask for explanations or instructions if they get into difficulty. This helps to identify the information that would improve the user interface.

  • Heuristic evaluation – this is different in that there is no user and no task. Reviewers inspect the interface in detail using a checklist of common usability faults as a guide.

Observation 16: The information/data shown on web pages may be complex and difficult to maintain.

Implication 25:  Brochureware, terms and conditions, discount information, prices, product information, technical support material, and so on are complex forms of information. Their lack of a simple, regular structure (as in a database) makes data entry and maintenance more difficult. Change control through reviews and checks is required. Also, it may be helpful to constrain the format and style in such a way that variation is reduced and situations that frequently lead to error are eliminated.

Using a web site instead of printing paper documents makes publication cheaper and easier. Companies may begin to change their publications more frequently, leading to more opportunities for error. They may begin to assume frequent updates and start making statements with a short shelf life. Here again, excluding statements that are only true for a certain time will help to avoid dangerous errors. There should also be a rolling programme of checks to ensure that information remains up to date.

Observation 17: Web technology often interfaces with the internal systems of the company.

Implication 26:  Controls are needed to ensure this interface works reliably and continues to work reliably, with any problems being resolved quickly and without loss of data. The risk of error in this interface is increased by the fact that the systems involved will often be from different generations of technology, while the people involved may be from different generations of technologist.

Interpersonal differences could prove more difficult than technology differences.

Implication 27:  The result of the interface may be that people visiting the site are interacting with a system that was originally designed for use by employees of the company only. This is another threat to usability.

One well known example is the Tesco shopping site that presented categories of product such as ‘dry goods’ rather than the categories more familiar to ordinary shoppers. Also, the lack of pictures of products and adequate descriptions contributes to the common error of buying the wrong product, e.g. cooked pasta rather than dried, and mixer-size cans of cola rather than ordinary-size cans.

Observation 18: Where a digital product is being delivered across the net there is no physical stock to be depleted.

Implication 28:  This means that one of the basic accounting means of verifying sales (i.e. checking the arithmetic against stock movements) is not available. Until cash is received, what evidence of a sale exists at all? (The same problem applies to records of clicks used as the basis for advertising revenue.)

This is similar to the situation in telecoms, for example, where there is no stock depletion to compare with reported usage of the network (i.e. sales). In telecoms, it is not uncommon for 5% of revenue to be unbilled because of system and data glitches.

Clearly the process for recording sales needs to be one in which people have confidence. A variety of control techniques can be used to tackle this. Despite complete automation it is not safe to assume that no errors will occur. Measurement of leakage and end-to-end monitoring of process health statistics provide the top layer, with sequence numbers and reconciliations being the main forms of detailed checking.
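As a minimal sketch of the detailed layer, a check for gaps in sales sequence numbers might look like this (the record layout is a hypothetical illustration):

    # Minimal sketch: find sales records that should exist but do not,
    # by looking for gaps in an assigned sequence number.

    def missing_sequence_numbers(records):
        seen = {r["seq"] for r in records}
        return sorted(set(range(min(seen), max(seen) + 1)) - seen)

    sales = [{"seq": 1001}, {"seq": 1002}, {"seq": 1005}]
    print(missing_sequence_numbers(sales))  # [1003, 1004] -> investigate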

Observation 19: In business-to-consumer sales the orders are usually fulfilled by delivering goods to customers' homes.

Implication 29:  This is similar to other forms of home shopping, all of which have grown significantly in the last decade.  Despite this growth, delivery remains unreliable and frustrating for customers, particularly those who are not often at home.

A study in the USA by Dataquest Inc. in September 1999 found that about 20% of online shoppers experience problems with orders and customer service. Of households reporting problems, 49% said they had placed orders that did not arrive, and more than half the time they were billed for the undelivered goods. For a quarter of the households that experienced problems the main problem was the inability to contact the merchant's customer service department via e-mail.

Towards the end of 1999, PricewaterhouseCoopers in the UK sent a bottle of champagne to every employee's home address as a Christmas and New Year present, using a leading delivery service. Based on a sample of 750 across the UK, more than 13% failed to arrive, for a variety of reasons.

Observation 20: Large parts of the business service may be performed by third parties.

Implication 30: The e-business company that relies on suppliers to perform vital parts of its business must apply strong controls to monitor and influence the service it receives. This is another complex area. Examples include an ISP to run the web site, a delivery company, a storage company, and a value added network provider to support EDI (perhaps interfacing messages between companies by translating between formats).

Observation 21: Reconciliations designed to protect a company from mistakes by suppliers and customers may be removed.

Implication 31: In the bad old days customers and suppliers didn't trust each other. They each spent a lot of time checking anything received from the other to protect themselves against error and fraud. In the enlightened, reengineered, net era this is seen as inefficient. If the purpose of the checks is to confirm that information has been transferred correctly, surely it is enough for one company to do a reconciliation and share the results?

Blind trust may not be enough. A system of monitoring and automated checks and controls, backed by natural incentives reinforced by contract terms, is a better bet.

Observation 22: The only evidence of a contract may be electronic.

Implication 32: If a company supplies something to someone and the ‘customer’ says they didn't order it, how can the disagreement be resolved? If the only record is electronic, then how can the supplier ever prove that a genuine order was received? Surely the ‘customer’ could claim the electronic ‘record’ is a phoney, typed in by the supplier. The solution is called ‘non-repudiation’ and involves a number of sophisticated computer security techniques being used together.

In PGP, for example, the idea is that if A wants non-repudiation for messages from B, then B must prepare messages as follows: perform a digest of the message to be sent (e.g. the order) and then encrypt the result of the digest using B's private key. A stores the encrypted digest because only B could have created it and only with the message as input.
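The same idea in code: a minimal sketch of signing and verifying an order with an RSA key pair, using the modern Python ‘cryptography’ package (PGP wraps the same primitive in key management and message formats):

    # Minimal sketch of the signing idea described above. PGP adds key
    # management and message formats around the same primitive.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    private_key = rsa.generate_private_key(public_exponent=65537,
                                           key_size=2048)
    public_key = private_key.public_key()

    order = b"20 widgets, deliver to ..."

    # B signs: digest the message, then sign the digest with B's private key.
    signature = private_key.sign(order, padding.PKCS1v15(), hashes.SHA256())

    # A stores the signature; anyone with B's public key can confirm that
    # only B's private key, applied to exactly this message, produced it.
    public_key.verify(signature, order, padding.PKCS1v15(), hashes.SHA256())
    print("signature verifies")  # verify() raises an exception on failure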

Further reading

‘Empowerment takes more than a minute’, by Ken Blanchard, John P Carlos, and Alan Randolph, 1996.

‘The trouble with computers’, by Thomas K Landauer, 1996.

‘Usability Engineering’, by Jakob Nielsen, 1993.

http://www.useit.com/ by Jakob Nielsen

‘@ large’, by David H Freedman and Charles C Mann, 1997.

‘Why we buy: the science of shopping’, by Paco Underhill, 1999.

‘Web security & commerce’, by Simson Garfinkel with Gene Spafford, 1997.

‘Web security’, by Lincoln D Stein, 1998.







Words © 2002 Matthew Leitch. First published 23 December 2002.