1C file and client-server modes. File database bottlenecks and how to avoid them (from recent experience). What needs to be done to switch to client-server operation

How to speed up 1C:Accounting 8.3 (edition 3.0), or how to disable routine and background tasks

2019-01-15T13:28:19+00:00

Those of you who have already switched to the new edition of 1C:Accounting 8.3 (edition 3.0) have noticed that it has become slower than edition 2.0: strange slowdowns, and endless background tasks several times a day that nobody asked it to perform.

My accountants told me right after the transition that the new edition of 1C:Accounting 3.0 is downright slow compared to the previous ones, to the point where it is simply impossible to work.

I started looking into it and soon found out that the main cause of the freezes, and of the resulting user dissatisfaction, is routine and background tasks, many of which are enabled by default even though the vast majority of accountants have no need for them.

For example, why run the "Text Extraction" task a hundred times a day if we never perform full-text (accountants, don't be alarmed) search across all the objects of our database?

Or why constantly download currency rates if we have no currency transactions, or perform them only occasionally (in which case we can click the rate download button ourselves beforehand)?

The same applies to 1C's constant attempts to connect to the vendor's site to check and update bank classifiers. What for? I will press the classifier update button myself if I cannot find the right bank by its BIC.

Here is how to do this, step by step.

1. Go to the "Administration" section and select "Maintenance" () in the action panel:

2. In the window that opens, find and select “Routine and background tasks”:

3. Open, one by one, each task that has a checkmark in the "On" column.

4. Uncheck "Enabled" and click the "Save and Close" button.

5. Do this with each of the enabled tasks and enjoy the new edition. Overall, in my opinion, it is much better than the "two".

Keep in mind, though, that the platform will still re-enable some of the scheduled tasks you have disabled.

Recently, users and administrators have increasingly been complaining that new 1C configurations built on the managed application run slowly, in some cases unacceptably slowly. It is clear that the new configurations contain new functions and capabilities and are therefore more resource-hungry, but most users have no idea what primarily affects 1C performance in file mode. Let's try to fill this gap.

In an earlier article we already touched on the impact of disk subsystem performance on 1C speed, but that study concerned local use of the application on a standalone PC or a terminal server. Meanwhile, most small deployments involve working with a file database over a network, where one of the users' PCs serves as the server, or a dedicated file server based on an ordinary, usually also inexpensive, computer.

A brief survey of Russian-language resources on 1C showed that this question is diligently avoided; when problems arise, the usual advice is to switch to client-server or terminal mode. It has also become almost common wisdom that configurations built on the managed application run much slower than conventional ones. As a rule, the arguments are "iron-clad": "Accounting 2.0 simply flew, while the 'troika' barely crawls." There is some truth in these words, so let's try to figure it out.

Resource consumption: a first look

Before starting this study, we set ourselves two goals: to find out whether configurations based on the managed application are actually slower than conventional ones, and which specific resources primarily affect performance.

For testing we took two virtual machines, running Windows Server 2012 R2 and Windows 8.1 respectively, each given two cores of the host Core i5-4670 and 2 GB of RAM, which corresponds roughly to an average office machine. The server was placed on a RAID 0 array of two disks, and the client on a similar array of general-purpose disks.

As experimental bases we selected several Accounting 2.0 databases, release 2.0.64.12, which were then updated to 3.0.38.52; all configurations were run on platform 8.3.5.1443.

The first thing that catches the eye is the significantly increased size of the "troika's" information base, as well as its much greater appetite for RAM:

We are ready to hear the usual "why did they bolt all that onto the three", but let's not rush. Unlike users of client-server versions, which require a more or less qualified administrator, users of file versions rarely think about database maintenance. Nor do the employees of specialized companies that service (read: update) these databases think about it much.

Meanwhile, a 1C information base is a full-fledged DBMS of its own format, and it too requires maintenance; there is even a tool for this called Testing and Repairing the Infobase. The name may have played a cruel joke here: it implies a tool for troubleshooting problems, but low performance is a problem too, and restructuring and reindexing, along with table compression, are well-known database optimization techniques to any DBMS administrator. Shall we check?

After applying the selected actions, the database sharply "lost weight", becoming even smaller than the "two", which nobody had ever optimized, and RAM consumption also dropped slightly.

Later, after loading new classifiers and directories, creating indexes, and so on, the base size will grow; on the whole, "three" bases are larger than "two" bases. But that is not the main point: where the second edition was content with 150-200 MB of RAM, the new edition needs half a gigabyte, and this figure should be taken into account when planning the resources needed to work with the program.

Network

Network bandwidth is one of the most important parameters for network applications, especially for 1C in file mode, which moves significant amounts of data across the network. Most small-business networks are built on inexpensive 100 Mbit/s equipment, so we began testing by comparing 1C performance in 100 Mbit/s and 1 Gbit/s networks.

What happens when you launch a 1C file database over the network? The client downloads a fairly large amount of data into temporary folders, especially on the first, "cold" start. At 100 Mbit/s we predictably run into channel bandwidth, and loading can take considerable time, in our case about 40 seconds (one grid division on the graph equals 4 seconds).
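As a rough cross-check of that figure: 100 Mbit/s is at most about 12.5 MB/s of raw throughput, so a 40-second cold start implies a payload on the order of 400-500 MB. Below is a minimal sketch of this arithmetic in Python; the payload size and the protocol-efficiency factor are our assumptions for illustration, not measured values:

```python
def transfer_time_seconds(payload_mb: float, link_mbit_s: float,
                          efficiency: float = 0.85) -> float:
    """Time to move payload_mb over a link, allowing for protocol overhead.

    efficiency ~0.85 is a ballpark for SMB file sharing over Ethernet,
    an assumption rather than a measurement.
    """
    effective_mb_s = link_mbit_s / 8 * efficiency
    return payload_mb / effective_mb_s

# A hypothetical ~420 MB cold-start payload reproduces the observed ~40 s:
for link_mbit_s in (100, 1000):
    print(f"{link_mbit_s} Mbit/s: {transfer_time_seconds(420, link_mbit_s):.0f} s")
```

Note that this pure-bandwidth model predicts a tenfold speedup on gigabit, while the measured gain below is about fourfold: at 1 Gbit/s the bottleneck evidently shifts from the network to the client and its disks, which agrees with the SSD results later in the article.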

The second launch is faster, since some of the data is cached and remains there until a reboot. Switching to a gigabit network significantly speeds up program loading, both "cold" and "hot", and the ratio between the values is preserved, so we decided to express the results in relative terms, taking the largest value of each measurement as 100%:

As the graphs show, Accounting 2.0 loads twice as fast at any network speed, and the move from 100 Mbit/s to 1 Gbit/s speeds up loading about fourfold. There is no difference between the optimized and non-optimized "troika" databases in this mode.

We also checked the influence of network speed on heavy modes of operation, for example during group reposting. The result is again expressed in relative values:

Here it gets more interesting. The optimized "three" base in a 100 Mbit/s network works at the same speed as the "two", while the non-optimized one shows results twice as poor. On gigabit the ratios hold: the non-optimized "three" is likewise twice as slow as the "two", while the optimized one lags behind it by a third. Also, the move to 1 Gbit/s cuts execution time threefold for edition 2.0 and in half for edition 3.0.

To evaluate the impact of network speed on everyday work, we used the platform's Performance Measurement tool, executing a predefined sequence of actions in each database.

For everyday tasks, network throughput turns out not to be a bottleneck: the non-optimized "three" is only 20% slower than the "two", and after optimization it is about as much faster; the advantages of working in thin-client mode are evident. The move to 1 Gbit/s gives the optimized base no advantage, while the non-optimized base and the "two" begin to work faster, showing only a small difference between themselves.

These tests make it clear that the network is not a bottleneck for the new configurations, and the managed application even runs faster than the conventional one. You can also recommend moving to 1 Gbit/s if heavy tasks and database loading speed are critical; in other cases the new configurations allow effective work even in slow 100 Mbit/s networks.

So why is 1C slow? We'll look into it further.

Server disk subsystem and SSD

In the previous article we achieved a 1C performance boost by placing the databases on an SSD. Perhaps the performance of the server's disk subsystem is insufficient? We measured the server's disk performance during group reposting in two databases at once and got a rather optimistic result.

Despite the relatively large number of input/output operations per second (IOPS), 913, the queue length never exceeded 1.84, which is a very good result for a two-disk array. From this we can assume that a mirror of ordinary disks is enough for the normal operation of 8-10 network clients in heavy modes.
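For reference, a common rule of thumb for spinning disks is to keep the sustained average queue below roughly two outstanding requests per spindle. A tiny sanity check of the measured figures under that heuristic (the threshold is a general guideline, not a 1C-specific number):

```python
def queue_per_spindle(avg_queue_length: float, spindles: int) -> float:
    """Average outstanding I/O requests per physical disk."""
    return avg_queue_length / spindles

load = queue_per_spindle(1.84, 2)  # the measured peak queue on the two-disk array
print(f"{load:.2f} requests per spindle, headroom x{2.0 / load:.1f}")
# -> 0.92 requests per spindle, roughly 2x headroom before the disks saturate
```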

So does the server need an SSD? The best way to answer is testing, which we carried out using the same method: the network connection is 1 Gbit/s everywhere, and the result is again expressed in relative values.

Let's start with the loading speed of the database.

It may surprise some, but an SSD in the server does not affect database loading speed. The main limiting factor here, as the previous test showed, is network throughput and client performance.

Let's move on to group reposting:

We already noted above that disk performance is quite sufficient even for heavy modes, so SSD speed has no effect here either, except for the non-optimized base, which on the SSD caught up with the optimized one. This once again confirms that optimization operations organize the information in the database, reducing random I/O operations and increasing the speed of access to it.

In everyday tasks the picture is similar:

Only the non-optimized database benefits from an SSD. You can, of course, buy an SSD, but it would be far better to think about timely database maintenance. And don't forget to defragment the partition holding the infobases on the server.

Client disk subsystem and SSD

We analyzed the influence of an SSD on the speed of locally installed 1C in an earlier article, and much of what was said there also holds for network mode. 1C does use disk resources quite actively, including for background and routine tasks. In the figure below you can see Accounting 3.0 accessing the disk quite actively for about 40 seconds after loading.

At the same time, you should realize that for a workstation where one or two infobases are actively used, the performance of an ordinary mass-market HDD is quite sufficient. Buying an SSD can speed up some processes, but you will not notice a radical acceleration in everyday work, since, for example, loading will be limited by network bandwidth.

A slow hard drive can slow down some operations, but in itself cannot cause a program to slow down.

RAM

Although RAM is now obscenely cheap, many workstations still run with the amount of memory that was installed at purchase, and this is where the first problems lie in wait. Given that the average "troika" needs about 500 MB of memory, a total of 1 GB of RAM is clearly not enough to work with the program.

We reduced the system memory to 1 GB and launched two information databases.

At first glance everything looks fine: the program has curbed its appetite and fit into the available memory. But let's not forget that the need for working data has not changed, so where did it go? It was pushed out to disk: cache, swap file, and so on. The essence of the operation is that data not needed at the moment is moved from fast RAM, of which there is not enough, to slow disk memory.

Where does this lead? Let's see how system resources are used in heavy operations; for example, let's start group reposting in two databases at once. First on a system with 2 GB of RAM:

As we can see, the system actively uses the network to receive data and the processor to process it; disk activity is insignificant; during processing it increases occasionally, but is not a limiting factor.

Now let's reduce the memory to 1 GB:

The picture changes radically: the main load now falls on the hard drive, while the processor and the network sit idle, waiting for the system to read the needed data from disk into memory and push the unneeded data back out.

Subjectively, too, working with two open databases on a system with 1 GB of memory turned out to be extremely uncomfortable: catalogs and journals opened with noticeable delay and heavy disk access. For example, opening the "Sales of goods and services" journal took about 20 seconds, accompanied the whole time by high disk activity (highlighted with a red line).

To objectively evaluate the impact of RAM on the performance of managed-application configurations, we made three measurements: the loading speed of the first database, the loading speed of the second database, and group reposting in one of the databases. Both databases are fully identical, created by copying the optimized database. The result is expressed in relative units.

The result speaks for itself: a loading time increase of about a third is still quite tolerable, but a threefold increase in the time to perform operations in the database is not; there can be no talk of comfortable work under such conditions. This, by the way, is a case where buying an SSD can improve the situation, but it is much simpler (and cheaper) to fight the cause rather than the consequences and simply buy the right amount of RAM.

Lack of RAM is the main reason why working with the new 1C configurations turns out to be uncomfortable. Configurations with 2 GB of memory on board should be considered the minimum suitable. And keep in mind that our case was a "greenhouse": a clean system with only 1C and Task Manager running. In real life a work computer usually also has a browser and an office suite open, an antivirus running, and so on, so assume 500 MB per database plus some reserve, so that heavy operations do not run into a memory shortage and a sharp drop in performance.
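As a quick planning aid, that 500 MB-per-database rule of thumb can be written down directly. In the sketch below the overhead figures for the OS, browser, office suite, and antivirus are our assumptions for a typical office PC, not measurements:

```python
def required_ram_gb(databases: int,
                    per_db_mb: int = 500,        # rule of thumb from the tests above
                    os_and_apps_mb: int = 1500,  # assumed: OS, browser, office, antivirus
                    reserve_mb: int = 512) -> float:
    """Rough RAM estimate for a workstation running 1C file databases."""
    total_mb = databases * per_db_mb + os_and_apps_mb + reserve_mb
    return total_mb / 1024

for n in (1, 2, 3):
    print(f"{n} database(s): ~{required_ram_gb(n):.1f} GB")
# 1 -> ~2.5 GB, 2 -> ~2.9 GB, 3 -> ~3.4 GB
```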

CPU

Without exaggeration, the central processor can be called the heart of the computer, since it is the processor that ultimately performs all the calculations. To evaluate its role we ran another set of tests, the same as for RAM, reducing the number of cores available to the virtual machine from two to one; the test was performed twice, with 1 GB and with 2 GB of memory.

The result turned out to be quite interesting and unexpected: the more powerful processor quite effectively took up the load when resources were scarce, but the rest of the time it gave no tangible advantage. 1C:Enterprise in file mode can hardly be called a processor-hungry application; it is rather undemanding. Under difficult conditions the processor is burdened not so much by the application's own calculations as by overhead: extra input/output operations and the like.

Conclusions

So why is 1C slow? First of all, lack of RAM; the main load then falls on the hard drive and the processor. And if those do not shine with performance either, as is usually the case in office machines, we get the situation described at the beginning of the article: the "two" worked fine, while the "three" is ungodly slow.

In second place is network performance: a slow 100 Mbit/s channel can become a real bottleneck, although thin-client mode maintains a fairly comfortable level of work even on slow channels.

Then pay attention to the disk drive. Buying an SSD is unlikely to be a good investment, but replacing the drive with a more modern one is a good idea; the difference between generations of hard drives can be judged from our earlier material on the subject.

And finally, the processor. A faster model will certainly not hurt, but there is little point in boosting its performance unless the PC is used for heavy operations: batch processing, heavy reports, month-end closing, and so on.

We hope this material helps you quickly answer the question "why is 1C slow" and solve it most effectively and without extra costs.



The question arises: which DBMS to choose for 1C - file or SQL?

Let's try to figure out what a file database is and what a client-server (SQL) database is.

A DBMS is a database management system. The 1C:Enterprise platform supports the following DBMS options:

  • File (built into 1C)
  • MS SQL Server
  • Oracle
  • IBM DB2
  • PostgreSQL

The file option is the easiest way to deploy 1C:Enterprise: it requires no additional software. The infobase is a single database file in a shared folder that users access over the local network.


Advantages of the file option:

  • Easy to set up.
  • Does not require additional software.
  • Cheap and cheerful.

Disadvantages:

  • There is no security: any user of the system can copy the database file.
  • Low scalability: in some cases the system starts to slow down even with 5-7 users. This is due to the elevated transaction isolation level in file mode.
  • Some program features do not work in file mode (for example, scheduled tasks).
  • Limited database size (a maximum of roughly 4-12 GB).

Client-server DBMS for 1C

This architecture option is good for its increased fault tolerance and security. A very large number of users (up to 5,000 and more) can work in a client-server system simultaneously.

Pros of use:

  • Increased fault tolerance.
  • Allows a large number of users to work simultaneously.
  • The size of the database is unlimited.
  • There are free DBMSs (PostgreSQL).

Cons:

  • Not all DBMSs are free; the best ones (MS SQL Server) cost quite a lot of money.
  • SQL server administration is required.

Instructions for migrating from a file database to SQL

If you decide to move a 1C 8.3 (8.2) database from file mode to client-server, follow these steps (a scripted sketch follows the list):

  1. Create a new 1C database on the SQL server;
  2. Dump the file database to a *.dt file (Configurator - Administration - Dump infobase);
  3. Load the resulting file into the new database (Configurator - Administration - Restore infobase).
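The same three steps can be scripted through the platform's documented batch mode. A minimal sketch, in which the 1cv8.exe path, server names, infobase names, and credentials are all placeholders to replace with your own (and the parameters are worth verifying against the documentation for your platform release):

```python
import subprocess

V8 = r"C:\Program Files\1cv8\8.3.5.1443\bin\1cv8.exe"  # assumed install path

# 1. Create an empty server infobase backed by MS SQL Server.
subprocess.run([V8, "CREATEINFOBASE",
                "Srvr=app1c;Ref=acc30;DBMS=MSSQLServer;DBSrvr=sql1;DB=acc30;"
                "DBUID=sa;DBPwd=secret;CrSQLDB=Y"], check=True)

# 2. Dump the existing file infobase to a .dt file.
subprocess.run([V8, "DESIGNER", "/F", r"C:\bases\acc30_file",
                "/DumpIB", r"C:\backup\acc30.dt"], check=True)

# 3. Restore the dump into the new server infobase.
subprocess.run([V8, "DESIGNER", "/S", "app1c\\acc30",
                "/RestoreIB", r"C:\backup\acc30.dt"], check=True)
```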

Conclusions

The 1C system occupies a dominant position in the automation market for small and medium-sized businesses. If a company has chosen a 1C accounting system, then usually almost all employees work in it, from ordinary specialists to management. Accordingly, the speed of the company's business processes depends on the speed of 1C, and if 1C works at an unsatisfactory speed, this directly affects the work of the entire company and its profit.

There are essentially three ways to speed up 1C:

  • Increase in hardware capacity.
  • Optimization of operating system and DBMS settings.
  • Optimization of code and algorithms in 1C.

The first method requires buying hardware and licenses, and the third requires a great deal of programmer effort; as a result, both paths lead to significant financial costs. First of all, attention should go to the program code, since no increase in server capacity can compensate for bad code: any programmer knows that a few lines of code can create a process that fully loads the resources of any server.

If the company is confident that the code is optimal but the program still runs slowly, management usually decides to increase server capacity. At this point a logical question arises: what exactly is missing, how much of it, and what should be added in the end.

The 1C company gives a rather vague answer to the question of how many resources are needed; we have written about this earlier in our posts. So you have to run experiments of your own and work out what 1C performance actually depends on. Our experiments with the program's performance at EFSOL are described below.

When working with 1C 8.2, especially with configurations that use managed forms, we noticed a strange fact: 1C ran faster on a workstation than on a powerful server, even though all the workstation's specifications were worse than the server's.



Table 1 - Configurations on which initial testing was carried out

The workstation delivered 155% more performance than a 1C server with superior specifications. We set out to understand what was going on and to narrow down the search.

Figure 1 – Performance measurements at the workstation using the Gilev test

The first suspicion was that the Gilev test was inadequate. However, instrumented measurements of form opening, document posting, report generation, and so on showed that the Gilev test produces a score proportional to the actual speed of work in 1C.

Amount and frequency of RAM

An analysis of the information available online showed that many people write about 1C performance depending on memory frequency, frequency rather than volume. We decided to test this hypothesis, since the server has RAM at 1066 MHz versus 1333 MHz in the workstation, while the server's total RAM is already much larger. To make the dependence on memory frequency more visible, we dropped the workstation's memory not to 1066 MHz but straight to 800 MHz. The result: performance fell by 12%, to 39.37 units. On the server we installed 1333 MHz memory instead of 1066 MHz and got a slight increase of about 11%: performance reached 19.53 units. So memory is not the answer, although its frequency does give a slight gain.

Figure 2 – Performance measurements on a workstation after lowering the RAM frequency


Figure 3 – Performance measurements on the server after increasing the RAM frequency

Disk subsystem

The next hypothesis was related to the disk subsystem. Two assumptions immediately arose:

  • SSDs are better than SAS drives, even when the latter are in RAID 10.
  • iSCSI works slowly or incorrectly.

So a regular SATA disk was installed in the workstation instead of the SSD, and the same was done with the server: the database was moved to a local SATA disk. As a result, the performance measurements did not change at all. Most likely this is because there is enough RAM and the disks are barely involved during the test.

CPU

The server's processors are, of course, more powerful, and there are two of them, but their clock frequency is slightly lower than the workstation's. We decided to check the effect of processor frequency on performance: no higher-frequency processors were at hand for the server, so we lowered the processor frequency on the workstation, straight down to 1.6 GHz so that the correlation would be clearer. The test showed that performance dropped significantly, but even with a 1.6 GHz processor the workstation produced almost 28 units, nearly 1.5 times more than the server.

Figure 4 – Performance measurements on a workstation with a 1.6 GHz processor

Video card

There is information online that a video card can affect 1C performance. We tried the workstation's integrated graphics, a professional NVIDIA Quadro 4000 2 GB GDDR5 adapter, and an old GeForce card with 16 MB of SDR memory. No significant difference was noticed in the Gilev test. Perhaps the video card does have an effect, but only in real conditions, when managed forms need to be drawn, and so on.

At the moment there are two suspicions as to why the workstation is faster despite noticeably worse specifications:

  1. Processor. The type of processor in the workstation suits 1C better.
  2. Chipset. All other things being equal, our workstation has a newer chipset; perhaps this is the issue.

We plan to purchase the necessary components and continue testing in order to finally establish what 1C performance depends on most. While approval and procurement are underway, we decided to do some tuning, especially since it costs nothing. The following stages were identified:

Stage 1. System setup

First, let's make the following settings in the BIOS and operating system:

  1. In the server BIOS, disable all processor power-saving options.
  2. Select the "Maximum performance" power plan in the operating system (see the command-line sketch after this list).
  3. Tune the processor for maximum performance as well. This can be done with the PowerSchemeEd utility.
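On Windows, step 2 can also be done from the command line with the built-in powercfg tool. A minimal sketch; the GUID below is the standard identifier of the stock "High performance" plan, so verify it with powercfg /list on images with customized power schemes:

```python
import subprocess

# Standard GUID of the built-in "High performance" power plan.
HIGH_PERF = "8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c"

subprocess.run(["powercfg", "/setactive", HIGH_PERF], check=True)
subprocess.run(["powercfg", "/getactivescheme"], check=True)  # verify the switch
```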

Stage 2. Setting up SQL server and 1C:Enterprise server

We make the following changes to the DBMS and 1C:Enterprise server settings.

  1. Setting up the Shared Memory protocol:

    • Shared Memory is available only starting from platform 8.2.17; on earlier releases Named Pipes will be used instead, which is slightly slower. The technology works only when the 1C and MS SQL services run on the same physical or virtual server.
  2. It is recommended to switch the 1C service to debug mode, since, paradoxically, this gives a performance boost. By default, debugging is disabled on the server.
  3. Setting up the SQL server (a configuration sketch follows this list):

    • We only need the database engine itself; the accompanying services may be useful to someone, but they only slow the work down. We stop and disable services such as FullText Search (1C has its own full-text search mechanism), Integration Services, and so on.
    • We set the maximum amount of memory allocated to the server, so that SQL Server calculates this limit and frees memory in advance.
    • We set the maximum number of threads (Maximum worker threads) and raise the server priority (Boost priority).
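A minimal sketch of the SQL Server part of these settings, issued as T-SQL from Python via pyodbc. The connection string, memory ceiling, and thread count are placeholders to size for your host; note also that "priority boost" is a deprecated option that we set here only because the article recommends it:

```python
import pyodbc

# Placeholder connection string; adjust the server name and authentication.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql1;"
    "Trusted_Connection=yes", autocommit=True)
cur = conn.cursor()

def set_option(name: str, value: int) -> None:
    """Change a server-level option and apply it immediately."""
    cur.execute(f"EXEC sp_configure '{name}', {value}; RECONFIGURE;")

set_option("show advanced options", 1)       # expose the advanced options below
set_option("max server memory (MB)", 12288)  # assumed 12 GB ceiling; size to your host
set_option("max worker threads", 2048)       # example value; 0 means the automatic default
set_option("priority boost", 1)              # deprecated option; use with care
```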

Stage 3. Setting up the production database

After the DBMS server and the 1C:Enterprise server have been tuned, we move on to the database settings. If the database has not yet been restored from the .dt file and you know its approximate size, it is better to set the initial size of the primary file to at least the size of the database right away; but this is a matter of taste, since it will grow during the restore anyway. What you must specify is the autogrowth increment: roughly 200 MB for the database and 50 MB for the log, because the default values (growth by 1 MB and by 10%) badly slow the server down when it has to grow the file every third transaction. It is also better to keep the database file and the log file on different physical disks or RAID groups if a RAID array is used, and to limit log growth. Moving tempdb to a fast array is also recommended, since the DBMS accesses it quite often.
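A sketch of those file settings in T-SQL, again via pyodbc; the database name "acc30" and the logical file names "acc30"/"acc30_log" are placeholders (check yours with sp_helpfile), and the tempdb target path is an example:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql1;"
    "Trusted_Connection=yes", autocommit=True)
cur = conn.cursor()

# Autogrowth increments recommended above: 200 MB for data, 50 MB for the log.
cur.execute("ALTER DATABASE [acc30] MODIFY FILE "
            "(NAME = N'acc30', FILEGROWTH = 200MB);")
cur.execute("ALTER DATABASE [acc30] MODIFY FILE "
            "(NAME = N'acc30_log', FILEGROWTH = 50MB, MAXSIZE = 20GB);")  # example cap

# Move tempdb to a fast array (takes effect after the SQL Server service restarts).
cur.execute("ALTER DATABASE tempdb MODIFY FILE "
            "(NAME = tempdev, FILENAME = N'F:\\tempdb\\tempdb.mdf');")
```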

Stage 4. Setting up scheduled tasks

Scheduled tasks are created quite simply with a Maintenance Plan in the Management section, using the graphical tools, so we will not describe the procedure in detail. Let's look instead at which operations should be performed to improve performance.

  • Index defragmentation and statistics updates should be done daily: index fragmentation above 25% sharply reduces server performance.
  • Defragmentation and statistics updates run quickly and do not require disconnecting users, which is why daily runs are practical.
  • Full reindexing is done with the database locked; it is recommended at least once a week. Naturally, a full reindex is immediately followed by index defragmentation and a statistics update. (A scripted sketch of the daily pass follows this list.)
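A minimal sketch of such a daily maintenance pass, using the 25% threshold from above; the database and connection details are placeholders, and a real maintenance plan would add REBUILD for the weekly locked window:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sql1;"
    "DATABASE=acc30;Trusted_Connection=yes", autocommit=True)
cur = conn.cursor()

# Find indexes fragmented beyond the 25% threshold mentioned above.
cur.execute("""
    SELECT OBJECT_SCHEMA_NAME(ips.object_id) AS sch,
           OBJECT_NAME(ips.object_id)        AS tbl,
           i.name                            AS idx
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 25 AND i.name IS NOT NULL
""")

for sch, tbl, idx in cur.fetchall():
    # REORGANIZE runs online and does not require disconnecting users.
    cur.execute(f"ALTER INDEX [{idx}] ON [{sch}].[{tbl}] REORGANIZE;")
    cur.execute(f"UPDATE STATISTICS [{sch}].[{tbl}] [{idx}];")
```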

As a result, by fine-tuning the system, the SQL server, and the working database, we increased performance by 46%. The measurements were made with the 1C:KIP instrumentation tools and with the Gilev test; the latter showed 25.6 units versus the original 17.53.

Brief conclusion

  1. 1C performance does not depend much on RAM frequency. Once a sufficient amount of memory is installed, further memory expansion makes no sense, since it brings no performance gain.
  2. 1C performance does not depend on the video card.
  3. 1C performance does not depend on the disk subsystem, provided the disk read/write queue is not saturated. If SATA drives are installed and their queue is not exceeded, installing an SSD will not improve performance.
  4. Performance depends quite strongly on processor frequency.
  5. With proper tuning of the operating system and the MS SQL server, 1C performance can be increased by 40-50% without any material costs.

ATTENTION! A very important point: all measurements were performed on a test base using the Gilev test and 1C instrumentation tools. The behavior of a real database with real users may differ from these results. For example, in the test database we found no dependence of performance on the video card or on the amount of RAM; these conclusions are questionable, and in real conditions those factors can have a significant impact. When working with configurations that use managed forms, the video card matters: a powerful graphics processor speeds up drawing of the program interface, which visually shows up as snappier 1C operation.

Is your 1C running slowly? Order IT maintenance of your computers and servers from EFSOL specialists with many years of experience, or move your 1C to a powerful and fault-tolerant 1C virtual server.
