  • Overview
  • Managing the logged replication for a data file
  • Consequences of implementing the logged replication
  • Required or recommended conditions
  • Directory for creating the files required by the logged replication
  • Generating the analysis
WINDEV: Windows, Linux, Universal Windows 10 App, Java, Reports and Queries, User code (UMC)
WEBDEV: Windows, Linux, PHP, WEBDEV - Browser code
WINDEV Mobile: Android, Android Widget, iPhone/iPad, Apple Watch, Universal Windows 10 App, Windows Mobile
Others: Stored procedures
Implementing the logged replication: Modifying the analysis
HFSQL: Available only with this kind of connection.
Overview
The logged replication can be implemented in the data model editor:
  • when creating a data file: the file creation wizard asks whether the logged replication must be managed for this file;
  • on an existing data file (whether or not the application has already been distributed).
Note: From version 19, HFSQL is the new name of HyperFileSQL.
Managing the logged replication for a data file
To implement the logged replication on a file:
  1. Open your analysis in the data model editor.
  2. For each file taking part in the replication, check "Logged replication" ("Various" tab of the file description). Messages are displayed if your files do not meet the conditions for using the replication (see the next paragraph).
    Caution: the log process is automatically enabled when the logged replication is implemented. The log process records all the operations performed on the files so that they can be applied to the different databases. See The log process for more details.
  3. Generate your analysis.
Consequences of implementing the logged replication

Required or recommended conditions

When implementing the logged replication on a data file, WINDEV and WEBDEV:
  • Propose using an 8-byte automatic identifier for the files in logged replication.
    To simplify the logged replication, we recommend using an automatic identifier in your data files. This automatic identifier is entirely managed by WINDEV and WEBDEV. To avoid duplicate errors during the replication, ranges of identifier numbers are allocated to each site taking part in the replication.
    Using an automatic identifier is highly recommended but not mandatory: you can still manage a custom identifier combining, for example, an automatic identifier and the initials of the site or person creating the record (see the sketch after this list).
  • Enable the log process for the selected data file.
    The log process records all the operations performed on the files so that they can be applied to the different databases. The "Log of write operations + history of accesses" log process is automatically enabled when "Logged replication" is checked.
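If you opt for a custom identifier, a minimal WLanguage sketch could look like the following. The Customer file, its Name, AutoID and CustomerKey items and the SiteInitials variable are hypothetical names used for illustration only; adapt them to your own analysis.

    // Minimal sketch (hypothetical items): build a custom identifier
    // combining the site initials and the 8-byte automatic identifier.
    SiteInitials is string = "NY"   // identifies the site or person creating the record
    HReset(Customer)
    Customer.Name = "Doe"
    HAdd(Customer)                  // the automatic identifier (AutoID) is assigned here
    Customer.CustomerKey = SiteInitials + "-" + NumToString(Customer.AutoID)
    HModify(Customer)

Because the site initials are part of the key, two sites creating records at the same time cannot produce the same custom identifier.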

Directory for creating the files required by the logged replication

The logged replication is closely linked to the log process. The log process is automatically enabled when the logged replication is implemented in an application.
For all the files used (data files, log files or replication files), the paths of the corresponding physical files can be defined in the data model editor or by programming.
By default (example for the SalesMgt application), the data files are created in the directory of the executable, and the replication and log files are created in sub-directories of this directory.
The following list presents the default value for these different directories, where to configure them in the data model editor and how to change them by programming.
Directory of the data files
  • Default value: directory of the executable.
  • Where to configure it in the data model editor:
    • for all the data files of the application: "Details" tab of the analysis description;
    • for each file: "Info" tab of the file description.
  • How to change the default value by programming: use HSubstDir to modify the default directory for the data files of the application (see the sketch below). When the files for the log process and for the replication are created, the created directories are relative to this new directory. Note: to modify the directory of a single data file, use HChangeDir.
Directory of the files for replication
  • Default value: RPL sub-directory of the default directory of the data files.
  • Where to configure it in the data model editor: "Log\Replica" tab of the analysis description.
  • How to change the default value by programming: specify the requested directory in HCreateMasterReplica and HCreateSubscriberReplica.
Directory of the files for the log process
  • Default value: JNL sub-directory of the default directory of the data files.
  • Where to configure it in the data model editor:
    • for all the data files of the application: "Log\Replica" tab of the analysis description;
    • for each file: "Various" tab of the file description.
  • How to change the default value by programming: use HChangeLogDir. This function is used to:
    • change the directory of the log file (JNL file);
    • change the directory of the JNL file and the directory of the files for the log process (JournalIdentification and JournalOpération files).
Caution: To manage the logged replication in an application, the table of log operations (JournalOpération.fic file) must be the same for all the files in replication within the same analysis.
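By programming, these directories could be changed with a sketch like the one below. The paths and the Customer file name are purely illustrative, and the parameter order shown for HSubstDir is an assumption; check the documentation of each function before relying on it.

    // Sketch (illustrative paths, assumed parameter order): substitute the
    // default data directory declared in the analysis with another directory.
    HSubstDir("C:\SalesMgt\Exe", "D:\SalesMgt\Data")
    // Change the directory of a single data file (hypothetical Customer file).
    HChangeDir(Customer, "D:\SalesMgt\Data\Archive")
    // Change the directory of the JNL file and of the log process files
    // (JournalIdentification and JournalOpération) for this data file.
    HChangeLogDir(Customer, "D:\SalesMgt\JNL", "D:\SalesMgt\JNL")
    // For the replication files, the requested directory is simply passed to
    // HCreateMasterReplica and HCreateSubscriberReplica when the replicas are created.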
Generating the analysis
To take the implementation of the logged replication into account, the analysis must be generated.
If the application already handles data files, generating the analysis triggers an automatic modification of the data files in order to:
  • take the logged replication into account;
  • take the log process into account (if necessary);
  • implement the automatic identifiers.
This modification of the data files must also be performed when the application is installed (if the application was already deployed).
Caution: If logs already existed, they are automatically cleared during the automatic modification. Before starting the automatic modification, the existing logs can be saved with WDLog.
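On an already deployed site, the automatic modification is usually handled by the setup program or by the WDModFic tool. As an assumption-laden alternative, the sketch below uses HModifyStructure in WLanguage (assuming that function is available in your version) on a hypothetical Customer file:

    // Sketch: trigger the automatic data modification for a data file so that
    // the logged replication, the log process and the automatic identifier
    // are taken into account (hypothetical Customer file).
    IF HModifyStructure(Customer) = False THEN
        Error("Automatic data modification failed: " + HErrorInfo())
    END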
Minimum required version
  • Version 12