Open Access

Research on high precision motion control method of automated production line based on adaptive control

Mar 21, 2025


Introduction

An automated production line is an indispensable part of modern industry: a production line that uses advanced robots and electronic equipment to complete the manufacturing and assembly of products through automated processes [1-3]. Its ability to improve production efficiency, reduce costs and precisely control product quality has made it an important means of industrial development; high intelligence, flexibility and precision are its main features [4-6]. Within the automated production line, motion control plays a vital role.

In an automated production line, motion control provides precise and reliable control of machine or equipment motion to improve production efficiency and product quality [7-8]. Besides high accuracy and reliability, motion control is characterized by flexibility, programmability and multi-axis synchronization. Flexibility and programmability mean that it can be autonomously adjusted and optimized for different processes and production demands, so the production line can adapt to different products and specifications [9-11]. Multi-axis synchronization means that motion control can move multiple axes synchronously, ensuring that the relevant equipment works accurately and in coordination along the predetermined path and at the predetermined speed [12-15]. These advantages make motion control the first choice in the industrial field: it improves production efficiency, product quality and the working environment while reducing production costs [16-18]. In addition, intelligent management can be realized through the collection, monitoring and analysis of production data [19]. It is therefore necessary to optimize automated production line motion control for high precision.

In this paper, based on the AGV kinematic modeling algorithm and combining machine vision with an industrial robot, an industrial robot vision guidance system for automated production lines is designed and developed. Then, based on the principle of AGV motion control combined with the HALCON vision algorithm, a fuzzy adaptive PID control method is proposed to realize adaptive position adjustment of the AGV: the fuzzy controller is designed and the fuzzy adaptive PID control model is established. To ensure the control characteristics and accuracy of the automated production line in motion, the adaptive fuzzy PID method adjusts the parameters of the PID controller adaptively in real time, maintaining the stability and performance of the system and achieving precise control in a dynamically changing environment.

Motion control methods for automated production lines under adaptive control
Overall program design of automated production line

To meet the positioning and flexibility requirements of current assembly lines, this project independently developed an automated production line based on machine vision positioning. The line is equipped with 3 conveyor systems and 5 main working stations, which sequentially carry out workpiece loading, gluing, precision assembly, screw fastening and unloading.

Hardware Composition of Visual Guidance System and Equipment Selection

Industrial Camera

In this paper, a CMOS industrial camera is used to collect images of static objects. The inspection object in the experiment is a self-designed irregular part with a horizontal dimension of 65 mm, a vertical dimension of 30 mm and a tolerance requirement of 0.2 mm; the empirical sub-pixel repeatability of the edge-extraction software is 0.7. The parameters calculated during camera selection are as follows:

Pixel equivalent: Res = 0.02/0.7 ≈ 0.02857 mm/pixel

Short-side field of view: H = 30 × 1.5 = 45 mm

Long-side field of view: W = 45/(3/4) = 60 mm

Short-side resolution: Hlr = 45/0.02857 = 1575.02362 pixel

Long-side resolution: Wlr = 60/0.02857 = 2100.03150 pixel
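The selection arithmetic above can be reproduced in a few lines (a sketch: the 0.02 mm value is taken from the text as the required measurement accuracy, and 0.7 is the empirical sub-pixel repeatability factor):

```python
# Camera-selection calculations from the text (values as stated there).
accuracy = 0.02            # required measurement accuracy, mm
repeatability = 0.7        # empirical edge-extraction repeatability factor
res = accuracy / repeatability          # pixel equivalent, mm/pixel
H = 30 * 1.5                            # short-side field of view, mm
W = H / (3 / 4)                         # long-side field of view (4:3 sensor), mm
Hlr = H / res                           # required short-side resolution, pixels
Wlr = W / res                           # required long-side resolution, pixels
print(f"Res = {res:.5f} mm/px, FOV = {W:.0f} x {H:.0f} mm")
print(f"Required resolution: {Wlr:.0f} x {Hlr:.0f} px")
```

The required resolution (about 2100 × 1575 pixels) then drives the choice of a 5-megapixel camera.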

Optical lens

In this paper, we choose the Basler C125-2522-5M F2.2 f25mm lens, which can be used with Daheng Imaging area-scan cameras. Its main performance parameters are: focal length 25.0 mm, manual aperture, C-mount interface, resolution 5 megapixels, sensor format 1/2.5, aperture range F2.2 to F22.0, working distance 200 mm.

Calibration of machine vision systems

Coordinate system and transformation relationship

Assume that the physical dimensions of a pixel along the x and y axes of the image plane are dx and dy respectively, and that the μ and v axes are perpendicular to each other. The coordinates of a pixel (μ, v) in the image coordinate system are then expressed as: $$\left\{ {\begin{array}{*{20}{l}} {\mu = \frac{x}{{dx}} + {\mu _0}} \\ {v = \frac{y}{{dy}} + {v_0}} \end{array}} \right.$$

For the convenience of subsequent coordinate transformation calculations, Eq. (1) is rewritten in matrix form: $$\left[ {\begin{array}{*{20}{c}} \mu \\ v \\ 1 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\frac{1}{{dx}}}&0&{{\mu _0}} \\ 0&{\frac{1}{{dy}}}&{{v_0}} \\ 0&0&1 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{l}} x \\ y \\ 1 \end{array}} \right]$$

Factors such as uneven placement of the object and lens distortion [20] can cause the μ and v axes to deviate from perpendicularity. Letting θ be the angle between the two axes, the coordinate transformation between the image coordinate system and the pixel coordinate system becomes: $$\left\{ {\begin{array}{*{20}{c}} {\mu = \frac{x}{{dx}} - \frac{{y\cot \theta }}{{dx}} + {\mu _0}} \\ {v = \frac{y}{{dy\sin \theta }} + {v_0}} \end{array}} \right.$$

Rewriting equation (3) into matrix form yields: $$\left[ {\begin{array}{*{20}{c}} \mu \\ v \\ 1 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {\frac{1}{{dx}}}&{ - \frac{1}{{dx}}\cot \theta }&{{\mu _0}} \\ 0&{\frac{1}{{dy\sin \theta }}}&{{v_0}} \\ 0&0&1 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{l}} x \\ y \\ 1 \end{array}} \right]$$

A point in the camera coordinate system is projected to the point (x,y)$$\left( {x,y} \right)$$ in the image plane through equation (5); this transformation relationship follows from the pinhole imaging model: $$\left\{ {\begin{array}{*{20}{l}} {x = \frac{{f{X_c}}}{{{Z_c}}}} \\ {y = \frac{{f{Y_c}}}{{{Z_c}}}} \end{array}} \right.$$

In Eq. (5), f is the focal length. Rewriting Eq. (5) in matrix form: $${Z_c} \cdot \left[ {\begin{array}{*{20}{c}} x \\ y \\ 1 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} f&0&0&0 \\ 0&f&0&0 \\ 0&0&1&0 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{c}} {{X_c}} \\ {{Y_c}} \\ {{Z_c}} \\ 1 \end{array}} \right]$$

Transforming a point from the world coordinate system to the camera coordinate system is a rigid transformation [21], composed of a rotation matrix and a translation vector: $$\left[ {\begin{array}{*{20}{c}} {{X_c}} \\ {{Y_c}} \\ {{Z_c}} \\ 1 \end{array}} \right] = \left[ {\begin{array}{*{20}{c}} {{R_{3 \times 3}}}&{{T_{3 \times 1}}} \\ 0&1 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{c}} {{X_w}} \\ {{Y_w}} \\ {{Z_w}} \\ 1 \end{array}} \right] = {M_2} \cdot \left[ {\begin{array}{*{20}{c}} {{X_w}} \\ {{Y_w}} \\ {{Z_w}} \\ 1 \end{array}} \right]$$

In Eq. (7), R3×3 = R(α, β, γ) is a rotation matrix, and in R3×3, α, β, and γ are the rotation angles around the x, y, and z axes of the camera coordinate system, respectively; T3×1=(tx,ty,tz)T$${T_{3 \times 1}} = {\left( {{t_x},{t_y},{t_z}} \right)^T}$$ is a 3D translation vector; M2 is a 4 × 4 matrix, and α, β, γ, and tx, ty, and tz are the six parameters that constitute the external parameters of the camera.

In Eq. (8), f, sx, sy, μ0 and v0 are the internal parameters of the camera, where sx = f/dx and sy = f/dy. Together with the external parameters, they determine the transformation from the world coordinate system, through the camera coordinate system, to the image coordinate system; both sets of parameters are obtained by calibrating the industrial camera. $$\begin{array}{rcl} {Z_c} \cdot \left[ {\begin{array}{*{20}{c}} \mu \\ v \\ 1 \end{array}} \right] &=& \left[ {\begin{array}{*{20}{c}} {\frac{1}{{dx}}}&{ - \frac{1}{{dx}}\cot \theta }&{{\mu _0}} \\ 0&{\frac{1}{{dy\sin \theta }}}&{{v_0}} \\ 0&0&1 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{c}} f&0&0&0 \\ 0&f&0&0 \\ 0&0&1&0 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{c}} {{R_{3 \times 3}}}&{{T_{3 \times 1}}} \\ 0&1 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{c}} {{X_w}} \\ {{Y_w}} \\ {{Z_w}} \\ 1 \end{array}} \right] \\ &=& \left[ {\begin{array}{*{20}{c}} {\frac{f}{{dx}}}&{ - \frac{f}{{dx}}\cot \theta }&{{\mu _0}}&0 \\ 0&{\frac{f}{{dy\sin \theta }}}&{{v_0}}&0 \\ 0&0&1&0 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{c}} {{R_{3 \times 3}}}&{{T_{3 \times 1}}} \\ 0&1 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{c}} {{X_w}} \\ {{Y_w}} \\ {{Z_w}} \\ 1 \end{array}} \right] \\ &=& \left[ {\begin{array}{*{20}{c}} {{s_x}}&{ - {s_x}\cot \theta }&{{\mu _0}}&0 \\ 0&{\frac{{{s_y}}}{{\sin \theta }}}&{{v_0}}&0 \\ 0&0&1&0 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{c}} {{R_{3 \times 3}}}&{{T_{3 \times 1}}} \\ 0&1 \end{array}} \right] \cdot \left[ {\begin{array}{*{20}{c}} {{X_w}} \\ {{Y_w}} \\ {{Z_w}} \\ 1 \end{array}} \right] = {M_1}{M_2} \cdot \left[ {\begin{array}{*{20}{c}} {{X_w}} \\ {{Y_w}} \\ {{Z_w}} \\ 1 \end{array}} \right] \end{array}$$
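As a numerical sketch of Eq. (8), the following projects a world point to pixel coordinates through the intrinsic matrix M1 and the extrinsic matrix M2; all parameter values are illustrative, not the paper's calibration results:

```python
import numpy as np

# World point -> pixel coordinates via M1 (intrinsics) and M2 (extrinsics).
f, dx, dy = 25.0, 0.00345, 0.00345     # focal length (mm), pixel pitch (mm)
u0, v0 = 1280.0, 960.0                 # principal point (pixels)
theta = np.pi / 2                      # axis skew angle (90 deg = no skew)

# Intrinsic matrix M1 (3x4), as in the derivation above
M1 = np.array([
    [f / dx, -f / dx / np.tan(theta), u0, 0.0],
    [0.0,     f / (dy * np.sin(theta)), v0, 0.0],
    [0.0,     0.0,                      1.0, 0.0],
])

# Extrinsic matrix M2 (4x4): identity rotation, illustrative translation
R = np.eye(3)
T = np.array([0.0, 0.0, 500.0])        # camera 500 mm from the world origin
M2 = np.eye(4)
M2[:3, :3], M2[:3, 3] = R, T

Pw = np.array([10.0, 5.0, 0.0, 1.0])   # homogeneous world point (mm)
p = M1 @ M2 @ Pw                        # = Zc * (mu, v, 1)
u, v = p[0] / p[2], p[1] / p[2]
print(u, v)
```

With identity rotation this reduces to u = μ0 + f·Xc/(Zc·dx) and v = v0 + f·Yc/(Zc·dy), matching Eq. (8) term by term.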

Camera imaging model

In practice, errors in the machining of optical lenses and in camera assembly cause a point in three-dimensional space to be projected to a slightly offset position in the image coordinate system; this offset is the lens distortion.

The tangential distortion of an optical lens is generally small; p1 and p2 are the tangential distortion coefficients. The correction can be expressed as in equation (9): $$\left\{ {\begin{array}{*{20}{l}} {{\sigma _x} = 2{p_1}xy + {p_2}\left( {{r^2} + 2{x^2}} \right)} \\ {{\sigma _y} = 2{p_2}xy + {p_1}\left( {{r^2} + 2{y^2}} \right)} \\ {{r^2} = {{\left( {x - {\mu _0}} \right)}^2} + {{\left( {y - {v_0}} \right)}^2}} \end{array}} \right.$$

k1, k2 and k3 are the radial distortion coefficients; together with p1 and p2, they also belong to the internal camera parameters. The transformation between the ideal coordinates (x, y) and the actual coordinates (x′, y′) is shown below: $$\left\{ {\begin{array}{*{20}{l}} {x = x\prime + \left( {x\prime - {\mu _0}} \right)\left( {{k_1}{r^2} + {k_2}{r^4} + {k_3}{r^6}} \right) + 2{p_1}x\prime y\prime + {p_2}\left( {{r^2} + 2{x{\prime ^2}}} \right)} \\ {y = y\prime + \left( {y\prime - {v_0}} \right)\left( {{k_1}{r^2} + {k_2}{r^4} + {k_3}{r^6}} \right) + 2{p_2}x\prime y\prime + {p_1}\left( {{r^2} + 2{y{\prime ^2}}} \right)} \end{array}} \right.$$
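A minimal sketch of the radial-plus-tangential distortion model above, written in normalized image coordinates in the standard Brown-Conrady form; all coefficient values are illustrative:

```python
# Radial + tangential lens distortion (Brown-Conrady form, normalized coords).
def distort(x, y, k1, k2, k3, p1, p2):
    """Map an ideal (undistorted) point to its distorted position."""
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3   # radial term k1 r^2 + k2 r^4 + k3 r^6
    xd = x * (1 + radial) + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * (1 + radial) + 2 * p2 * x * y + p1 * (r2 + 2 * y * y)
    return xd, yd

# Illustrative coefficients: mild barrel distortion plus a small tangential term
xd, yd = distort(0.1, 0.05, k1=-0.2, k2=0.05, k3=0.0, p1=1e-3, p2=1e-3)
print(xd, yd)
```

Calibration estimates k1, k2, k3, p1 and p2 and then inverts this mapping to undistort the image.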

HALCON-based industrial camera calibration

In this paper, the calibration model is created with the HALCON vision algorithm library [22]: images of calibration plates at different positions are acquired, the industrial camera is calibrated, and the internal and external parameters obtained are used to correct the images. The final parameter to be solved for the industrial robot in the EIH (eye-in-hand) system is the homogeneous transformation matrix of the camera coordinate system with respect to the robot end coordinate system.

In the EIH system, the following system of equations holds for two arbitrary positions i and j during robot movement: $$\left\{ {\begin{array}{*{20}{l}} {_e^w{H_i}*_c^eH = _g^wH*_c^g{H_i}} \\ {_e^w{H_j}*_c^eH = _g^wH*_c^g{H_j}} \end{array}} \right.$$

Right-multiplying both sides of the first equation of the system by $${\left( {_c^g{H_i}} \right)^{ - 1}}$$ and both sides of the second equation by $${\left( {_c^g{H_j}} \right)^{ - 1}}$$ gives: $$\left\{ {\begin{array}{*{20}{l}} {_e^w{H_i}*_c^eH*{{\left( {_c^g{H_i}} \right)}^{ - 1}} = _g^wH} \\ {_e^w{H_j}*_c^eH*{{\left( {_c^g{H_j}} \right)}^{ - 1}} = _g^wH} \end{array}} \right.$$

Since the matrix $$_g^wH$$ is constant, combining the two equations of the system and eliminating it yields: $$_e^w{H_i}*_c^eH*{\left( {_c^g{H_i}} \right)^{ - 1}} = _e^w{H_j}*_c^eH*{\left( {_c^g{H_j}} \right)^{ - 1}}$$

Left-multiplying both sides of the above equation by $${\left( {_e^w{H_j}} \right)^{ - 1}}$$ and right-multiplying both sides by $$_c^g{H_i}$$ gives: $${\left( {_e^w{H_j}} \right)^{ - 1}}*_e^w{H_i}*_c^eH = _c^eH*{\left( {_c^g{H_j}} \right)^{ - 1}}*_c^g{H_i}$$

Letting $$X = _c^eH$$, $$A = {\left( {_e^w{H_j}} \right)^{ - 1}}*_e^w{H_i}$$ and $$B = {\left( {_c^g{H_j}} \right)^{ - 1}}*_c^g{H_i}$$, the above equation simplifies to the form AX = XB.

Industrial applications usually use the nine-point calibration method to determine the transformation between the camera coordinate system and the robot (mechanical end) coordinate system. The camera acquires the image coordinates of nine Mark points in the camera coordinate system, and the robot end-effector is aligned to each of the nine Mark points in turn to obtain their coordinates in the robot coordinate system. From the nine pairs of corresponding coordinates, the coordinate transformation matrix shown in equation (16) can be established, where R is the rotation matrix and T is the translation vector. $$HomMat2D = \left[ {\begin{array}{*{20}{c}} R&T \\ 0&1 \end{array}} \right] = H(T) \cdot H(R)$$
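The nine-point calibration idea can be sketched as a least-squares fit of the 2D transform HomMat2D from camera-frame Mark-point coordinates to robot-frame coordinates (the role played in practice by HALCON's vector_to_hom_mat2d operator). The point data below are synthetic:

```python
import numpy as np

# Recover HomMat2D = [R T; 0 1] from nine corresponding point pairs.
ang, tx, ty = 0.1, 50.0, -20.0                     # "true" pose (illustrative)
R_true = np.array([[np.cos(ang), -np.sin(ang)],
                   [np.sin(ang),  np.cos(ang)]])
cam = np.array([[x, y] for x in (0, 30, 60) for y in (0, 30, 60)], float)  # 9 Marks
rob = cam @ R_true.T + np.array([tx, ty])          # robot-frame coordinates

# Solve rob ~= cam @ A.T + t for the 2x3 affine part of HomMat2D
X = np.hstack([cam, np.ones((9, 1))])              # rows [x, y, 1]
params, *_ = np.linalg.lstsq(X, rob, rcond=None)   # 3x2: [A.T; t]
HomMat2D = np.vstack([np.hstack([params[:2].T, params[2:].T.reshape(2, 1)]),
                      [0, 0, 1]])
print(np.round(HomMat2D, 4))
```

With exact, noise-free pairs the fit recovers R and T exactly; with measured Mark points, the least-squares solution averages out small alignment errors.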

Analysis of AGV kinematic model and motion control principle
AGV kinematic model based on the Mecanum wheel

An AGV fitted with Mecanum wheels can perform omnidirectional movement during operation, and understanding its kinematics makes this movement easier to control. The forward and inverse kinematics of a Mecanum-wheeled AGV are modeled as follows. Establish a coordinate system with the geometric center of the AGV as the origin, the positive X-axis horizontal to the left, the positive Y-axis vertical upward, and counterclockwise rotation ω taken as positive. Let the translational velocity of the AGV be v, with orthogonal components vx and vy, and let the rotational angular velocity be ω: $$\vec v = \overrightarrow {{v_x}} + \overrightarrow {{v_y}}$$

The four wheels of the AGV are labeled clockwise from the upper left as wheels 1, 2, 3 and 4. Taking wheel 1 as an example, the velocity of its axle is v1, which is the resultant of the translational velocity v and the rotational component vω1, i.e.: $$\left\{ {\begin{array}{*{20}{l}} {\overrightarrow {{v_1}} = \vec v + \overrightarrow {{v_{\omega 1}}} } \\ {\overrightarrow {{v_{\omega 1}}} = \vec \omega \times \vec l} \\ {\vec l = \vec a + \vec b} \end{array}} \right.$$

Decomposing v1 orthogonally into v1x and v1y, its relation to the given velocities vx and vy follows from the two preceding equations: $$\left\{ {\begin{array}{*{20}{l}} {\overrightarrow {{v_1}} = \overrightarrow {{v_{1x}}} + \overrightarrow {{v_{1y}}} } \\ {{v_{1x}} = {v_x} + \omega \cdot b} \\ {{v_{1y}} = {v_y} - \omega \cdot a} \end{array}} \right.$$

Consider the roller of wheel 1 at its contact point with the ground. The roller moves with the wheel axle at velocity v1, which can be decomposed into a component vl1 along the roller axis and a component perpendicular to the roller axis. From vector algebra: $$\left\{ {\begin{array}{*{20}{l}} {{v_{l1}} = {{\vec v}_1} \cdot \vec u} \\ {{v_{l1}} = - \frac{{\sqrt 2 }}{2}{v_{1x}} + \frac{{\sqrt 2 }}{2}{v_{1y}}} \end{array}} \right.$$

$$\vec u$$ is the unit vector along the roller axis.

vr1 is the linear velocity of wheel 1, and the angle between vr1 and vl1 is 45°, so from equation (20): $${v_{r1}} = \frac{{{v_{l1}}}}{{\cos 45^\circ }} = {v_{1y}} - {v_{1x}} = {v_y} - {v_x} - \omega (a + b)$$

Similarly, the linear velocities vr2, vr3 and vr4 of wheels 2, 3 and 4 can be obtained, giving the inverse kinematic equations of the AGV: $$\left\{ {\begin{array}{*{20}{l}} {{v_{r1}} = {v_{1y}} - {v_{1x}} = {v_y} - {v_x} - \omega (a + b)} \\ {{v_{r2}} = {v_{2x}} + {v_{2y}} = {v_y} + {v_x} + \omega (a + b)} \\ {{v_{r3}} = {v_{3y}} - {v_{3x}} = {v_y} - {v_x} + \omega (a + b)} \\ {{v_{r4}} = {v_{4x}} + {v_{4y}} = {v_y} + {v_x} - \omega (a + b)} \end{array}} \right.$$

Let the radius of each wheel be R. Substituting vri = ωiR into the above equations and writing them in matrix form: $$\left[ \begin{array}{rcl} {\omega _1} \\ {\omega _2} \\ {\omega _3} \\ {\omega _4} \\ \end{array} \right]R = \left[ {\begin{array}{*{20}{c}} 1&{ - 1}&{ - (a + b)} \\ 1&1&{a + b} \\ 1&{ - 1}&{a + b} \\ 1&1&{ - (a + b)} \end{array}} \right]\left[ \begin{array}{rcl} {v_y} \\ {v_x} \\ \omega \\ \end{array} \right]$$

The forward kinematic equations of the AGV are obtained from the above equation: $$\left\{ {\begin{array}{*{20}{l}} {{v_x} = \frac{R}{4}\left( { - {\omega _1} + {\omega _2} - {\omega _3} + {\omega _4}} \right)} \\ {{v_y} = \frac{R}{4}\left( {{\omega _1} + {\omega _2} + {\omega _3} + {\omega _4}} \right)} \\ {\omega = \frac{R}{{4(a + b)}}\left( { - {\omega _1} + {\omega _2} + {\omega _3} - {\omega _4}} \right)} \end{array}} \right.$$

The AGV is controlled through its forward and inverse kinematic equations: the main control system specifies (vx,vy,ω)$$\left( {{v_x},{v_y},\omega } \right)$$, the rotational speeds of the four wheels (ω1,ω2,ω3,ω4)$$\left( {{\omega _1},{\omega _2},{\omega _3},{\omega _4}} \right)$$ are calculated from the inverse kinematic equations, and the RoboModule DC servo motor drivers control the motors to output the corresponding speeds.
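The inverse and forward kinematics above can be sketched as a pair of functions; the wheel radius R and the half-length and half-width a and b are illustrative values:

```python
# Mecanum AGV kinematics: chassis velocity <-> wheel angular speeds.
R, a, b = 0.05, 0.20, 0.15   # wheel radius, half-length, half-width (m, illustrative)

def inverse(vx, vy, w):
    """Chassis (vx, vy, w) -> wheel angular speeds (w1..w4)."""
    l = a + b
    return ((vy - vx - w * l) / R,
            (vy + vx + w * l) / R,
            (vy - vx + w * l) / R,
            (vy + vx - w * l) / R)

def forward(w1, w2, w3, w4):
    """Wheel angular speeds -> chassis (vx, vy, w)."""
    vx = R / 4 * (-w1 + w2 - w3 + w4)
    vy = R / 4 * (w1 + w2 + w3 + w4)
    w = R / (4 * (a + b)) * (-w1 + w2 + w3 - w4)
    return vx, vy, w

# Round trip: forward(inverse(...)) recovers the commanded chassis velocity
print(forward(*inverse(0.3, 0.5, 0.2)))
```

The round trip confirms that the forward equations are the pseudo-inverse of the wheel-speed matrix above.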

AGV motion control principle

The key to AGV motion control [23] is control of the motor speed. DC motors are generally speed-regulated by PWM (pulse-width modulation), which controls motor speed by changing the duty cycle, i.e. the ratio of the armature-voltage on-time to the total switching period. This paper uses this method to regulate the motor speed. Figure 1 shows the AGV action control flow.

Figure 1.

AGV action control process

When the AGV runs in a straight line, the same PWM value drives the four motors at the same speed, and the AGV travels at a constant velocity. When the AGV adjusts its position, or turns or pans at special intersections, the master control system outputs different PWM adjustment parameters to the motors, adjusting the speed and steering of each motor to complete the corresponding action. When the AGV body drifts away from the navigation line, the vision system determines the deviation between the navigation line and the center of the body. The diagonal line in the figure is the route center line extracted by the vision system. The angle θ between this center line and the Y axis indicates the angular offset between the AGV and the route: the larger θ, the larger the offset. The intercept a of the center line on the X axis indicates the distance deviation between the route center and the midpoint of the field of view: the larger a, the farther the AGV has deviated from the route. The AGV control system must then output PWM adjustment parameters for the corresponding Mecanum wheel motors according to the size of the deviation, changing the motor speeds and using the speed difference between wheels to complete the position adjustment.
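A hedged sketch of the correction step described above: a simple proportional law (the gains and PWM scale are assumptions for illustration, not values from the paper) turns the reported angle offset θ and intercept a into per-side PWM adjustments:

```python
# Line-following correction from the vision deviations theta (deg) and a (px).
BASE_PWM = 600           # nominal duty value for straight-line travel (assumed)
K_THETA, K_A = 4.0, 0.8  # proportional gains (illustrative assumptions)

def pwm_correction(theta_deg, a_px):
    """Return (left, right) PWM values that steer the AGV back to the line."""
    delta = K_THETA * theta_deg + K_A * a_px
    left = BASE_PWM + delta      # speed up one side ...
    right = BASE_PWM - delta     # ... slow the other; the speed difference turns the AGV
    return left, right

print(pwm_correction(2.0, 10.0))   # -> (616.0, 584.0)
```

With zero deviation the two sides receive the same base PWM and the AGV continues straight.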

Adaptive fuzzy PID control system
PID control

Conventional PID control [24] is a linear controller whose control is a typical unit negative feedback control.

The input is the deviation e(t) between the set value r(t) and the output value y(t); the control quantity u(t) is formed as a linear combination of proportional, integral and differential terms, which drives the controlled object so that the output settles at the desired value. The proportional term reacts to the current error and determines the dynamic response speed and error-reducing ability; the integral term eliminates the steady-state error left by the proportional term and improves the steady-state accuracy of the system; the differential term introduces a correction signal in advance, suppressing overshoot and oscillation and improving the stability of the system.

The control law of conventional PID control can be expressed as: $$u(t) = {K_p}\left[ {e(t) + \frac{1}{{{T_i}}}\int_0^t e (t)dt + {T_d}\frac{{de(t)}}{{dt}}} \right]$$

where Kp is the proportional coefficient, Ti the integral time constant and Td the differential time constant.

With the development of microcontroller chips and computers, digital PID implemented in software has gradually replaced analog PID control circuits. Implementing PID control in a program not only completes the control task but is also more versatile. The PID control law above is continuous; since digital PID is a sampled control that can only operate on the deviation at sampling instants, the control law must be discretized.

Before discretization, the following approximations are made: $$u(t) \approx u(k)$$ $$e(t) \approx e(k)$$ $$\int_0^t e (t){\text{d}}t \approx \sum\limits_{j = 0}^k e (j)\Delta t = \sum\limits_{j = 0}^k T e(j)$$ $$\frac{{de(t)}}{{dt}} \approx \frac{{e(k) - e(k - 1)}}{{\Delta t}} = \frac{{e(k) - e(k - 1)}}{T}$$

In Eqs. (28) and (29), T is the sampling period and k is the sample number (k = 0, 1, 2, ⋯). With these approximations, two forms of the PID algorithm can be obtained: positional PID and incremental PID.

The discretized positional PID control law can be expressed as: $$\begin{array}{rcl} u(k) &=& {K_p}\left\{ {e(k) + \frac{T}{{{T_i}}}\sum\limits_{j = 0}^k e (j) + \frac{{{T_d}}}{T}[e(k) - e(k - 1)]} \right\} \\ &=& {K_p}e(k) + {K_i}\sum\limits_{j = 0}^k e (j) + {K_d}[e(k) - e(k - 1)] \\ \end{array}$$

From equation (30), the control quantity u(k) output by the positional PID is related to all past deviation signals. When the integral term accumulates the error to saturation, it continues to integrate, and once the sign of the error reverses, the system needs some time to leave the saturated state. Because every control output depends on the entire past state, the computational load grows, and the system is prone to drastic changes once the control state goes wrong. To reduce the amount of computation and the impact of faulty actions, the law can be rewritten in incremental form.

The control output of the controller at sampling instant k − 1 is: $$\begin{array}{rcl} u(k - 1) &=& {K_p}\left\{ {e(k - 1) + \frac{T}{{{T_i}}}\sum\limits_{j = 0}^{k - 1} e (j) + \frac{{{T_d}}}{T}[e(k - 1) - e(k - 2)]} \right\} \\ &=& {K_p}e(k - 1) + {K_i}\sum\limits_{j = 0}^{k - 1} e (j) + {K_d}[e(k - 1) - e(k - 2)] \\ \end{array}$$

The incremental PID control equation is then: $$\begin{array}{l} \Delta u(k) = u(k) - u(k - 1) \\ = {K_p}\left\{ {[e(k) - e(k - 1)] + \frac{T}{{{T_i}}}e(k) + \frac{{{T_d}}}{T}[e(k) - 2e(k - 1) + e(k - 2)]} \right\} \\ \end{array}$$

In the above equation, Kp is the proportional coefficient, $${K_i} = {K_p}\frac{T}{{{T_i}}}$$ the integral coefficient and $${K_d} = {K_p}\frac{{{T_d}}}{T}$$ the differential coefficient. The control increment Δu(k) depends only on the last three samples and carries no accumulation of past errors, so computation errors have little influence on the control output; suitable weighting gives a good control effect, and a fault in the control no longer has a severe impact on system operation.
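The positional law of Eq. (30) and the incremental law above can be compared directly: driven by the same error sequence, they produce the same output, because the increments telescope back to the positional sum. Gains and the error sequence below are illustrative:

```python
# Positional vs. incremental discrete PID (sampling period folded into Ki, Kd).
class PositionalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.acc = 0.0        # accumulated error (integral term)
        self.prev = 0.0       # e(k-1)

    def step(self, e):
        self.acc += e
        u = self.kp * e + self.ki * self.acc + self.kd * (e - self.prev)
        self.prev = e
        return u

class IncrementalPID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0   # e(k-1), e(k-2)
        self.u = 0.0

    def step(self, e):
        du = (self.kp * (e - self.e1) + self.ki * e
              + self.kd * (e - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        self.u += du              # only the last three errors are needed
        return self.u

errs = [1.0, 0.8, 0.5, 0.2, 0.0]            # illustrative error sequence
pos, inc = PositionalPID(2.0, 0.5, 0.1), IncrementalPID(2.0, 0.5, 0.1)
for e in errs:
    up, ui = pos.step(e), inc.step(e)
print(round(up, 6), round(ui, 6))
```

The incremental form keeps no error history beyond e(k-1) and e(k-2), which is exactly the robustness argument made in the text.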

Fuzzy control

Fuzzy control is a control method based on fuzzy logic, and its main principle is to mimic the human way of thinking and transform the fuzzy and uncertain information into control actions.

The fuzzy controller is mainly composed of fuzzification interface, knowledge base, fuzzy inference and defuzzification interface, and the fuzzy controller composition is shown in Figure 2.

Figure 2.

Fuzzy controller components

There are various defuzzification methods, such as the maximum membership method, the weighted average method and the center of gravity method. The defuzzification method chosen in this paper is the center of gravity method: $$u = \frac{{\int_a^b x \mu C(x)dx}}{{\int_a^b \mu C(x)dx}}$$

In the above equation, u is the crisp value after defuzzification, x is an element of the output domain, μC(x) is the membership function of the output fuzzy set on that domain, and [a, b] is the range of the output domain.
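The center-of-gravity formula above can be sketched numerically; the triangular output membership function and its domain below are illustrative:

```python
# Center-of-gravity defuzzification by numerical integration (midpoint rule).
def mu(x):
    """Triangular membership function on [0, 2], peak at x = 1 (illustrative)."""
    return max(0.0, 1.0 - abs(x - 1.0))

a, b, n = 0.0, 2.0, 100000
dx = (b - a) / n
xs = [a + (i + 0.5) * dx for i in range(n)]       # midpoints of the subintervals
num = sum(x * mu(x) for x in xs) * dx             # integral of x * muC(x)
den = sum(mu(x) for x in xs) * dx                 # integral of muC(x)
u = num / den
print(round(u, 4))   # symmetric triangle -> center of gravity at 1.0
```

For a symmetric membership function the crisp output lands on the axis of symmetry; asymmetric rule outputs shift it accordingly.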

Adaptive fuzzy PID control

In order to ensure the control characteristics and accuracy in the motion of the automated production line, adaptive fuzzy PID control is used on the basis of PID control. The block diagram of the adaptive fuzzy PID control system is shown in Figure 3. Adaptive fuzzy PID control combines the ideas of fuzzy control and adaptive control on top of the conventional PID algorithm; compared with conventional PID, it mainly improves the stability and robustness of the system and is suitable for complex time-varying nonlinear systems. Without changing the control effect of the original PID algorithm, the adaptive fuzzy PID controller computes the control output from the deviation and the rate of change of the deviation, and it can adaptively adjust the parameters of the PID controller in real time according to the actual operation of the system, maintaining stability and performance and achieving accurate control in a dynamically changing environment.

Figure 3.

Block diagram of adaptive fuzzy PID control system

Adaptive fuzzy PID control incrementally adjusts the three PID parameters through the fuzzy control rules established in the fuzzy controller. The deviation e between the set value and the measured value and its rate of change ec are updated in real time while the system runs, and the increments of the three PID parameters are corrected on-line according to the fuzzy control rules relating (e, ec) to the fuzzy increments. The adaptively tuned parameters of the PID controller are: $${K_p} = {K_{p0}} + {\lambda _p} \cdot \Delta {K_p}$$ $${K_i} = {K_{i0}} + {\lambda _i} \cdot \Delta {K_i}$$ $${K_d} = {K_{d0}} + {\lambda _d} \cdot \Delta {K_d}$$

Here Kp0, Ki0 and Kd0 are the pre-tuned initial parameters of the PID controller; ΔKp, ΔKi and ΔKd are the defuzzified outputs of the fuzzy controller; and λp, λi and λd are their respective scaling factors. As the deviation e and its rate of change ec vary, the defuzzified values of ΔKp, ΔKi and ΔKd vary accordingly, so Kp, Ki and Kd adapt to the real-time operating state of the control system, accommodating dynamic changes and external disturbances. This improves the adaptability of the control system and gives a better control effect throughout the control process.
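The gain-update equations above can be sketched as follows; the two-line "fuzzy" stage is a toy stand-in for the real rule base and defuzzification, and all constants are illustrative assumptions:

```python
# On-line PID gain adaptation: K = K0 + lambda * deltaK (toy fuzzy stage).
KP0, KI0, KD0 = 1.0, 0.5, 0.1            # pre-tuned initial parameters (assumed)
LP, LI, LD = 0.2, 0.05, 0.02             # scaling factors lambda_p/i/d (assumed)

def fuzzy_increments(e, ec):
    """Toy inference: large |e| raises the P/I increments, large |ec| the D one."""
    dkp = min(1.0, abs(e))               # stands in for the defuzzified deltaKp
    dki = min(1.0, abs(e)) * 0.5
    dkd = min(1.0, abs(ec))
    return dkp, dki, dkd

def adapt(e, ec):
    """Return the adapted (Kp, Ki, Kd) for the current (e, ec)."""
    dkp, dki, dkd = fuzzy_increments(e, ec)
    return KP0 + LP * dkp, KI0 + LI * dki, KD0 + LD * dkd

print(tuple(round(v, 4) for v in adapt(0.5, 0.2)))   # -> (1.1, 0.5125, 0.104)
```

In the real controller the increments come from the rule base and center-of-gravity defuzzification; the scaling-and-add structure is the part fixed by the equations above.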

Performance analysis of automated production line system based on adaptive control
Fuzzy Adaptive PID Simulation

In the editing window of the fuzzy control toolbox, input and output variables can be edited, added or deleted. After the input and output variables are determined, the range of each variable, the shape of its membership function and the division of its fuzzy subsets can be designed. After editing, the membership function plot of each input and output parameter can be exported to check for errors. The membership functions of ΔKp, ΔKi and ΔKd are shown in Fig. 4.

Figure 4.

Membership functions of ΔKp, ΔKi and ΔKd

In Matlab, open the Simulink module and create a blank model, then assemble components to build the simulation models for PID and fuzzy adaptive PID. The fuzzy adaptive PID model realizes fuzzy inference through the fuzzy logic controller, which must be associated with the Fis file in the Matlab workspace so that inference can be performed on the input and output variables. The Simulink simulation models for fuzzy adaptive PID and PID are shown in Fig. 5: in the upper part of the figure, the fuzzy logic controller together with the PID model forms the fuzzy adaptive PID simulation model, and the lower part is the PID simulation model. The outputs of both are connected to the same oscilloscope to facilitate comparison of their control performance. In the experiment, the Kp, Ki and Kd of the fuzzy adaptive PID controller and of the PID controller are set to the same values, chosen at random, and the changes in their outputs are observed.

Figure 5.

Simulink simulation model
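As a rough stand-in for the Simulink comparison, the sketch below runs a discrete PID loop on a simple plant under a unit-step setpoint. The second-order plant model (y'' + 2y' + y = u), the time step, and the simulation length are illustrative assumptions, not the system simulated in Fig. 5.

```python
# Minimal discrete-time PID step-response simulation on a hypothetical
# second-order plant y'' + 2y' + y = u (not the paper's plant model).
# The derivative term acts on the measurement to avoid setpoint kick.

def simulate_pid(Kp, Ki, Kd, steps=3000, dt=0.005):
    y, v, integ = 0.0, 0.0, 0.0      # output, its derivative, error integral
    hist = []
    for _ in range(steps):
        e = 1.0 - y                  # unit-step setpoint
        integ += e * dt
        u = Kp * e + Ki * integ + Kd * (-v)   # derivative on measurement
        v += dt * (-2.0 * v - y + u)          # plant: y'' = -2y' - y + u
        y += dt * v
        hist.append(y)
    return hist

out = simulate_pid(Kp=3.0, Ki=3.0, Kd=3.0)    # settles near 1.0
```

Running the same loop twice, once with fixed gains and once with gains refreshed each cycle by a fuzzy correction, reproduces the structure of the comparison in Fig. 5, with the shared setpoint playing the role of the common oscilloscope input.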

The resulting output curves are shown in Fig. 6, where (a) to (d) show the output curves for kp=3, ki=3, kd=3; kp=3, ki=2, kd=2; kp=0.5, ki=0.5, kd=0.5; and kp=1, ki=0.5, kd=0.5, respectively. For kp=3, ki=3, kd=3 (Fig. 6(a)), the peak of the fuzzy adaptive PID curve (1.18) is smaller than that of the PID curve (1.27), and the fuzzy adaptive PID curve rises more steeply before reaching the set value of 1. Therefore, from Fig. 6(a), the fuzzy adaptive PID controller suppresses overshoot better than the PID controller and responds slightly faster, reaching steady state in about 6.5 s, 3.5 s sooner than the PID controller. In Fig. 6(d), with kp=1, ki=0.5, kd=0.5, the fuzzy adaptive PID and PID outputs reach their peaks at about 4.5 s and 5 s respectively, and as time increases both outputs remain essentially at 1. Thus, when the PID parameters are tuned to suitable values, the gap between the PID controller's performance and that of the fuzzy adaptive PID controller is small. Overall, the fuzzy adaptive PID controller outperforms the PID controller. Considering changes in the processed raw materials, changes in the variables that determine the universe of discourse, and the various factors on the production site, this control system adopts the principle of localization and provides three control modes (open-loop, PID, and fuzzy adaptive PID) for the engineers of the oil extraction process to choose from, in order to improve enterprise efficiency.

Figure 6.

The result of the change of the output curve

Automated production line motion control system performance testing

The surface of the target sample chosen in this design is flat, so the positional accuracy of the robot's actuating end in the X and Y directions is an important factor affecting its motion accuracy. To test the motion accuracy of the robot at station 4, the experimental instrument used is a dial indicator with a measuring accuracy of 0.01 mm. Under the world coordinate system, motion control commands are sent from the host computer to drive the robot's actuating end repeatedly along the X and Y directions against the dial indicator; the experimental data are then recorded and processed to calculate the robot's motion accuracy.

The experimental procedure is as follows. First, the lifting cylinder raises the substrate and the target workpiece to the working position. The host computer software then controls the robot to move to the photographing position along the X and Y directions with travels of -120 mm, -60 mm, 60 mm, and 120 mm respectively, locates the center of the screw hole at the top of the triangular prototype 50 times, records the image coordinates, and converts them to mechanical coordinates. The head of the dial indicator is then fixed above the center of the screw hole at the top of the triangular sample, and, according to the mechanical coordinates obtained from vision, the electric screwdriver at the robot end is driven in the same way to press lightly against the indicator head; the indicator is struck 50 times and the data are recorded. The purpose of this experiment is to determine the motion accuracy of the four-station robot under visual localization.

Motion Accuracy Testing for Automated Production Lines

The robot motion accuracy test results are shown in Fig. 7, with (a) and (b) showing the motion deviation in the X and Y directions, respectively. The larger of the motion deviations measured over the different travels in each direction is taken as the motion deviation in that direction. From Fig. 7(a), the deviation amplitude is largest when the X-axis travel is 120 mm in the negative direction, ranging from 0.002 mm to 0.035 mm, which gives a motion deviation of ±0.016 mm in the X direction. From Fig. 7(b), the deviation amplitude is largest when the Y-axis travel is 60 mm in the positive direction, ranging from -0.029 mm to 0.004 mm, which gives a motion deviation of ±0.017 mm in the Y direction.

Figure 7.

Results of the robot motion accuracy test
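The deviation figures above are half-range values: half the span between the largest and smallest recorded readings in a direction. A minimal sketch of that calculation follows, with hypothetical readings standing in for the 50 recorded dial-indicator values.

```python
# Half-range motion deviation, as implied by the test above:
# deviation = (max reading - min reading) / 2, in mm.

def half_range_deviation(readings):
    """Return the +/- deviation as half the span of the readings."""
    return (max(readings) - min(readings)) / 2.0

# Hypothetical sample of dial-indicator readings (mm), not the measured data.
x_readings = [0.002, 0.014, 0.027, 0.035, 0.020]
dev = half_range_deviation(x_readings)   # -> 0.0165 mm, i.e. about +/-0.017
```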

Trajectory fitting accuracy test

The vision-guided trajectory planning algorithm for the Cartesian coordinate robot proposed in this design can handle arbitrary curves: it obtains the discrete points of the target trajectory contour from the vision image, fits the motion trajectory with the arc-length-parameterized Akima method, and guides the robot along any given trajectory using the improved five-segment S-type velocity planning, which effectively improves the quality of the robot's motion trajectory.

The contour discrete points of a given trajectory acquired by vision are used as experimental data points, and an arc-length-parameterized Akima trajectory fitting simulation is carried out in the Matlab 2020 environment. The 216 visually acquired contour discrete points are screened by the double constraints of chord error and chord tangent error, and 50 data points are finally selected, through adjustment of the error constraints, to fit the Akima curve. The Akima curve simulation results are shown in Fig. 8: (a) is the fitted Akima curve, while (b) and (c) are local zoomed-in views of regions 1 and 2 of the fitted curve, respectively. The simulation data show that the obtained Akima curve depicts the shape and convexity of the visual data points well; the maximum error between the fitted points and the theoretical points at the same X coordinate is 0.114 mm, and the average error is 0.0287 mm.

Figure 8.

The Akima curve simulation results
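The arc-length-parameterized Akima fit can be sketched with SciPy's Akima1DInterpolator standing in for the paper's Matlab implementation. The chord-length parameter below approximates arc length, and the sample contour points are illustrative, not the 50 screened data points.

```python
# Sketch of arc-length-parameterized Akima fitting of planar contour points,
# using the cumulative chord length as an approximation of arc length.
import numpy as np
from scipy.interpolate import Akima1DInterpolator

def akima_fit(points, n_samples=200):
    """Fit x(s) and y(s) with Akima splines over a chord-length parameter s,
    then sample the fitted curve at n_samples parameter values."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # chord lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative parameter
    fx = Akima1DInterpolator(s, pts[:, 0])
    fy = Akima1DInterpolator(s, pts[:, 1])
    ss = np.linspace(0.0, s[-1], n_samples)
    return np.column_stack([fx(ss), fy(ss)])

# Hypothetical contour points standing in for the screened vision data.
data = [(t, np.sin(t)) for t in np.linspace(0.0, 3.0, 12)]
curve = akima_fit(data)
```

Fitting x and y separately against the same monotone parameter is what lets the method follow arbitrary curves, including contours that are not functions of X.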

After simulation and analysis, the continuous-trajectory motion of the Cartesian coordinate robot is applied to the gluing and sealing station of an assembly line. To obtain higher gluing quality, the motion speed of the glue gun must be limited, i.e., path length and motion speed constraints are introduced into the velocity planning. The Akima curve interpolation path length obtained by the modified chord length parameter method is 1250 units, and the end-effector feed speed is set to 1350 units/s. Four sets of experimental parameters, 3200, 1400, 5500, and 5500 (unit/s2), are used for the experiments. The experimental results of the four S-curves are shown in Table 1. The data show that, after the path length and speed constraints are introduced, the four-segment and six-segment S-type velocity planning curves cannot reach the optimal speed within the constraints, while the five-segment planning reaches the optimal speed faster than the seven-segment planning, maintains the optimal feed rate for a longer time, and completes the motion in a shorter total time, making it the more ideal velocity planning curve.

Table 1. Experimental results of the four S-curve types

Curve type   Maximum speed (unit/s)   Maximum acceleration (unit/s2)   Motion time (s)
7-segment    1350                     3200                             1.555
6-segment    1075.465                 1400                             2.443
5-segment    1350                     4605.137                         1.501
4-segment    1417.527                 2618.452                         1.862
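A minimal sketch of the timing of a symmetric five-segment S-curve (jerk-limited acceleration with no constant-acceleration phase) for a rest-to-rest move follows. This is not the paper's improved planning algorithm; the jerk value and the short test numbers are illustrative assumptions.

```python
# Timing of a symmetric five-segment S-curve: jerk+, jerk-, cruise,
# jerk-, jerk+. L is the path length, v_max the speed limit, j the jerk.
import math

def five_segment_time(L, v_max, j):
    """Return (total_time, peak_speed) for a rest-to-rest move of length L."""
    t1 = math.sqrt(v_max / j)        # duration of each jerk phase at v_max
    if 2.0 * v_max * t1 <= L:        # accel + decel ramps fit within L
        cruise = (L - 2.0 * v_max * t1) / v_max
        return 4.0 * t1 + cruise, v_max
    # Path too short to reach v_max: lower the peak speed so the two ramps
    # exactly cover L, i.e. L = 2 * v_peak * sqrt(v_peak / j).
    v_peak = (0.5 * L) ** (2.0 / 3.0) * j ** (1.0 / 3.0)
    return 4.0 * math.sqrt(v_peak / j), v_peak

t, v = five_segment_time(L=100.0, v_max=10.0, j=10.0)   # -> (12.0, 10.0)
```

The second branch mirrors the behavior reported in Table 1 for the four-segment and six-segment profiles: when the constraints cannot be satisfied at the full feed rate, the planner must settle for a lower peak speed, which lengthens the motion time.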
Conclusion

In this paper, to realize high-precision motion control of an automated production line under adaptive control, AGV kinematics is combined with an adaptive fuzzy PID control method, and an industrial robot vision guidance system for automated production lines is proposed.

Comparing the fuzzy adaptive PID controller with the conventional PID controller shows that the fuzzy adaptive PID controller achieves better control performance.

The performance test results of the motion control system for the automated production line show that the motion deviations in the X and Y directions are ±0.016 mm and ±0.017 mm respectively, which are relatively small errors. Simulation data show that the Akima curve obtained in this paper depicts the shape and convexity of the visual data points well; the maximum error between the fitted points and the theoretical points at the same X coordinate is 0.114 mm, and the average error is 0.0287 mm. When the path length and motion speed constraints are introduced, the trajectory fitting accuracy test shows that the five-segment S-type velocity planning maintains the optimal feed rate for a longer time with a shorter total motion time, making it the more ideal velocity planning curve.
