―An example: scheduling courses for students―
Comparing the straightforward processing method with an optimized one:
Take scheduling courses for students. Students and courses have a many-to-many relationship, so under normal design an association table maintains the relationship between the two.
Now add a constraint to verify against. For example: courses that Zhang San took last semester must not be scheduled for him again.
Therefore, a constraint table (i.e., a historical grade table) is needed.
That is: the students' course-selection schedule uses the students' grade table as a constraint.
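The many-to-many relationship above can be sketched as a minimal Java data model; the class and field names here are illustrative assumptions, not taken from the original code:

```java
// Hypothetical entities: a student, a course, and one row of the
// association table that links them (the many-to-many relationship).
class Student {
    String code;
    Student(String code) { this.code = code; }
}

class Course {
    String code;
    Course(String code) { this.code = code; }
}

// One row of the association table: "student S selected course C".
class StudentRecord {
    String studentCode;
    String courseCode;
    StudentRecord(String studentCode, String courseCode) {
        this.studentCode = studentCode;
        this.courseCode = courseCode;
    }
}
```

The grade table that acts as the constraint would be a fourth table of the same shape, keyed by student and course.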
―Scheme 1: Normal processing method―
When a student selects a course again, you need to check the course-selection table to see whether the record already exists.
That is, the verification looks like this:

// Query whether a record with student code A and course code B exists.
// listStudentRecord holds every course-selection record.
List<StudentRecordEntity> listStudentRecord = service.findAll();
// Search the list for an existing record.
StudentRecordEntity enSr = listStudentRecord.find(s => s.StudentCode == A && s.CourseCode == B);
if (enSr == null) {
    // The student has not selected this course ...
} else {
    // The student has already selected this course ...
}

The code above is very concise and very easy to understand.
Now suppose there are 5,000 students and 100 courses. In the worst case the course-selection table holds 5,000 × 100 = 500,000 records, i.e., a data set on the order of hundreds of thousands of rows.
Querying that set for the single record where student = A and course = B is inefficient: the find method is essentially a where-style filter, i.e., it traverses the whole data set.
So with the code above, execution efficiency drops significantly as the data volume grows.
(ps: a continuously growing data set does not fit this example perfectly, but it conveys the idea.)
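The C#-flavored snippet above can be rendered in Java as well; a minimal sketch (entity and method names are illustrative, not from the original code):

```java
import java.util.List;

class StudentRecordEntity {
    String studentCode;
    String courseCode;
    StudentRecordEntity(String studentCode, String courseCode) {
        this.studentCode = studentCode;
        this.courseCode = courseCode;
    }
}

class Scheme1 {
    // Linear scan over all records: O(n) per verification, where n can be
    // as large as students × courses (the find/where approach above).
    static boolean hasSelected(List<StudentRecordEntity> records,
                               String studentCode, String courseCode) {
        for (StudentRecordEntity r : records) {
            if (r.studentCode.equals(studentCode)
                    && r.courseCode.equals(courseCode)) {
                return true;
            }
        }
        return false;
    }
}
```

Every call walks the list from the beginning, which is exactly why this scheme slows down as the record count grows.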
―Scheme 2: Trade memory for efficiency―
This approach consumes memory and moves the verification work forward: the data structures are initialized when the system is deployed (or when the page loads), and afterwards callers only invoke the provided public method for verification.
// Maps a student code to its array index.
private Dictionary<string, int> _DicStudentCodeToArrayIndex;
// Maps a course code to its array index.
private Dictionary<string, int> _DicCourseCodeToArrayIndex;
// All students.
List<StudentEntity> listStudent = service.findAllStudent();
// All courses.
List<CourseEntity> listCourse = service.findAllCourse();
// All course-selection records.
List<StudentRecordEntity> listStudentRecord = service.findAll();
// Student × course matrix; 1 means the student has selected the course.
private int[,] _ConnStudentRecord = new int[listStudent.Count, listCourse.Count];

// Build the code-to-index dictionaries for fast lookup.
private void GenerateDic() {
    for (int i = 0; i < listStudent.Count; i++) {
        _DicStudentCodeToArrayIndex.Add(listStudent[i].Code, i);
    }
    for (int i = 0; i < listCourse.Count; i++) {
        _DicCourseCodeToArrayIndex.Add(listCourse[i].Code, i);
    }
}

// Fill the two-dimensional array; 1 marks a selected course.
private void GenerateArray() {
    foreach (StudentRecordEntity sre in listStudentRecord) {
        int x = _DicStudentCodeToArrayIndex[sre.StudentCode];
        int y = _DicCourseCodeToArrayIndex[sre.CourseCode];
        _ConnStudentRecord[x, y] = 1;
    }
}

// Public method: check whether a course-selection record exists for the
// given student code and course code.
// Returns 1 if it exists, 0 if it does not.
public int VerifyRecordByStudentCodeAndCourseCode(string pStudentCode, string pCourseCode) {
    int x = _DicStudentCodeToArrayIndex[pStudentCode];
    int y = _DicCourseCodeToArrayIndex[pCourseCode];
    return _ConnStudentRecord[x, y];
}

―Performance Analysis―
At first glance, the second scheme:
1. Has many more methods.
2. Uses many more variables.
First of all, the purpose of this optimization is to eliminate the lag students experience when selecting courses, caused by the large amount of data that must be verified.
The above two schemes are analyzed separately:
Assume there are N students and M courses.
The first scheme:
Each verification traverses the full course-selection list, which can hold up to N × M records, so a single lookup costs O(NM) in the worst case.
The second scheme:
1. There is more code, but the caller only sees a single VerifyRecordByStudentCodeAndCourseCode method.
2. There are more variables, because this scheme trades memory for efficiency.
The method executes in two steps: 1. use the code to look up the index in the Dictionary; 2. use the index to read the array.
The first step is a hash-table lookup, which takes O(1) time on average. The second step is also O(1), because the array is contiguous in memory, so the index translates directly to an address.
Therefore, verification with the second scheme takes O(1) time on average per lookup, regardless of N and M.
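In Java, the second scheme maps naturally onto HashMap for the code-to-index dictionaries and a two-dimensional array for the selection matrix; a minimal sketch (class and method names are illustrative, not from the original code):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Scheme2 {
    // Code-to-index dictionaries: average O(1) hash lookups.
    private final Map<String, Integer> studentIndex = new HashMap<>();
    private final Map<String, Integer> courseIndex = new HashMap<>();
    // Student × course matrix; 1 means the course was selected.
    private final int[][] selected;

    Scheme2(List<String> studentCodes, List<String> courseCodes) {
        for (int i = 0; i < studentCodes.size(); i++) {
            studentIndex.put(studentCodes.get(i), i);
        }
        for (int i = 0; i < courseCodes.size(); i++) {
            courseIndex.put(courseCodes.get(i), i);
        }
        selected = new int[studentCodes.size()][courseCodes.size()];
    }

    // Mark one course-selection record in the matrix.
    void addRecord(String studentCode, String courseCode) {
        selected[studentIndex.get(studentCode)][courseIndex.get(courseCode)] = 1;
    }

    // Average O(1): two hash lookups plus one direct array access.
    int verify(String studentCode, String courseCode) {
        return selected[studentIndex.get(studentCode)][courseIndex.get(courseCode)];
    }
}
```

The memory cost is the N × M matrix plus the two dictionaries, which is the trade this scheme makes for constant-time verification.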
―Summary―
The analysis above shows that spending memory can improve a program's execution efficiency. This is only an example; how much an optimization helps depends on the data structure chosen.
That is the entire content of this article on using data structures for performance optimization. I hope it is helpful; if anything is lacking, please leave a message to point it out.