AI-Powered Grading
This project uses an AI program to read and grade parts of an assessment. First, it loads a neural network that has been trained to recognize different letters. It then takes images of assessments and splits each one into smaller sections. Next, it goes through each section and predicts which letter is written there; if a section shows the letter 'A', it records an 'A'. Once every section has been processed, it saves the results in a CSV file, so people can review the grade the program assigned to each part of the assessment. In short, the program assists grading by reading images of assessments and identifying the letter written in each part.
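The pipeline above can be sketched in a few lines. Everything here is illustrative: the function names (load_model, segment, recognize_letter) are placeholders rather than the project's actual API, and the "model" is a stub lookup so the example runs without a trained network.

```python
import csv

def load_model():
    """Stand-in for loading the trained letter-recognition network."""
    return {"img_a": "A", "img_b": "B", "img_c": "C"}

def segment(assessment_image):
    """Stand-in for splitting an assessment image into answer sections."""
    return ["img_a", "img_b", "img_c"]

def recognize_letter(model, section):
    """Predict the letter written in one section."""
    return model.get(section, "?")

def grade_assessment(assessment_image, out_path="grades.csv"):
    """Run the full pipeline: load model, split, classify, save to CSV."""
    model = load_model()
    rows = []
    for i, section in enumerate(segment(assessment_image), start=1):
        rows.append({"section": i, "letter": recognize_letter(model, section)})
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["section", "letter"])
        writer.writeheader()
        writer.writerows(rows)
    return rows

print(grade_assessment("scan_001.png"))
```

In a real deployment the stubs would be replaced by an image loader, a segmentation step, and inference against the trained network; the CSV-writing step would stay essentially the same.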
The multi-grade assessment system uses five labels to build a thorough picture of what is being evaluated. The growth grade looks ahead, estimating how much something is likely to improve. The stability grade measures how steady and reliable it is, much as you would judge the trustworthiness of an investment. The quality grade assesses the overall standard: a high quality grade on a report, for example, means it is well written and easy to follow. The valuation grade weighs not just how good something is but how much value it offers; a high valuation grade on a business partnership suggests it could be very beneficial. Finally, the sentiment grade captures the feeling or mood associated with it: a high sentiment grade on customer reviews means most customers are positive and satisfied. Taken together, these labels produce a rounded picture rather than a single number, covering future potential, reliability, overall quality, worth, and how people feel about it, which allows for a richer and deeper understanding of what is being evaluated.
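The five labels and the grade scale can be written down as a plain mapping. The descriptions paraphrase the text above; the structure itself is an illustrative choice, not the project's actual schema.

```python
# Illustrative summary of the five grade labels described above.
GRADE_LABELS = {
    "Growth":    "future improvement potential",
    "Stability": "steadiness and reliability",
    "Quality":   "overall standard or goodness",
    "Valuation": "how much value or benefit it offers",
    "Sentiment": "the feeling or mood associated with it",
}

# The letter scale used throughout: A is best, F is worst (no E).
GRADES = ["A", "B", "C", "D", "F"]
```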
Our tool has five sections, each grading a different aspect of a company: Growth Grade, Stability Grade, Quality Grade, Valuation Grade, and Sentiment Grade. Each section is graded from A to F, showing how strong or weak that aspect of the company is. To use the tool, you enter some information about the company you are checking. The model analyses that information and suggests the best grade for each section. Once you pick a grade, the box for that section turns blue to confirm your choice.
Let's imagine we're checking out a tech company. For the Growth Grade, we would look at things like revenue, market share, and whether it is launching new products. If the model suggests an 'A' grade, meaning strong growth, we would pick that, and the Growth Grade box would turn blue.
We would run similar checks for Stability, Quality, Valuation, and Sentiment: the model suggests grades based on the information entered, and we pick the one we think fits best.
Using the tool makes evaluating companies much quicker and easier. The blue boxes confirm which choices have been made, which makes deciding where to invest less stressful and confusing.
Picture a big grid, like a giant tic-tac-toe board, that organizes the information neatly. Each box in the grid holds one of the aspects being assessed: Growth, Stability, Quality, Valuation, and Sentiment. The grid is divided into three rows and two columns, which makes it easy to see how each aspect is doing. Instead of a messy list, the clear layout lets you quickly follow the evaluation process, like a map that guides you through assessing each aspect step by step.
Each of the six boxes in the grid acts as a mini assessment zone. Inside each box are five choices, like a pick-one question, ranging from A to F (there is no E). These represent the levels of performance or quality for that specific label (Growth, Stability, and so on). An "A" is the top score, indicating exceptional performance or outstanding quality; "B" reflects a very good showing; "C" lands in the average range; "D" and "F" indicate below-average and poor performance, respectively. The grade chosen ultimately depends on the evaluation criteria and how well the subject measures up in that particular category.
The system adds another layer of clarity with colour. Once you choose a grade (A, B, C, D, or F) for a label, that label's box is highlighted blue. This colour coding makes it easy to see at a glance which grades have been chosen, like a report card where the assigned letters are highlighted rather than just printed. The whole assessment becomes clearer: the blue boxes act as a visual key showing how each aspect (Growth, Stability, and so on) was evaluated.
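The selection-and-highlight behaviour described above can be sketched as a small state object: choosing a grade for a label records the choice and marks that label's box blue. The class and method names are illustrative, not the tool's real API.

```python
GRADES = ["A", "B", "C", "D", "F"]

class GradeGrid:
    """Minimal sketch of the grid's selection state."""

    def __init__(self, labels):
        # No grade chosen yet for any label.
        self.selection = {label: None for label in labels}

    def choose(self, label, grade):
        """Record the grade chosen for one label's box."""
        if grade not in GRADES:
            raise ValueError(f"unknown grade: {grade}")
        self.selection[label] = grade

    def box_colour(self, label):
        # A box turns blue once its grade has been chosen.
        return "blue" if self.selection[label] else "white"

grid = GradeGrid(["Growth", "Stability", "Quality", "Valuation", "Sentiment"])
grid.choose("Growth", "A")
print(grid.box_colour("Growth"))   # blue
print(grid.box_colour("Quality"))  # white
```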
The system gets even more interesting here. Imagine a very smart helper that can learn and make guesses from information: that is roughly what the system uses, a tool called a "classification model." This model is a type of artificial intelligence (AI), a computer program that can improve over time. Here is how it works with the assessment system:
The model acts like a super fast grader, looking at each label (Growth, Stability, etc.) one by one.
For each label, it checks all the information available about what's being graded.
Based on this information, the model tries to guess the most fitting grade (A, B, C, D, or F).
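The per-label steps above can be sketched as follows. A real system would use a trained classification model; here a rule-based stand-in maps an assumed 0-100 score per label to a letter grade, and the thresholds are purely illustrative.

```python
GRADES = ["A", "B", "C", "D", "F"]
LABELS = ["Growth", "Stability", "Quality", "Valuation", "Sentiment"]

def suggest_grade(label, info):
    """Guess the most fitting grade for one label.

    `info` maps each label to an assumed 0-100 score; the cut-offs
    below are illustrative, not the model's real decision rule.
    """
    score = info.get(label, 0)
    if score >= 90: return "A"
    if score >= 75: return "B"
    if score >= 60: return "C"
    if score >= 45: return "D"
    return "F"

def suggest_all(info):
    # Like the "super fast grader": handle each label one by one.
    return {label: suggest_grade(label, info) for label in LABELS}

company = {"Growth": 92, "Stability": 70, "Quality": 81,
           "Valuation": 55, "Sentiment": 40}
print(suggest_all(company))
# {'Growth': 'A', 'Stability': 'C', 'Quality': 'B', 'Valuation': 'D', 'Sentiment': 'F'}
```

Swapping the threshold rule for a trained classifier would leave the surrounding loop unchanged: one prediction per label, each using all the available information about the company.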
It's important to note that, at present, the model only appears to consider the colour of the first box (which might be selected accidentally). In a fully working system, the model would use all the information it can find to make the best guess for each grade.
This "super smart helper" makes the assessment process faster and potentially more consistent. It is still under development, however, and human expertise may still be needed to verify the accuracy of the final grades.