Visit nullfake.com to try it out, or check out the nullfake GitHub repo.
We now officially live in an era where AI-powered email clients exist to interpret the AI-generated emails coming through your inbox. AI is talking to AI, and we are all slowly getting comfortable in the backseat (for now) pulling the strings. It's too easy to take comfort in doing less.
Inevitably this will slide deeper and deeper into more dependence, more comfort. Being able to scale a team with little to no additional cost is too appealing and lucrative to pass up.
So when we are looking to spend our hard-earned money on an online purchase, why should this be any different? We need a quick way to assess whether a product lives up to its promise. Why wouldn't 4,000+ reviews of a product on Amazon with a 4.5/5 star rating live up to expectations?
Well, sometimes it doesn't live up to expectations. And sometimes it's hard to tell.
The Rise and Fall of Fakespot
Fakespot emerged as a beacon for online shoppers aiming to cut through the noise of potentially deceptive reviews. By analyzing review patterns and assigning grades to products, it provided users with a clearer picture of product authenticity. Its integration into browsers and mobile platforms made it a go-to tool for many.
However, Fakespot is shutting down as of July 1, 2025. Mozilla, which acquired Fakespot in 2023, cited a strategic shift in focus as the reason for the discontinuation. This move leaves a void for consumers seeking tools to verify the authenticity of online reviews.
Amazon Restricting Access to Review Data
Recently, Amazon has implemented measures that restrict access to comprehensive review data. Users have reported being prompted to sign in to view, filter, or sort reviews beyond the initial few displayed on product pages. This change hampers the ability of third-party tools and consumers to analyze and verify the authenticity of reviews.
Amazon’s rationale behind these restrictions is to protect the integrity of its rating system. By limiting access, the company aims to ensure that reviews are authentic and come from real buyers, thereby enhancing customer trust. Why blocking access to even parse and read reviews via public APIs (or even scraping) would protect the integrity of such reviews is completely lost on me, but maybe I’m missing something.
Enter Null Fake: An Open Source Solution
Why try to replace Fakespot? Well, it's clear to me that a void is already developing among those looking for alternatives. But because this kind of approach, ensuring the validity and authenticity of something humans instinctively want to trust, is often met with resistance, legal threats, and technical pushback, a fully open-source model is the only viable path forward.
In response to the challenges posed by fake reviews and restricted access to review data, we developed Null Fake – a free, open-source web application designed to analyze Amazon product reviews for authenticity. Built with Laravel, Null Fake leverages OpenAI to evaluate individual reviews, assigning each a score from 0 (genuine) to 100 (likely fake). The platform then aggregates these scores to provide an adjusted product rating, offering a clearer picture of a product’s true quality.
Why Open Source?
Transparency and community collaboration are at the heart of the idea behind this. Null Fake is by no means a perfect solution, but not all solutions start out that way. Only through refinement and the expansion of ideas will this type of solution grow and, most importantly, last.
At the very least, Null Fake is a proof of concept that perhaps can serve as inspiration for others to roll out their own solution.
Under the Hood: How Null Fake Works
Null Fake is a Laravel 12 application utilizing Livewire 3 for dynamic interfaces. It integrates with Unwrangle's API to extract Amazon review data, then uses OpenAI to analyze the reviews and determine their authenticity. The process involves several key steps:
1. Pulling Reviews via Unwrangle API
Given Amazon’s restrictions on unauthenticated review access, the Unwrangle API requires an Amazon session cookie to retrieve more than the most recent reviews. This approach allows us to access a broader set of reviews for analysis.
// app/Services/ReviewFetcher.php
public function fetchReviews(string $asin): array
{
    // The Amazon session cookie is required by Unwrangle to retrieve
    // more than the most recent reviews (see note above).
    $response = Http::withHeaders([
        'Cookie' => 'session-id=your-session-id; ...',
    ])->get("https://api.unwrangle.com/v1/reviews/{$asin}");

    // Decode the JSON body into an array of reviews.
    return $response->json();
}
Explanation:
- The fetchReviews method sends an HTTP GET request to the Unwrangle API, including necessary headers such as the Amazon session cookie.
- The response is then parsed into a JSON array containing the product reviews.
2. Caching Review Data for Efficiency
To optimize performance and reduce redundant API calls, we save (or cache) fetched reviews in a local database. This ensures that repeat analyses of the same product are faster and more efficient.
// app/Models/ProductReview.php
// Columns we allow to be mass-assigned when caching a review set.
protected $fillable = ['asin', 'reviews', 'fetched_at'];

// app/Services/ReviewService.php
public function getCachedReviews(string $asin): ?ProductReview
{
    // Only reuse cached reviews fetched within the last 7 days.
    return ProductReview::where('asin', $asin)
        ->where('fetched_at', '>=', now()->subDays(7))
        ->first();
}
Explanation:
- The ProductReview model represents the cached reviews in the database.
- The getCachedReviews method retrieves reviews for a given ASIN that were fetched within the last 7 days.
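The write side of the cache isn't shown in the snippet above. Here's a minimal sketch of how it might be populated, assuming the reviews column stores the raw payload as JSON (hypothetical code, not the exact Null Fake implementation):
// Hypothetical: persist freshly fetched reviews so repeat analyses hit the cache.
ProductReview::updateOrCreate(
    ['asin' => $asin],
    [
        'reviews'    => json_encode($reviews), // raw review payload
        'fetched_at' => now(),                 // used for the 7-day freshness check
    ]
);
Eloquent's updateOrCreate keeps the cache at one row per ASIN, refreshing fetched_at on every new pull.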
3. Submitting Review Data to OpenAI
The collected reviews are compiled into a prompt and sent to OpenAI’s GPT-4 API for analysis. The prompt is carefully crafted to instruct the model to assess each review’s authenticity based on linguistic patterns and content.
// app/Services/OpenAIService.php
public function analyzeReviews(array $reviews): array
{
    // Compile the reviews into a single prompt for the model.
    $prompt = $this->buildPrompt($reviews);

    $response = Http::withToken(config('services.openai.api_key'))
        ->post('https://api.openai.com/v1/chat/completions', [
            'model' => 'gpt-4',
            'messages' => [
                ['role' => 'system', 'content' => 'You are an AI that detects fake reviews.'],
                ['role' => 'user', 'content' => $prompt],
            ],
        ]);

    // Decode the chat completion payload into an array.
    return $response->json();
}
Explanation:
- The analyzeReviews method constructs a prompt using the provided reviews and sends it to OpenAI's GPT-4 API.
- The API response is then parsed into a JSON array containing the analysis results.
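The buildPrompt helper referenced above isn't shown in the snippet. A minimal sketch of what it might look like, assuming each review arrives as an array with id and body fields (a hypothetical shape, not the exact Null Fake prompt):
// Hypothetical sketch of the prompt builder.
private function buildPrompt(array $reviews): string
{
    // Label each review with an ID so the model can reference it in its reply.
    $lines = [];
    foreach ($reviews as $i => $review) {
        $id = $review['id'] ?? 'R' . ($i + 1);
        $lines[] = "[{$id}] " . $review['body'];
    }

    return "Score each review below from 0 (genuine) to 100 (likely fake) "
        . "and respond with JSON mapping each review ID to its score.\n\n"
        . implode("\n", $lines);
}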
4. Parsing and Interpreting OpenAI’s Response
OpenAI returns a structured JSON response containing authenticity scores for each review. Null Fake parses this response, extracting the necessary data to evaluate the overall authenticity of the product’s reviews.
// app/Services/ReviewAnalyzer.php
public function parseAnalysis(array $openAiResponse): array
{
    // The model's reply arrives as a JSON string inside the chat payload,
    // so decode it before iterating over the per-review results.
    $content = json_decode($openAiResponse['choices'][0]['message']['content'], true);

    $scores = [];
    foreach ($content ?? [] as $reviewAnalysis) {
        $scores[] = [
            'review_id'   => $reviewAnalysis['id'],
            'score'       => $reviewAnalysis['score'],
            'explanation' => $reviewAnalysis['explanation'],
        ];
    }

    return $scores;
}
Explanation:
- The parseAnalysis method decodes the model's reply and walks the per-review results.
- Each entry is reduced to a review ID, its fake score, and the model's explanation.
5. Scoring System Breakdown
Each review is assigned a score from 0 (genuine) to 100 (likely fake). Null Fake aggregates these scores to determine the percentage of fake reviews and adjusts the product’s overall rating accordingly. This adjusted rating provides users with a clearer picture of the product’s true quality.
// app/Services/ScoreCalculator.php
public function calculateAdjustedRating(array $scores): float
{
    // Average the 0-100 fake scores across all reviews
    // (max(1, ...) guards against division by zero).
    $totalScore = array_sum(array_column($scores, 'score'));
    $averageScore = $totalScore / max(1, count($scores));

    // Map onto a 5-point scale: higher fake scores pull the rating toward zero.
    return max(0, 5 - ($averageScore / 20));
}
Explanation:
- The calculateAdjustedRating method computes the average fake score and adjusts the product rating on a 5-point scale, penalizing higher fake review scores. For example, an average fake score of 40 yields max(0, 5 - 40/20) = 3.0 stars.

How Does Null Fake Score Reviews?
At the heart of Null Fake is a transparent and simple scoring system. The idea isn't just to jam a structured JSON object down OpenAI's throat, parse the results, and be done. The idea was to build some interpretation of the analysis into a regimented scoring system.
Step 1: Score Each Review with OpenAI
When a product’s reviews are pulled in, we send them off to OpenAI for analysis. Each review is passed into a custom prompt designed to highlight markers of artificiality such as overly generic language, unearned enthusiasm, excessive length, and lack of specificity — all classic hallmarks of fake reviews.
The model responds with a JSON payload like this (simplified example):
{
  "detailed_scores": {
    "R1": 12,
    "R2": 78,
    "R3": 35,
    "R4": 92
  }
}
Each key represents a unique review ID, and the value is a fake score from 0 to 100. A score above 70 is considered suspicious enough to be treated as fake.
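Walking that map is straightforward. A minimal sketch, assuming $payload holds the raw JSON shown above (hypothetical code, not the exact Null Fake implementation):
// Hypothetical: flag any review at or above the fake threshold.
$decoded = json_decode($payload, true);

foreach ($decoded['detailed_scores'] as $reviewId => $fakeScore) {
    $label = $fakeScore >= 70 ? 'suspicious' : 'genuine';
    echo "{$reviewId}: {$fakeScore} ({$label})" . PHP_EOL;
}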
Step 2: Calculate Fake Review Percentage
We then analyze this score set against all reviews pulled. Typically, even with the current restrictions in place, Amazon will only let you parse up to about 100 reviews. The goal ultimately is to figure out the percentage of reviews that cross the “fake” threshold.
From the code:
if ($fakeScore >= 70) {
    $fakeCount++;
    $fakeReviews[] = [ ... ];
} else {
    $genuineCount++;
    $genuineReviews[] = [ ... ];
}
This loop tallies both fake and genuine reviews. It also keeps a detailed log — mostly to support debugging and future AI training.
If a product has 200 reviews and 58 of them score above 70, that’s a 29% fake rate. Pretty straightforward.
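The percentage itself is just the fake tally over the total; a minimal sketch, assuming the $fakeCount and $totalReviews variables from the loop above:
// 58 fakes out of 200 reviews -> 29.0% fake rate
// (max(1, ...) guards against an empty review set).
$fakePercentage = ($fakeCount / max(1, $totalReviews)) * 100;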
Step 3: Adjust the Product Rating
Amazon's star rating is based on every review, fake or not. But once we separate the fakes, we want to recalculate the product's rating using only the genuine ones.
$adjustedRating = $this->calculateAdjustedRating($genuineReviews);
That method simply averages the rating field of only the reviews that fell below the fake threshold.
For example, the product may have a 4.5-star rating based on all reviews, while the genuine ones average out to just 3.9 stars. The goal is to give you an idea of what the new adjusted rating would be compared to the original Amazon rating.
This is not a "scientific" adjustment by any means – it's really meant to put things into perspective against the calculated percentage of fake reviews, because again, Amazon would only let us look at the 100 most recent reviews.
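A minimal sketch of that averaging, assuming each genuine review carries a numeric rating field (this is the genuine-review variant, distinct from the score-based formula in ScoreCalculator shown earlier; hypothetical code, not the exact Null Fake implementation):
// Hypothetical: average the star ratings of the genuine reviews only.
private function calculateAdjustedRating(array $genuineReviews): float
{
    if (count($genuineReviews) === 0) {
        return 0.0; // nothing genuine left to average
    }

    $ratings = array_column($genuineReviews, 'rating');

    return round(array_sum($ratings) / count($ratings), 1);
}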
Step 4: Assign a Grade from A to F
To keep things human-readable, we map the fake percentage to a familiar letter grade. Here’s the actual grading function:
private function calculateGrade($fakePercentage): string
{
    if ($fakePercentage <= 10) {
        return 'A';
    } elseif ($fakePercentage <= 20) {
        return 'B';
    } elseif ($fakePercentage <= 35) {
        return 'C';
    } elseif ($fakePercentage <= 50) {
        return 'D';
    } else {
        return 'F';
    }
}
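Applied to the earlier example, a 29% fake rate lands in the 20–35 bucket, so calculateGrade(29.0) returns a C.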
Step 5: Human-Friendly Explanation
Finally, we include a plain-language explanation that puts it all together. This part was important — not everyone wants a spreadsheet of fake scores.
Here’s the logic:
return "Analysis of {$totalReviews} reviews found {$fakeCount} potentially fake reviews (".round($fakePercentage, 1).'%). ' .
($fakePercentage <= 10 ? 'This product has very low fake review activity and appears highly trustworthy.' :
($fakePercentage <= 20 ? 'This product has low fake review activity and appears trustworthy.' :
($fakePercentage <= 35 ? 'This product has moderate fake review activity. Exercise some caution.' :
($fakePercentage <= 50 ? 'This product has high fake review activity. Exercise caution when purchasing.' :
'This product has very high fake review activity. We recommend avoiding this product.')));
Planned Improvements
To enhance Null Fake’s capabilities and user experience, the following improvements are planned:
- Performance Optimization: Cut the analysis time per product URL down from the current 30–60 seconds.
- Enhanced Product Data Extraction: Incorporate additional product information, such as descriptions and images, into the analysis results to provide more context.
- Improved AI Detection Algorithms: Refine the criteria and models used to assess review authenticity, ensuring more accurate detection of fake reviews.
- Platform Expansion: Extend support to analyze reviews from other e-commerce platforms like Walmart, Best Buy, and eBay, broadening the tool’s applicability.
These enhancements aim to make Null Fake a more robust and versatile tool for consumers seeking trustworthy product reviews.
Contributions welcome!
Visit nullfake.com to experience the tool firsthand, or explore the GitHub repository to delve into the code and contribute to the project!