docs(specs): add specs for code quality

Nicola Zangrandi 2025-03-04 18:12:13 +01:00
parent c3b15a14d2
commit 60f3340536
Signed by: wasp
GPG key ID: 43C1470D890F23ED
7 changed files with 1339 additions and 1 deletion


@@ -0,0 +1,174 @@
---
description: Code quality thresholds and linting standards for consistent and maintainable code
globs: **/*.{ts,tsx,js,jsx,go,css,scss}
---
<rule>
filters:
- type: file_extension
pattern: "\\.(ts|tsx|js|jsx|go|css|scss)$"
actions:
- type: suggest
message: |
# Code Quality Metrics
## Code Complexity
1. Cyclomatic Complexity:
- Maximum function/method complexity: 15
- Maximum file complexity: 40
- Strive for functions with complexity < 10
2. Function/Method Size:
- Maximum lines per function: 50
- Target average: 15-25 lines
- Never exceed 100 lines
3. File Size:
- Maximum lines per file: 500
- Target average: 150-300 lines
- Split files that grow beyond these limits
4. Nesting Depth:
- Maximum nesting depth: 4 levels
- Target: 1-3 levels
- Refactor deeply nested code blocks
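Most of these thresholds can be enforced mechanically. A sketch of an ESLint flat config using the built-in complexity rules (the file glob and config layout are assumptions; adjust to the project's setup):
```javascript
// eslint.config.js — sketch: enforce the complexity thresholds above
// with ESLint's built-in rules. Limits mirror this document.
export default [
  {
    files: ['**/*.{js,jsx,ts,tsx}'],
    rules: {
      // Maximum cyclomatic complexity per function
      complexity: ['error', 15],
      // Maximum lines per function (blank lines and comments excluded)
      'max-lines-per-function': ['error', { max: 50, skipBlankLines: true, skipComments: true }],
      // Maximum lines per file
      'max-lines': ['error', { max: 500, skipBlankLines: true, skipComments: true }],
      // Maximum nesting depth
      'max-depth': ['error', 4],
    },
  },
];
```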
## Code Documentation
1. TypeScript/JavaScript:
- Use JSDoc for public API functions/methods
- Document complex algorithms
- Include @param and @returns tags
- Document thrown exceptions with @throws
2. Go:
- Document all exported functions, types, and variables
- Follow GoDoc conventions
- Provide examples for non-obvious functionality
## Linting Standards
1. TypeScript/JavaScript:
- Zero ESLint errors
- Zero TypeScript errors
- Max warnings: 5 per file
- Prettier for consistent formatting
2. Go:
- Zero golint errors
- All go vet checks must pass
- gofmt for consistent formatting
- Zero golangci-lint errors
3. CSS/SCSS:
- Zero stylelint errors
- BEM naming convention
- Use CSS variables for theming
## Test Coverage
1. Minimum Coverage Requirements:
- Overall project: > 70%
- Critical paths: > 90%
- Business logic: > 85%
- UI components: > 60%
2. Test Quality:
- Each test case should test one thing
- Arrange-Act-Assert pattern
- Mock external dependencies
- No flaky tests
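As an illustration of these points, a framework-agnostic sketch of a single-behavior test in the Arrange-Act-Assert shape, with the external dependency mocked (`NoteService` and `ApiClient` are hypothetical names, not part of the codebase):
```typescript
// Sketch: one behavior per test, AAA structure, external dependency mocked.
// NoteService and ApiClient are illustrative; adapt to the real modules.
interface ApiClient {
  get(path: string): Promise<{ id: number; title: string }[]>;
}

class NoteService {
  constructor(private api: ApiClient) {}
  async titles(): Promise<string[]> {
    const notes = await this.api.get('/notes');
    return notes.map(n => n.title);
  }
}

async function testTitlesReturnsOnlyTitles() {
  // Arrange: mock the external dependency instead of hitting the network
  const mockApi: ApiClient = {
    get: async () => [{ id: 1, title: 'first' }, { id: 2, title: 'second' }],
  };
  const service = new NoteService(mockApi);

  // Act
  const titles = await service.titles();

  // Assert: one deterministic expectation — no flakiness
  console.assert(JSON.stringify(titles) === JSON.stringify(['first', 'second']));
}

testTitlesReturnsOnlyTitles();
```
The same structure carries over directly to jest/vitest `expect` assertions.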
## Code Review Checklist
1. Readability:
- Descriptive variable/function names
- Consistent naming conventions
- Comments for complex logic
2. Maintainability:
- DRY (Don't Repeat Yourself)
- SOLID principles
- Single responsibility per function
3. Performance:
- Avoid N+1 queries
- Optimize loops
- Avoid unnecessary re-renders
## Automated Checks
1. Run pre-commit checks:
```bash
./scripts/pre-commit.sh
```
2. Monitor quality metrics:
```bash
./scripts/code-quality.sh
```
3. Address all issues before merging
metadata:
priority: high
version: 1.0
</rule>
examples:
- input: |
// Bad: Complex and nested code
function processData(data) {
let result = [];
for (let i = 0; i < data.length; i++) {
if (data[i].active) {
for (let j = 0; j < data[i].items.length; j++) {
if (data[i].items[j].value > 10) {
if (data[i].items[j].type === 'special') {
result.push({
id: data[i].id,
item: data[i].items[j]
});
}
}
}
}
}
return result;
}
output: |
// Good: Refactored with lower complexity and better readability
/**
* Extracts special items with values greater than 10 from active data entries
* @param {Array<Object>} data - Array of data objects to process
* @returns {Array<Object>} Filtered and processed items
*/
function processData(data) {
return data
.filter(entry => entry.active)
.flatMap(entry =>
entry.items
.filter(item => item.value > 10 && item.type === 'special')
.map(item => ({
id: entry.id,
item
}))
);
}
- input: |
// Bad: Undocumented function with poor naming
function fn(a, b) {
return a.filter(i => i > b).map(i => i * 2);
}
output: |
/**
* Filters values greater than the threshold and doubles them
* @param {number[]} values - Array of numbers to process
* @param {number} threshold - Minimum value to include in results
* @returns {number[]} Filtered and doubled values
*/
function filterAndDoubleValues(values, threshold) {
return values.filter(value => value > threshold).map(value => value * 2);
}


@@ -0,0 +1,204 @@
---
description: Standards for consistent error handling patterns across frontend and backend
globs: **/*.{ts,tsx,js,jsx,go}
---
<rule>
filters:
- type: file_extension
pattern: "\\.(ts|tsx|js|jsx|go)$"
- type: content
pattern: "(?s)try|catch|error|err|throw|panic|recover"
actions:
- type: suggest
message: |
# Error Handling Standards
## Frontend (TypeScript/JavaScript)
1. Always use typed error handling:
```typescript
try {
// Operation that might fail
} catch (error) {
if (error instanceof ApiError) {
// Handle API errors
} else if (error instanceof ValidationError) {
// Handle validation errors
} else {
// Handle unexpected errors
console.error('Unexpected error:', error);
}
}
```
2. Define custom error classes:
```typescript
class ApplicationError extends Error {
constructor(message: string) {
super(message);
this.name = 'ApplicationError';
}
}
class ApiError extends ApplicationError {
statusCode: number;
constructor(message: string, statusCode: number) {
super(message);
this.name = 'ApiError';
this.statusCode = statusCode;
}
}
```
3. For async functions, always use try/catch with async/await:
```typescript
async function fetchData() {
try {
const response = await api.get('/endpoint');
return response.data;
} catch (error) {
handleError(error);
throw error; // Re-throw if needed
}
}
```
4. For React components, implement error boundaries:
```tsx
import { ErrorBoundary } from 'react-error-boundary';
function ErrorFallback({ error, resetErrorBoundary }) {
return (
<div role="alert">
<p>Something went wrong:</p>
<pre>{error.message}</pre>
<button onClick={resetErrorBoundary}>Try again</button>
</div>
);
}
function MyComponent() {
return (
<ErrorBoundary FallbackComponent={ErrorFallback}>
<ComponentThatMightError />
</ErrorBoundary>
);
}
```
## Backend (Go)
1. Return errors rather than using panic:
```go
func ProcessData(data []byte) (Result, error) {
if len(data) == 0 {
return Result{}, errors.New("empty data provided")
}
// Process data
return result, nil
}
```
2. Use error wrapping for context:
```go
import "fmt"
func FetchUserData(userID string) ([]byte, error) {
data, err := database.Query(userID)
if err != nil {
return nil, fmt.Errorf("fetching user data: %w", err)
}
return data, nil
}
```
3. Use custom error types for specific cases:
```go
type NotFoundError struct {
Resource string
ID string
}
func (e NotFoundError) Error() string {
return fmt.Sprintf("%s with ID %s not found", e.Resource, e.ID)
}
// Usage
if data == nil {
return NotFoundError{Resource: "User", ID: userID}
}
```
4. Check errors immediately:
```go
resp, err := http.Get(url)
if err != nil {
return nil, err
}
defer resp.Body.Close()
```
## General Principles
1. Log errors appropriately:
- Debug: For development information
- Info: For tracking normal operation
- Warn: For potential issues
- Error: For actual errors affecting operation
- Fatal: For errors requiring application shutdown
2. Don't expose system errors to users:
- Log the technical details
- Return user-friendly messages
3. Include contextual information:
- What operation was being performed
- What resources were involved
- Any IDs or references that help identify the context
4. Handle all error cases - never silently ignore errors
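A minimal sketch of principles 2 and 3 together — log the technical detail with context, return only a friendly message (the `ApiError` class is re-declared here to keep the sketch self-contained, and the logging shape is an assumption):
```typescript
// Sketch: log technical details with context, surface a user-friendly message.
class ApiError extends Error {
  constructor(message: string, public statusCode: number) {
    super(message);
    this.name = 'ApiError';
  }
}

function userMessageFor(
  error: unknown,
  context: { operation: string; resourceId?: string }
): string {
  // Full technical detail goes to the log, with operation and resource context
  console.error('operation failed', { error, ...context });

  // The user never sees internals — only a friendly, actionable message
  if (error instanceof ApiError && error.statusCode === 404) {
    return 'The requested item could not be found.';
  }
  return 'Something went wrong. Please try again later.';
}
```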
metadata:
priority: high
version: 1.0
</rule>
examples:
- input: |
// Bad: Untyped error handling
try {
const data = await fetchData();
} catch (error) {
console.log(error);
}
output: |
// Good: Typed error handling with proper logging and user feedback
try {
const data = await fetchData();
} catch (error) {
if (error instanceof ApiError) {
toast.error('Could not connect to the server. Please try again later.');
logger.error('API Error:', { error, endpoint: '/api/data' });
} else {
toast.error('An unexpected error occurred.');
logger.error('Unexpected error:', { error });
}
}
- input: |
// Bad: No error handling in Go function
func GetUser(id string) User {
data := db.Find(id)
return User{Name: data.Name, Email: data.Email}
}
output: |
// Good: Proper error handling and propagation
func GetUser(id string) (User, error) {
data, err := db.Find(id)
if err != nil {
return User{}, fmt.Errorf("failed to find user %s: %w", id, err)
}
return User{Name: data.Name, Email: data.Email}, nil
}


@@ -0,0 +1,345 @@
---
description: Standards for performance benchmarks and optimization strategies
globs: **/*.{ts,tsx,js,jsx,go,css,scss}
---
<rule>
filters:
- type: file_extension
pattern: "\\.(ts|tsx|js|jsx|go|css|scss)$"
- type: content
pattern: "(?s)performance|optimization|render|benchmark|profil|memory|cpu|network|latency|throughput"
actions:
- type: suggest
message: |
# Performance Optimization Standards
## Frontend Performance
1. Core Web Vitals Targets:
- Largest Contentful Paint (LCP): < 2.5s
- First Input Delay (FID): < 100ms
- Cumulative Layout Shift (CLS): < 0.1
- First Contentful Paint (FCP): < 1.5s
2. Bundle Size Targets:
- Initial JS bundle: < 170KB compressed
- Initial CSS: < 50KB compressed
- Total page size: < 1MB
- Use code splitting for routes
3. Rendering Optimization:
- React memo for pure components
- Virtual lists for long scrollable content
- Debounced/throttled event handlers
- useCallback/useMemo for expensive operations
4. React Component Guidelines:
- Move state up to appropriate level
- Use context API judiciously
- Avoid prop drilling
- Implement shouldComponentUpdate or memo
5. Asset Optimization:
- Optimize images (WebP/AVIF formats)
- Lazy load images and non-critical components
- Use font-display: swap
- Compress SVGs
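The Core Web Vitals targets above can be checked programmatically. A sketch of a budget helper — the hookup to a measurement library such as `web-vitals` is left as a comment and is an assumption:
```typescript
// Sketch: compare a measured metric against the budgets above.
// Values are in milliseconds, except CLS, which is unitless.
const budgets: Record<string, number> = {
  LCP: 2500,
  FID: 100,
  CLS: 0.1,
  FCP: 1500,
};

function withinBudget(name: string, value: number): boolean {
  const limit = budgets[name];
  // Unknown metrics are treated as over budget so they get noticed
  return limit !== undefined && value <= limit;
}

// Hookup sketch (assumes the web-vitals package):
// import { onLCP } from 'web-vitals';
// onLCP(m => { if (!withinBudget('LCP', m.value)) reportRegression('LCP', m.value); });
```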
## Backend Performance
1. API Response Time Targets:
- P50 (median): < 200ms
- P95: < 500ms
- P99: < 1000ms
2. Database Optimization:
- Use indexes for frequently queried fields
- Optimize queries with EXPLAIN
- Use database connection pooling
- Implement query caching for repeated requests
3. Go Performance:
- Use goroutines appropriately
- Avoid unnecessary allocations
- Profile with pprof
- Consider sync.Pool for frequent allocations
4. API Design:
- GraphQL for flexible data fetching
- Pagination for large result sets
- Partial responses
- Batch operations
5. Caching Strategy:
- Cache calculation results
- HTTP caching headers
- In-memory caching for frequent reads
- Distributed cache for shared data
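For the in-memory case, a minimal TTL cache sketch (the injectable clock exists only to keep the sketch deterministic and testable; names are illustrative):
```typescript
// Sketch: in-memory cache with per-entry TTL for frequent reads.
// The `now` parameter defaults to Date.now and is injectable for tests.
class TtlCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (this.now() >= entry.expiresAt) {
      this.entries.delete(key); // evict stale entries lazily on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.entries.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }
}
```
A shared cache like Redis follows the same get/set-with-TTL contract, so call sites need not change when moving to a distributed setup.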
## Network Optimization
1. HTTP/2 or HTTP/3 Support:
- Enable multiplexing
- Server push for critical resources
- Header compression
2. API Compression:
- Enable gzip/Brotli compression
- Compress responses > 1KB
- Skip compression for already compressed formats
3. CDN Usage:
- Static assets via CDN
- Edge caching for API responses
- Regional deployments for global users
## Monitoring and Benchmarking
1. Tools:
- Lighthouse for frontend performance
- New Relic/Datadog for backend monitoring
- Custom traces for critical paths
2. Load Testing:
- Benchmark on environments that match production specifications
- Test at 2x expected peak load
- Identify bottlenecks
- Establish baseline and regression tests
3. Regular Performance Reviews:
- Weekly performance dashboard review
- Monthly deep-dive analysis
- Continuous monitoring alerts
## Performance Budgets
1. Regression Prevention:
- No more than 5% performance degradation between releases
- Alert on exceeding performance budgets
- Block deployment on critical performance failures
2. Optimization Targets:
- Identify top 3 performance issues each sprint
- Continuously improve critical user journeys
- Set specific targets for identified bottlenecks
## Implementation Guidelines
1. Performance First:
- Consider performance implications during design
- Profile before and after significant changes
- Document performance considerations in PRs
2. Known Patterns:
- Use established patterns for common performance issues
- Document performance tricks and techniques
- Share learnings across teams
3. Testing Environment:
- Test on low-end devices
- Test on slower network connections
- Test with representative datasets
metadata:
priority: high
version: 1.0
</rule>
examples:
- input: |
// Bad: Inefficient React component
function UserList({ users }) {
const [filter, setFilter] = useState('');
// Expensive operation on every render
const filteredUsers = users.filter(user =>
user.name.toLowerCase().includes(filter.toLowerCase())
);
return (
<div>
<input
type="text"
value={filter}
onChange={e => setFilter(e.target.value)}
/>
{filteredUsers.map(user => (
<div key={user.id}>
<img src={user.avatar} />
{user.name}
</div>
))}
</div>
);
}
output: |
// Good: Optimized React component (assumes React hooks and a debounce
// helper such as lodash.debounce are imported)
function UserList({ users }) {
const [filter, setFilter] = useState('');
// Memoized expensive operation
const filteredUsers = useMemo(() =>
users.filter(user =>
user.name.toLowerCase().includes(filter.toLowerCase())
),
[users, filter]
);
// Debounced filter change
const handleFilterChange = useCallback(
debounce(value => setFilter(value), 300),
[]
);
return (
<div>
<input
type="text"
defaultValue={filter}
onChange={e => handleFilterChange(e.target.value)}
/>
{filteredUsers.length > 100 ? (
<VirtualList
height={500}
itemCount={filteredUsers.length}
itemSize={50}
renderItem={({ index, style }) => {
const user = filteredUsers[index];
return (
<div key={user.id} style={style}>
<img
src={user.avatar}
loading="lazy"
width={40}
height={40}
alt={`${user.name}'s avatar`}
/>
{user.name}
</div>
);
}}
/>
) : (
filteredUsers.map(user => (
<div key={user.id}>
<img
src={user.avatar}
loading="lazy"
width={40}
height={40}
alt={`${user.name}'s avatar`}
/>
{user.name}
</div>
))
)}
</div>
);
}
- input: |
// Bad: Inefficient database query
func GetUserPosts(db *sql.DB, userID int) ([]Post, error) {
var posts []Post
rows, err := db.Query("SELECT * FROM posts WHERE user_id = ?", userID)
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var post Post
err := rows.Scan(&post.ID, &post.Title, &post.Content, &post.UserID, &post.CreatedAt)
if err != nil {
return nil, err
}
// N+1 query problem
commentRows, err := db.Query("SELECT * FROM comments WHERE post_id = ?", post.ID)
if err != nil {
return nil, err
}
defer commentRows.Close()
for commentRows.Next() {
var comment Comment
err := commentRows.Scan(&comment.ID, &comment.Content, &comment.PostID, &comment.UserID)
if err != nil {
return nil, err
}
post.Comments = append(post.Comments, comment)
}
posts = append(posts, post)
}
return posts, nil
}
output: |
// Good: Optimized database query — a single IN-clause fetch for comments
// (uses github.com/jmoiron/sqlx for sqlx.In and Rebind)
func GetUserPosts(db *sqlx.DB, userID int) ([]Post, error) {
// First, fetch all posts in one query
var posts []Post
postRows, err := db.Query(`
SELECT id, title, content, user_id, created_at
FROM posts
WHERE user_id = ?
ORDER BY created_at DESC`,
userID)
if err != nil {
return nil, fmt.Errorf("error querying posts: %w", err)
}
defer postRows.Close()
postIDs := []int{}
for postRows.Next() {
var post Post
err := postRows.Scan(&post.ID, &post.Title, &post.Content, &post.UserID, &post.CreatedAt)
if err != nil {
return nil, fmt.Errorf("error scanning post: %w", err)
}
posts = append(posts, post)
postIDs = append(postIDs, post.ID)
}
if err := postRows.Err(); err != nil {
return nil, fmt.Errorf("error iterating posts: %w", err)
}
// Build the lookup map only after all appends: taking &posts[i] inside the
// loop would leave dangling pointers whenever append reallocates the slice.
postMap := make(map[int]*Post, len(posts))
for i := range posts {
postMap[posts[i].ID] = &posts[i]
}
if len(postIDs) == 0 {
return posts, nil
}
// Use a single query with IN clause to fetch all comments for all posts
query, args, err := sqlx.In(`
SELECT id, content, post_id, user_id
FROM comments
WHERE post_id IN (?)
ORDER BY created_at ASC`,
postIDs)
if err != nil {
return nil, fmt.Errorf("error preparing IN query: %w", err)
}
query = db.Rebind(query)
commentRows, err := db.Query(query, args...)
if err != nil {
return nil, fmt.Errorf("error querying comments: %w", err)
}
defer commentRows.Close()
// Populate the comments for each post
for commentRows.Next() {
var comment Comment
var postID int
err := commentRows.Scan(&comment.ID, &comment.Content, &postID, &comment.UserID)
if err != nil {
return nil, fmt.Errorf("error scanning comment: %w", err)
}
if post, ok := postMap[postID]; ok {
post.Comments = append(post.Comments, comment)
}
}
return posts, nil
}


@@ -0,0 +1,203 @@
---
description: Standards for versioning Cursor rules as they evolve over time
globs: .cursor/rules/*.mdc
---
<rule>
filters:
- type: file_extension
pattern: "\\.mdc$"
- type: file_path
pattern: "^\\.cursor\\/rules\\/"
actions:
- type: suggest
message: |
# Rule Versioning Strategy
## Version Format
1. Use semantic versioning for rules:
```
metadata:
version: MAJOR.MINOR.PATCH
```
- MAJOR: Breaking changes that require significant adjustments
- MINOR: New features or guidance that's backward compatible
- PATCH: Bug fixes, clarifications, or examples
2. Version history section:
```
version_history:
- version: 1.2.0
date: 2023-07-15
changes:
- Added guidance for new technology X
- Updated examples for framework version Y
- version: 1.1.1
date: 2023-06-10
changes:
- Fixed incorrect example in section Z
```
## When to Version
1. MAJOR version increment:
- Changing fundamental principles or approaches
- Removing previously required processes
- Modifying the rule's core purpose
- Restructuring sections in a non-backward compatible way
2. MINOR version increment:
- Adding new guidance or best practices
- Expanding rule scope to new file types
- Adding new sections
- Enhancing examples
3. PATCH version increment:
- Correcting typos or grammar
- Clarifying existing points
- Fixing incorrect examples
- Updating links or references
## Version Management Process
1. Documentation:
- Always include a changelog in the version_history section
- Summarize changes in commit messages
- Reference related issues or discussions
2. Review and Approval:
- Major versions require team review
- Minor versions need at least one reviewer
- Patch versions can be applied directly for urgent fixes
3. Communication:
- Announce major version changes to all team members
- Document migration paths for breaking changes
- Provide context for why changes were made
## Rule Deprecation
1. Deprecation Process:
- Mark rule as deprecated with reason:
```
metadata:
deprecated: true
deprecation_reason: "Replaced by new-rule-name.mdc"
removal_date: "2023-12-31"
```
- Keep deprecated rules for at least 3 months
- Reference replacement rules when applicable
2. Archiving:
- Move to .cursor/rules/archived/ directory
- Retain version history
- Document archival reason
## Compatibility Considerations
1. Tool Compatibility:
- Test rule changes with current tooling
- Ensure rules work with target Cursor versions
- Document minimum required tool versions
2. Backward Compatibility:
- Maintain backward compatibility when possible
- Document migration steps when breaking
- Support transition periods for major changes
metadata:
priority: high
version: 1.0.0
version_history:
- version: 1.0.0
date: 2023-09-01
changes:
- Initial version of rule versioning strategy
</rule>
examples:
- input: |
# Bad: Rule with no version information
---
description: Frontend styling guidelines
globs: **/*.css
---
<rule>
# CSS Guidelines
Use BEM naming conventions.
</rule>
output: |
# Good: Properly versioned rule
---
description: Frontend styling guidelines
globs: **/*.css
---
<rule>
# CSS Guidelines
Use BEM naming conventions.
metadata:
priority: medium
version: 1.0.0
version_history:
- version: 1.0.0
date: 2023-09-01
changes:
- Initial CSS guidelines
</rule>
- input: |
# Bad: Rule update without version increment
# Previous version:
# metadata:
# version: 1.2.0
# Updated rule with new section but same version
---
description: API design standards
globs: **/*.ts
---
<rule>
# API Design Standards
## REST Guidelines
Use proper HTTP methods.
## GraphQL Guidelines (New section)
Use fragments for reusable components.
metadata:
priority: high
version: 1.2.0
</rule>
output: |
# Good: Rule update with proper version increment
---
description: API design standards
globs: **/*.ts
---
<rule>
# API Design Standards
## REST Guidelines
Use proper HTTP methods.
## GraphQL Guidelines
Use fragments for reusable components.
metadata:
priority: high
version: 1.3.0
version_history:
- version: 1.3.0
date: 2023-09-15
changes:
- Added GraphQL guidelines section
- version: 1.2.0
date: 2023-08-10
changes:
- Updated REST guidelines with authentication best practices
</rule>


@@ -0,0 +1,178 @@
---
description: Standards for Git workflow, branching, pull requests, and code reviews
globs: **/*
---
<rule>
filters:
- type: event
pattern: "git|branch|commit|pull_request|merge|rebase"
actions:
- type: suggest
message: |
# Version Control Workflow
## Branch Naming Convention
1. Format: `<type>/<issue-id>-<short-description>`
- Example: `feature/123-user-authentication`
- Example: `fix/456-login-crash`
2. Branch types:
- `feature/` - New features
- `fix/` - Bug fixes
- `refactor/` - Code restructuring
- `docs/` - Documentation updates
- `test/` - Test additions/fixes
- `chore/` - Maintenance tasks
3. Description:
- Use lowercase
- Use hyphens instead of spaces
- Keep it brief but descriptive
- Include ticket/issue number when applicable
## Commit Best Practices
1. Follow conventional commits format:
```
<type>(<scope>): <description>
[optional body]
[optional footer(s)]
```
- See the conventional_commits rule for details
2. Keep commits atomic:
- Each commit should address a single concern
- Separate logical changes into separate commits
- Avoid "fix typo" commits - squash them
3. Rebase and cleanup before pushing:
```bash
git rebase -i origin/main
```
- Squash related commits
- Fix up minor changes
- Ensure each commit passes tests
## Pull Request Process
1. PR Template:
```markdown
## Description
[Describe the changes made and why]
## Issue
Closes #[issue-number]
## Type of change
- [ ] Bug fix
- [ ] New feature
- [ ] Breaking change
- [ ] Documentation update
## Checklist
- [ ] My code follows style guidelines
- [ ] I have performed a self-review
- [ ] I have added tests for new functionality
- [ ] Tests pass locally
- [ ] Documentation has been updated
```
2. PR Size:
- Aim for < 400 lines of code changed
- Break large changes into smaller PRs
- Each PR should be focused on a single feature/fix
3. PR Lifecycle:
- Create draft PR for work in progress
- Request reviews when ready
- Address all review comments
- Squash commits before merging
## Code Review Standards
1. Reviewer Responsibilities:
- Review within one business day
- Check functionality, design, and style
- Provide constructive feedback
- Approve only if confident in the changes
2. Author Responsibilities:
- Respond to all comments
- Explain complex changes
- Make requested changes or explain why not
- Keep PR updated with latest main branch
3. Review Comments:
- Be specific and actionable
- Explain the "why" not just the "what"
- Distinguish between required and optional changes
- Use questions for clarification
## Merging Strategy
1. Prefer squash merging:
```bash
git checkout main
git merge --squash feature/123-user-authentication
git commit -m "feat(auth): implement user authentication (#123)"
```
2. Keep main/master branch stable:
- All tests must pass
- CI pipeline must be green
- Required reviews must be approved
3. Delete branches after merging:
```bash
git branch -d feature/123-user-authentication
```
## Git Hooks
1. Pre-commit:
- Run linters
- Run tests related to changes
- Verify commit message format
2. Pre-push:
- Run full test suite
- Check for build errors
- Verify branch is up to date
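A sketch of the commit-message check as a shell function (the accepted type list mirrors the branch types above and is an assumption; wire it into `.git/hooks/commit-msg` as shown in the comment):
```shell
# Sketch: validate the first line of a commit message against the
# conventional commits format. Type list mirrors the branch types above.
check_commit_msg() {
  printf '%s\n' "$1" | head -n1 |
    grep -qE '^(feat|fix|docs|refactor|test|chore|perf)(\([a-z0-9-]+\))?(!)?: .+'
}

# Example hook body (.git/hooks/commit-msg):
#   check_commit_msg "$(cat "$1")" || {
#     echo "commit message must follow conventional commits" >&2
#     exit 1
#   }
```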
metadata:
priority: high
version: 1.0
</rule>
examples:
- input: |
# Bad: Poor branch naming
git checkout -b fix-login-issue
output: |
# Good: Proper branch naming with type and issue number
git checkout -b fix/123-login-authentication-failure
- input: |
# Bad: Vague commit message
git commit -m "fixed stuff"
output: |
# Good: Conventional commit with scope and clear description
git commit -m "fix(auth): resolve login failure when using special characters in password"
- input: |
# Bad: Direct commits to master/main
git checkout main
# make changes
git commit -m "Quick fix for production"
git push origin main
output: |
# Good: Work on feature branch and create PR
git checkout -b fix/456-production-hotfix
# make changes
git commit -m "fix(core): prevent service crash when database connection fails"
git push origin fix/456-production-hotfix
# Then create pull request through UI or API


@@ -22,6 +22,7 @@ The following table lists all the specification documents for the QuickNotes app
| API | API endpoints and communication | [API](specs/api.md) |
| Frontend | User interface and client-side features | [Frontend](specs/frontend.md) |
| Authentication | Local-only authentication and potential future enhancements | [Authentication](specs/authentication.md) |
| Code Quality | Refactoring plan to meet new code quality standards | [Code Quality Refactoring](specs/code_quality_refactoring.md) |
## Key Features
@@ -45,4 +46,5 @@ The following table lists all the specification documents for the QuickNotes app
- Designed for local use with potential for scalability
- Documents can be stored on local disk or blob storage (S3) in future
- Comprehensive API and Emacs mode for enhanced user interaction
- Comprehensive API and Emacs mode for enhanced user interaction
- Code quality standards enforced through linting, testing, and code reviews


@@ -0,0 +1,232 @@
# Code Quality Refactoring Specification
## Overview
This specification defines a comprehensive plan for refactoring the QuickNotes application to comply with the newly established code quality rules. These rules represent our commitment to maintainable, performant, and error-resistant code. The refactoring effort prioritizes the following rule sets:
1. Error Handling Standards
2. Code Quality Metrics
3. Performance Optimization
## Refactoring Scope
The refactoring targets all components of the QuickNotes application:
- Frontend (Svelte/SvelteKit)
- Backend (Go with Gin)
- Database interaction layer
- CI/CD pipeline and developer workflows
## Frontend Refactoring Plan
### Phase 1: Error Handling Refactoring
1. **Create Error Class Hierarchy**:
- Implement a base `ApplicationError` class
- Define specific error subtypes like `ApiError`, `ValidationError`, `NetworkError`
- Add proper type information for all error classes
2. **Implement Error Boundaries**:
- Add a global error boundary for the application
- Create component-specific error boundaries for critical features
- Implement standardized fallback UI components
3. **Adopt Typed Error Handling**:
- Refactor all `catch` blocks to use instance checking
- Ensure proper error logging with contextual information
- Implement user-friendly error messages with toast notifications
4. **API Error Handling**:
- Standardize API error response format
- Implement error response parsers for typed handling
- Ensure errors bubble up appropriately or are handled locally
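As one possible shape for the standardized parser, a sketch (the wire format `{ code, message }` and the class names are assumed conventions for illustration, not an existing API):
```typescript
// Sketch: parse a standardized API error payload into a typed error.
// The payload shape { code, message } is an assumed convention.
class ApiError extends Error {
  constructor(message: string, public status: number, public code: string) {
    super(message);
    this.name = 'ApiError';
  }
}

function parseApiError(status: number, body: unknown): ApiError {
  if (body && typeof body === 'object' && 'message' in body && 'code' in body) {
    const { message, code } = body as { message: string; code: string };
    return new ApiError(message, status, code);
  }
  // Fall back to a generic error when the payload is not in the expected shape
  return new ApiError('Unexpected server error', status, 'unknown');
}
```
Catch blocks can then branch on `instanceof ApiError` and `code` instead of string-matching raw responses.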
### Phase 2: Code Quality Improvements
1. **Reduce Code Complexity**:
- Identify and refactor functions with high cyclomatic complexity
- Split large files (>500 lines) into smaller modules
- Reduce nesting depth to maximum of 4 levels
2. **Enhance Documentation**:
- Add JSDoc comments to all public functions and components
- Document complex algorithms with clear explanations
- Include examples for non-obvious functionality
3. **Test Coverage Improvements**:
- Increase test coverage to meet minimum requirements
- Focus on critical paths (>90% coverage)
- Implement proper mocking for external dependencies
### Phase 3: Performance Optimization
1. **Core Web Vitals Optimization**:
- Measure current performance metrics (LCP, FID, CLS, FCP)
- Implement lazy loading for non-critical components
- Optimize asset loading and rendering
2. **Bundle Size Reduction**:
- Implement code splitting for routes
- Tree-shake unused dependencies
- Optimize CSS and JavaScript bundles
3. **Component Rendering Optimization**:
- Identify and fix unnecessary re-renders
- Implement memoization for expensive computations
- Virtualize long lists (feeds, readlist items)
4. **Asset Optimization**:
- Convert images to WebP format
- Implement responsive image loading
- Optimize SVGs and compress static assets
## Backend Refactoring Plan
### Phase 1: Error Handling Refactoring
1. **Implement Error Types**:
- Create custom error types for domain-specific errors
- Add error wrapping for proper context
- Implement error response standardization
2. **Refactor Error Propagation**:
- Ensure all functions properly return errors
- Replace panics with proper error returns
- Add context to errors with `fmt.Errorf` and `%w`
3. **Enhance Error Logging**:
- Implement structured logging for errors
- Include contextual information with errors
- Ensure proper error level classification
4. **User-Facing Error Handling**:
- Create user-friendly error messages
- Sanitize sensitive information from errors
- Implement proper HTTP status codes for API errors
### Phase 2: Code Quality Improvements
1. **Reduce Function Complexity**:
- Refactor functions exceeding complexity limits
- Break long functions into smaller, focused ones
- Improve function and variable naming
2. **Enhance Documentation**:
- Document all exported functions, types, and variables
- Follow GoDoc conventions
- Add examples for complex functionality
3. **Test Coverage Improvements**:
- Increase test coverage to meet minimum requirements
- Implement table-driven tests
- Add integration tests for critical flows
### Phase 3: Performance Optimization
1. **Database Query Optimization**:
- Eliminate N+1 query problems
- Implement proper indexing
- Use query explain and analyze for tuning
2. **API Response Time Improvement**:
- Implement caching for frequent requests
- Add pagination for large result sets
- Optimize data retrieval patterns
3. **Resource Utilization**:
- Profile memory usage and fix leaks
- Optimize goroutine usage
- Implement connection pooling
4. **Caching Implementation**:
- Add in-memory caching for static data
- Implement HTTP caching headers
- Consider Redis for distributed caching
## Implementation Strategy
### Prioritization
1. Error handling refactoring (highest priority)
2. Critical performance issues affecting user experience
3. Code quality improvements that impact maintainability
4. Test coverage improvements
5. Documentation enhancements
### Phases and Timeline
**Phase 1 (Weeks 1-2)**: Error handling standardization
- Complete error class hierarchy
- Standardize error handling patterns
- Implement error boundaries
**Phase 2 (Weeks 3-4)**: Code quality improvements
- Reduce complexity in high-priority modules
- Fix critical maintainability issues
- Address test coverage gaps
**Phase 3 (Weeks 5-6)**: Performance optimization
- Fix critical performance bottlenecks
- Implement caching strategy
- Optimize rendering and bundle size
**Phase 4 (Weeks 7-8)**: Documentation and finalization
- Enhance documentation
- Address remaining issues
- Final testing and validation
### Version Control Workflow
All refactoring work should follow the version control workflow rule:
1. Create feature branches for each refactoring task:
- `refactor/error-handling-frontend`
- `refactor/error-handling-backend`
- `refactor/performance-feeds-list`
2. Ensure commit messages follow conventional format:
- `refactor(frontend): implement typed error handling`
- `perf(api): optimize note retrieval query`
- `test(backend): improve coverage for error cases`
3. Keep PRs focused and manageable (<400 lines of code)
## Testing and Validation
### Validation Metrics
1. **Code Quality**:
- Linting passes with zero errors
- Complexity metrics meet thresholds
- No regressions in existing functionality
2. **Performance**:
- API response times meet targets
- Frontend rendering metrics improved
- Bundle sizes reduced
3. **Error Handling**:
- All edge cases properly handled
- Error messages are user-friendly
- Proper logging of errors
### Testing Approach
1. **Unit Tests**:
- Test individual components and functions
- Focus on error scenarios and edge cases
- Meet or exceed coverage requirements
2. **Integration Tests**:
- Test interactions between components
- Verify error propagation works correctly
- Test performance under typical conditions
3. **End-to-End Tests**:
- Verify complete user flows
- Test error recovery scenarios
- Validate performance in production-like environment
## Conclusion
This refactoring plan provides a comprehensive approach to elevating the codebase quality to meet the new standards. By prioritizing error handling, code quality, and performance, we ensure that QuickNotes becomes more maintainable, robust, and performant while providing a better experience for users.