eSewa vs Khalti: how I keep Nepali payments honest

· 3 min read
Abishek Neupane
Self Learner

I explored the most widely used online payment gateways in Nepal, eSewa and Khalti, and learned that anyone can change the payment amount in the browser. If we rely on that number, someone can buy a Rs 1000 item for Rs 1. Here is how I keep things honest with eSewa and Khalti.

Why the paranoia?

Browsers are editable. Inputs can be changed, scripts can be rerun, URLs can be faked. The only place that should decide whether money actually moved is your backend, not the page.

eSewa: signed forms with HMAC

  • eSewa V2 gives you a secret key. You build a small string from the amount, transaction id, and product code, then sign it with HMAC-SHA256 using that secret.
  • You post the amount plus that signature to eSewa. On their side, they recompute the same hash. If it matches, the request is legit; if a user tampered with the amount, it fails.
  • After payment, eSewa redirects back with Base64 encoded JSON. I decode it, recompute the signature with my secret, and only then mark the order as paid. Anything that fails this check is logged and rejected (see the sketch below).
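
Here is roughly what that looks like in Node/TypeScript. This is a minimal sketch: the signed field names (total_amount, transaction_uuid, product_code) come from my reading of the eSewa ePay v2 docs, so confirm them against the current documentation before copying anything.

import { createHmac } from 'crypto';

// Build the eSewa v2 message string from the signed fields and sign it with
// the merchant secret (loaded from env/vault, never shipped to the browser).
function signEsewaPayload(
  totalAmount: string,
  transactionUuid: string,
  productCode: string,
  secretKey: string,
): string {
  const message = `total_amount=${totalAmount},transaction_uuid=${transactionUuid},product_code=${productCode}`;
  return createHmac('sha256', secretKey).update(message).digest('base64');
}

// On the success callback: decode the Base64 JSON, re-sign the listed fields
// with the same secret, and compare against the signature eSewa sent back.
function verifyEsewaCallback(encodedResponse: string, secretKey: string): boolean {
  const data = JSON.parse(Buffer.from(encodedResponse, 'base64').toString('utf8'));
  const message = (data.signed_field_names as string)
    .split(',')
    .map((field) => `${field}=${data[field]}`)
    .join(',');
  const expected = createHmac('sha256', secretKey).update(message).digest('base64');
  return expected === data.signature;
}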

Khalti: server-to-server with a pidx

  • Khalti V2 is more API-first. My backend calls their API with Authorization: Key <secret> and the amount in paisa. Khalti responds with a pidx and a redirect URL.
  • The user pays on Khalti. When they come back with ?pidx=..., I do not trust that alone. I call Khalti again and ask, "What is the status of this pidx?" Only their response flips the order to PAID, as sketched below.
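
A minimal sketch of both calls in Node/TypeScript. The endpoint paths and field names are from my reading of Khalti's ePayment docs and should be treated as assumptions (the sandbox also uses a different base URL):

const KHALTI_BASE = 'https://khalti.com/api/v2'; // assumed live base URL

// Step 1: create the payment server-side; Khalti returns a pidx and payment_url.
async function initiateKhaltiPayment(amountPaisa: number, orderId: string, secretKey: string) {
  const res = await fetch(`${KHALTI_BASE}/epayment/initiate/`, {
    method: 'POST',
    headers: { Authorization: `Key ${secretKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({
      amount: amountPaisa, // paisa, not rupees
      purchase_order_id: orderId,
      purchase_order_name: `Order ${orderId}`,
      return_url: 'https://example.com/payment/callback', // placeholder URLs
      website_url: 'https://example.com',
    }),
  });
  return (await res.json()) as { pidx: string; payment_url: string };
}

// Step 2: after the redirect, ask Khalti's server for the status of that pidx.
// Only a "Completed" status from this lookup flips the order to PAID.
async function verifyKhaltiPayment(pidx: string, secretKey: string): Promise<boolean> {
  const res = await fetch(`${KHALTI_BASE}/epayment/lookup/`, {
    method: 'POST',
    headers: { Authorization: `Key ${secretKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ pidx }),
  });
  const data = await res.json();
  return data.status === 'Completed';
}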

Quick comparison

| What | eSewa (V2) | Khalti (V2) |
| --- | --- | --- |
| Setup | HTML form + signature | Backend API call |
| Trust anchor | HMAC-SHA256 signature | Bearer token + pidx lookup |
| Final check | Recompute signature locally | Ask Khalti server for status |

What I actually do in code

  1. When a user clicks Checkout, I fetch the merchant keys from a safe store (vault/env) and never expose them to the frontend.
  2. For eSewa, I sign the payload with HMAC-SHA256 before redirecting; on callback I decode, re-sign, and compare.
  3. For Khalti, I create the pidx server-side, redirect the user, and on callback I immediately call Khalti's verify endpoint.
  4. Only after these checks do I update the order to PAID. Anything mismatched becomes an audit log entry.

Takeaways

  • Never trust frontend amounts or statuses.
  • Keep secrets off the browser; everything sensitive happens on the backend.
  • Always verify after redirect: re-sign for eSewa, lookup for Khalti. Those two steps have saved me from fake Rs 1 checkouts.

Secure Authentication in NestJS E-Commerce: JWT, Access Tokens, and Refresh Tokens Explained

· 9 min read
Abishek Neupane
Self Learner

1. What is JWT (JSON Web Token)?

JWT is a compact, self-contained way to securely transmit information between parties as a JSON object. It's digitally signed, so it can be verified and trusted.

JWT Structure

A JWT looks like this:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4ifQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c

It has 3 parts separated by dots (.):

HEADER.PAYLOAD.SIGNATURE

Part 1: Header

{
  "alg": "HS256", // Algorithm used to sign (HMAC SHA256)
  "typ": "JWT"    // Type of token
}

This is Base64Url encoded → eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9

Part 2: Payload (Claims)

{
  "sub": "82919d9f-7c7c-4e82-a613-3ce0b7bc523b", // Subject (user ID)
  "email": "[email protected]",
  "role": "seller",
  "iat": 1766993470, // Issued At (timestamp)
  "exp": 1766994370  // Expiration (timestamp)
}

This is Base64Url encoded → eyJzdWIiOiI4MjkxOWQ5Zi03...

Part 3: Signature

HMACSHA256(
  base64UrlEncode(header) + "." + base64UrlEncode(payload),
  "your-super-secret-jwt-key" // Your JWT_SECRET from .env
)

This creates a signature that verifies the token wasn't tampered with.
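
If you want to see that check outside a library, here is a minimal TypeScript sketch using Node's crypto module; in the actual app, jsonwebtoken / @nestjs/jwt does this for you.

import { createHmac } from 'crypto';

// Recompute the HMAC-SHA256 over "header.payload" and compare it with the
// signature segment of the token. A mismatch means the token was tampered
// with or signed with a different secret.
function verifyHs256(token: string, secret: string): boolean {
  const [header, payload, signature] = token.split('.');
  const expected = createHmac('sha256', secret)
    .update(`${header}.${payload}`)
    .digest('base64url'); // JWT signatures are Base64Url encoded
  return expected === signature;
}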


2. Access Token vs Refresh Token

| Feature | Access Token | Refresh Token |
| --- | --- | --- |
| Purpose | Authenticate API requests | Get new access tokens |
| Lifespan | Short (15 min - 1 hour) | Long (7 days - 30 days) |
| Stored in | Memory / localStorage | localStorage / httpOnly cookie |
| Sent with | Every API request | Only to /auth/refresh endpoint |
| If stolen | Limited damage (expires soon) | More dangerous (can get new tokens) |

Why Two Tokens?

Security vs User Experience tradeoff:

  1. If we only had long-lived tokens:

    • User stays logged in for weeks ✅
    • But if stolen, attacker has access for weeks ❌
  2. If we only had short-lived tokens:

    • If stolen, attacker only has 15 minutes ✅
    • But user must re-login every 15 minutes ❌
  3. With Access + Refresh tokens:

    • Access token expires quickly (15 min) - limits damage if stolen ✅
    • Refresh token gets new access tokens - user stays logged in ✅
    • Refresh token is only sent to ONE endpoint - smaller attack surface ✅

3. The Complete Authentication Flow

┌─────────────────────────────────────────────────────────────────────────────┐
│ AUTHENTICATION FLOW │
└─────────────────────────────────────────────────────────────────────────────┘

┌──────────┐ ┌──────────┐ ┌──────────┐
│ Browser │ │ Backend │ │ Database │
└────┬─────┘ └────┬─────┘ └────┬─────┘
│ │ │
│ 1. POST /auth/login │ │
│ { email, password } │ │
│──────────────────────────────>│ │
│ │ 2. Verify password │
│ │──────────────────────────────>│
│ │<──────────────────────────────│
│ │ │
│ │ 3. Generate tokens: │
│ │ - Access (15min) │
│ │ - Refresh (7 days) │
│ │ │
│ │ 4. Store refresh token │
│ │──────────────────────────────>│
│ │<──────────────────────────────│
│ │ │
│ 5. Return both tokens │ │
│<──────────────────────────────│ │
│ │ │
│ 6. Store in localStorage: │ │
│ auth_access_token │ │
│ auth_refresh_token │ │
│ │ │

4. Making Authenticated Requests

┌──────────┐                    ┌──────────┐                    ┌──────────┐
│ Browser │ │ Backend │ │ Database │
└────┬─────┘ └────┬─────┘ └────┬─────┘
│ │ │
│ GET /api/products │ │
│ Header: Authorization: │ │
│ Bearer eyJhbGci... │ │
│──────────────────────────────>│ │
│ │ │
│ │ 1. Extract token from header │
│ │ 2. Verify signature with │
│ │ JWT_SECRET │
│ │ 3. Check expiration │
│ │ 4. Extract user info │
│ │ │
│ │ If valid: │
│ ✅ Return products │ Get products for user │
│<──────────────────────────────│──────────────────────────────>│
│ │ │
│ │ If expired/invalid: │
│ ❌ 401 Unauthorized │ │
│<──────────────────────────────│ │
│ │ │

5. Token Refresh Flow (When Access Token Expires)

┌──────────┐                    ┌──────────┐                    ┌──────────┐
│ Browser │ │ Backend │ │ Database │
└────┬─────┘ └────┬─────┘ └────┬─────┘
│ │ │
│ GET /api/categories │ │
│ Bearer: [expired token] │ │
│──────────────────────────────>│ │
│ │ │
│ ❌ 401 Unauthorized │ Token expired! │
│<──────────────────────────────│ │
│ │ │
│ ┌─────────────────────────┐ │ │
│ │ Axios Interceptor │ │ │
│ │ catches 401, tries │ │ │
│ │ to refresh │ │ │
│ └─────────────────────────┘ │ │
│ │ │
│ POST /auth/refresh │ │
│ { refreshToken: "eyJ..." } │ │
│──────────────────────────────>│ │
│ │ 1. Find token in database │
│ │──────────────────────────────>│
│ │<──────────────────────────────│
│ │ │
│ │ 2. Verify not expired │
│ │ 3. Verify JWT signature │
│ │ 4. Generate NEW tokens │
│ │ 5. Delete OLD refresh token │
│ │ 6. Store NEW refresh token │
│ │──────────────────────────────>│
│ │ │
│ Return new tokens │ │
│<──────────────────────────────│ │
│ │ │
│ Store new tokens in │ │
│ localStorage │ │
│ │ │
│ RETRY original request │ │
│ GET /api/categories │ │
│ Bearer: [NEW access token] │ │
│──────────────────────────────>│ │
│ │ │
│ ✅ Return categories │ │
│<──────────────────────────────│ │
│ │ │

6. How It's Implemented in My Codebase

Backend (NestJS)

1. Generating Tokens - auth.service.ts

async generateTokens(user: any) {
  const payload = {
    sub: user.id, // Subject - who this token is for
    email: user.email,
    username: user.username,
    role: user.role,
  };

  // Access token - short lived (15 minutes)
  const accessToken = this.jwtService.sign(payload, {
    expiresIn: '15m',
  });

  // Refresh token - long lived (7 days)
  const refreshToken = this.jwtService.sign(payload, {
    expiresIn: '7d',
  });

  // Store refresh token in database (so we can invalidate it)
  await this.prisma.refreshToken.create({
    data: {
      token: refreshToken,
      userId: user.id,
      expiresAt: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000),
    },
  });

  return { accessToken, refreshToken, expiresIn: 900 };
}

2. JWT Guard - Protects routes

// jwt-auth.guard.ts
@Injectable()
export class JwtAuthGuard extends AuthGuard('jwt') {
  // Reflector (from @nestjs/core) lets us read the @Public() metadata
  constructor(private reflector: Reflector) {
    super();
  }

  canActivate(context: ExecutionContext) {
    // Check if route is marked @Public()
    const isPublic = this.reflector.get(IS_PUBLIC_KEY, context.getHandler());
    if (isPublic) return true; // Skip auth for public routes

    return super.canActivate(context); // Verify JWT
  }
}

3. JWT Strategy - Validates tokens

// jwt.strategy.ts
@Injectable()
export class JwtStrategy extends PassportStrategy(Strategy) {
  constructor() {
    super({
      jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(),
      secretOrKey: process.env.JWT_SECRET, // Same secret used to sign
    });
  }

  // Called after JWT is verified
  async validate(payload: any) {
    return {
      userId: payload.sub,
      email: payload.email,
      role: payload.role,
    };
  }
}
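
The refresh endpoint from the flow in section 5 isn't shown above. Here is a minimal sketch of the idea, reusing the generateTokens() method from earlier; the Prisma model and field names are assumptions based on that snippet, not the exact code.

// auth.service.ts (sketch) - rotate refresh tokens as described in section 5
async refreshTokens(refreshToken: string) {
  // 1. The token must still exist in the database (not logged out or already rotated)
  const stored = await this.prisma.refreshToken.findFirst({
    where: { token: refreshToken },
  });
  if (!stored || stored.expiresAt < new Date()) {
    throw new UnauthorizedException('Invalid refresh token');
  }

  // 2. The JWT signature and expiry must still be valid
  const payload = this.jwtService.verify(refreshToken);

  // 3. Rotate: delete the old token so it cannot be replayed
  await this.prisma.refreshToken.delete({ where: { id: stored.id } });

  // 4. Issue a fresh pair (generateTokens also stores the new refresh token)
  const user = await this.prisma.user.findUnique({ where: { id: payload.sub } });
  return this.generateTokens(user);
}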

Frontend (React/Next.js)

1. Storing Tokens After Login - authSlice.ts

const response = await authService.login(email, password);

// Store both tokens in localStorage
tokenManager.setTokens(response.accessToken, response.refreshToken);

2. Attaching Token to Every Request - axios-instance.ts

axiosInstance.interceptors.request.use((config) => {
  const token = tokenManager.getAccessToken();
  if (token) {
    config.headers.Authorization = `Bearer ${token}`;
  }
  return config;
});

3. Auto-Refresh on 401 - axios-instance.ts

axiosInstance.interceptors.response.use(
  (response) => response,
  async (error) => {
    const originalRequest = error.config;

    if (error.response?.status === 401 && !originalRequest._retry) {
      originalRequest._retry = true;

      const refreshToken = tokenManager.getRefreshToken();

      // Get new tokens (plain axios, so this call skips the interceptor)
      const { data } = await axios.post('/auth/refresh', { refreshToken });

      // Store new tokens
      tokenManager.setTokens(data.accessToken, data.refreshToken);

      // Retry the failed request with the new token
      originalRequest.headers.Authorization = `Bearer ${data.accessToken}`;
      return axiosInstance(originalRequest);
    }

    return Promise.reject(error);
  }
);
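
tokenManager is used throughout the frontend snippets but never shown; a minimal sketch, assuming the localStorage keys from the diagrams (auth_access_token, auth_refresh_token):

// token-manager.ts (sketch)
export const tokenManager = {
  setTokens(accessToken: string, refreshToken: string) {
    localStorage.setItem('auth_access_token', accessToken);
    localStorage.setItem('auth_refresh_token', refreshToken);
  },
  getAccessToken(): string | null {
    return localStorage.getItem('auth_access_token');
  },
  getRefreshToken(): string | null {
    return localStorage.getItem('auth_refresh_token');
  },
  clearTokens() {
    localStorage.removeItem('auth_access_token');
    localStorage.removeItem('auth_refresh_token');
  },
};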

7. The Relationship Diagram

┌─────────────────────────────────────────────────────────────────────────────┐
│ JWT ECOSYSTEM │
└─────────────────────────────────────────────────────────────────────────────┘

┌─────────────────┐
│ JWT_SECRET │
│ (in .env file) │
└────────┬────────┘

┌──────────────────┼──────────────────┐
│ │ │
▼ ▼ ▼
┌────────────────┐ ┌────────────────┐ ┌────────────────┐
│ Sign Tokens │ │ Verify Tokens │ │ Decode Tokens │
│ (on login) │ │ (on requests) │ │ (get payload) │
└────────┬───────┘ └────────────────┘ └────────────────┘

┌───────────┴───────────┐
▼ ▼
┌───────────────┐ ┌───────────────┐
│ ACCESS TOKEN │ │ REFRESH TOKEN │
│ (15 minutes) │ │ (7 days) │
└───────┬───────┘ └───────┬───────┘
│ │
│ │
▼ ▼
┌───────────────────┐ ┌───────────────────┐
│ localStorage │ │ localStorage │
│ auth_access_token │ │ auth_refresh_token│
└───────────────────┘ └─────────┬─────────┘


┌───────────────────┐
│ DATABASE │
│ RefreshToken │
│ table (for │
│ invalidation) │
└───────────────────┘


┌─────────────────────────────────────────────────────────────────────────────┐
│ REQUEST LIFECYCLE │
└─────────────────────────────────────────────────────────────────────────────┘

User Action Frontend Backend DB
│ │ │ │
│ Click "Create" │ │ │
│──────────────────>│ │ │
│ │ │ │
│ │ 1. Get access token │ │
│ │ from localStorage │ │
│ │ │ │
│ │ 2. Add to header: │ │
│ │ Authorization: │ │
│ │ Bearer eyJ... │ │
│ │ │ │
│ │ POST /api/categories │ │
│ │─────────────────────────>│ │
│ │ │ │
│ │ │ 3. JwtAuthGuard │
│ │ │ - Extract token │
│ │ │ - Verify with │
│ │ │ JWT_SECRET │
│ │ │ - Check exp │
│ │ │ │
│ │ │ 4. @CurrentUser() │
│ │ │ - Get user from │
│ │ │ token payload │
│ │ │ │
│ │ │ 5. Create category │
│ │ │─────────────────────>│
│ │ │<─────────────────────│
│ │ │ │
│ │ ✅ 201 Created │ │
│ Show success │<─────────────────────────│ │
│<──────────────────│ │ │
│ │ │ │

8. Security Best Practices

| Practice | Why | Your Code |
| --- | --- | --- |
| Short access token expiry | Limits damage if stolen | ✅ 15 minutes |
| Store refresh token in DB | Can invalidate on logout | ✅ RefreshToken table |
| Rotate refresh tokens | Old tokens become invalid | ✅ Delete old, create new |
| Use HTTPS | Prevents token interception | ⚠️ Use in production |
| HttpOnly cookies | Prevents XSS theft | ❌ Using localStorage |
| Strong JWT_SECRET | Prevents token forgery | ⚠️ Change in production |

9. Common Questions

Q: Why store refresh token in database? A: So you can invalidate it on logout. Without DB storage, the token is valid until expiration.

Q: Can I decode a JWT without the secret? A: Yes! The payload is just Base64Url encoded, so anyone can read it. But you can't verify the signature without the secret.
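
A tiny Node sketch of decoding without verifying:

// Anyone holding a token can read its payload - no secret involved.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const [, payload] = token.split('.');
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}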

Q: What happens if someone steals my refresh token? A: They can keep getting new access tokens until you log out (which deletes the refresh token from the DB).

Q: Why not just use sessions? A: JWTs are stateless - server doesn't need to store session data. Better for scaling and microservices.

Spanner - Google's Solution for Distributed Databases, Made Easier

· 5 min read
Abishek Neupane
Self Learner

Overview: Spanner is Google's scalable, globally-distributed database that combines the benefits of traditional relational databases with massive scalability across datacenters worldwide.

Key Features:

  1. Global Distribution & Replication

    • Shards data across multiple datacenters globally using Paxos consensus
    • Automatic failover and resharding
    • Configurable replication (typically 3-5 replicas)
  2. TrueTime API

    • Novel time API that explicitly exposes clock uncertainty
    • Uses GPS and atomic clocks for synchronization
    • Typical uncertainty (ε) is 1-7ms
    • Enables strong consistency guarantees
  3. Transaction Support

    • External consistency (linearizability) for distributed transactions
    • Lock-free snapshot transactions
    • Non-blocking reads in the past
    • Two-phase commit across Paxos groups
  4. Data Model

    • Semi-relational tables with SQL-like query language
    • Hierarchical schema with directories as units of data placement
    • Versioned key-value storage with automatic timestamps

Performance Highlights:

  • Write latency: ~15ms (including commit wait)
  • Snapshot read latency: ~1.4ms
  • Successfully handles datacenter failures with minimal disruption
  • First major customer (F1 - Google's advertising backend) migrated from sharded MySQL

Significance: First system to provide external consistency and globally-consistent reads at global scale, making it suitable for applications requiring strong consistency across continents.

1. The Basic Problem Spanner Solves

Imagine you have a banking app used worldwide. You need:

  • Data everywhere: Users in Japan and USA both need fast access
  • Consistency: If someone transfers money, both accounts must update correctly
  • Availability: System must work even if a datacenter fails

Traditional databases struggle with this. Spanner solves it.

2. Core Concepts

A. Data Distribution (Sharding)

Think of your data like a library:

  • Instead of one giant building (single server), you have multiple branches (servers) worldwide
  • Each branch holds certain books (data shards)
  • Users go to their nearest branch for faster service
User Data Split Across Servers:
Server 1 (USA): Users A-M
Server 2 (Europe): Users N-Z
Server 3 (Asia): Backup copies

B. Replication (Copies)

  • Each piece of data exists in multiple locations (typically 3-5 copies)
  • If one datacenter fails, others keep working
  • Like having backup copies of important documents in different safe deposit boxes

C. Paxos Consensus

This is how servers agree on what data is correct:

Simple Example:
1. Server A wants to write "Balance = $100"
2. It asks other servers: "Can I write this?"
3. Majority (3 out of 5) must agree
4. Once majority agrees, write is committed
5. All servers eventually get updated

3. The TrueTime Innovation

This is Spanner's "secret sauce" for handling time across the globe.

The Time Problem:

Server in USA: "Transaction happened at 10:00:00"
Server in Japan: "Transaction happened at 10:00:01"
But which really came first? Clocks aren't perfectly synchronized!

TrueTime Solution:

Instead of saying "it's exactly 10:00:00", TrueTime says:

"It's between 10:00:00.003 and 10:00:00.007"
(uncertainty range)

How it works:

  • Uses GPS satellites and atomic clocks
  • Admits uncertainty explicitly
  • Waits out uncertainty when needed for correctness
Example:
Transaction 1 commits at time [10:00:00.000 - 10:00:00.007]
Spanner waits until 10:00:00.007 passes
Then Transaction 2 can start at [10:00:00.008 - 10:00:00.015]
Now we KNOW Transaction 1 finished before Transaction 2 started!
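
To make commit wait concrete, here is a toy TypeScript sketch (my own illustration, not Spanner's API): TT.now() returns an interval, the commit takes the interval's latest bound as its timestamp, and the leader waits until that bound has definitely passed before acknowledging.

interface TTInterval {
  earliest: number; // real time is definitely after this
  latest: number;   // real time is definitely before this
}

// Hypothetical clock with a known uncertainty bound epsilonMs (GPS + atomic
// clocks keep epsilon at a few milliseconds in the real system).
function ttNow(epsilonMs: number): TTInterval {
  const now = Date.now();
  return { earliest: now - epsilonMs, latest: now + epsilonMs };
}

// Commit wait: assign timestamp = TT.now().latest, then wait until
// TT.now().earliest > timestamp. Total wait is roughly 2 * epsilon, after
// which the timestamp is guaranteed to be in the past for every observer.
async function commitWait(epsilonMs: number): Promise<number> {
  const commitTs = ttNow(epsilonMs).latest;
  while (ttNow(epsilonMs).earliest <= commitTs) {
    await new Promise((resolve) => setTimeout(resolve, 1));
  }
  return commitTs;
}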

4. Transaction Types

Read-Write Transactions (Strong Consistency)

Example: Transfer $50 from Account A to Account B

1. Lock both accounts
2. Read current balances
3. Calculate new balances
4. Get timestamp from TrueTime
5. Write changes to multiple servers (Paxos)
6. Wait for commit-wait (ensures ordering)
7. Release locks

Snapshot Transactions (Read-Only, No Locks)

Example: Generate monthly report

1. Pick a timestamp in the past (e.g., "end of last month")
2. Read data as it existed at that exact time
3. No locks needed - just reading history
4. Can read from any replica that's up-to-date

5. Data Organization

Directories

Think of directories as folders that keep related data together:

Directory for User "Alice":
├── Profile Info
├── Purchase History
└── Preferences

This whole directory:
- Stays together on same servers
- Moves as a unit if needed
- Can be replicated to different locations

Schema Example

-- Users table
CREATE TABLE Users {
  user_id INT64,
  email STRING,
  name STRING
} PRIMARY KEY (user_id);

-- Orders table (nested under Users)
CREATE TABLE Orders {
  user_id INT64,
  order_id INT64,
  amount FLOAT
} PRIMARY KEY (user_id, order_id),
  INTERLEAVE IN PARENT Users;

The INTERLEAVE means orders are stored physically close to their user - faster queries!

6. How a Write Works (Step-by-Step)

User wants to update their email:

1. REQUEST arrives at nearest Spanner server

2. SERVER identifies which Paxos group owns this data

3. LEADER of that group receives the request

4. ACQUIRE LOCKS on the data

5. PROPOSE change to other replicas via Paxos
"I want to write: user_123.email = '[email protected]'"

6. REPLICAS vote (need majority to agree)

7. TIMESTAMP assigned using TrueTime
timestamp = TT.now().latest

8. COMMIT WAIT - wait until TT.after(timestamp) is true
(ensures external consistency)

9. APPLY the change to all replicas

10. RELEASE LOCKS

11. RETURN success to user

7. Key Benefits

| Feature | What It Means |
| --- | --- |
| External Consistency | Transactions appear in the order they actually happened in real time |
| Global Availability | Works even if entire datacenters fail |
| SQL Support | Use familiar database queries |
| Automatic Scaling | Add servers without downtime |
| Strong Consistency | No "eventual consistency" surprises |

8. Real-World Example: Google's F1

Google's advertising system uses Spanner:

  • Before: Manually sharded MySQL (nightmare to manage)
  • After: Spanner handles sharding automatically
  • Scale: Tens of terabytes, millions of operations/second
  • Replication: 5 replicas across US (2 west coast, 3 east coast)

Understanding Transactions and Locking in PostgreSQL

· 7 min read
Abishek Neupane
Self Learner

This guide provides a comprehensive overview of transactions and locking mechanisms in PostgreSQL. It covers transaction basics, error handling, savepoints, transactional DDLs, and basic locking with Multi-Version Concurrency Control (MVCC); the output of each example is included to illustrate the behavior.

Table of Contents

  1. Working with PostgreSQL Transactions
  2. Handling Errors Inside a Transaction
  3. Using SAVEPOINT
  4. Transactional DDLs
  5. Understanding Basic Locking and MVCC

Working with PostgreSQL Transactions

In PostgreSQL, every operation is part of a transaction, even single statements. Transactions ensure data consistency and integrity.

Key Functions for Timestamps

  • now(): Returns the same timestamp for all statements within a transaction, ensuring consistency.

    • Example: Useful for logging actions with a consistent timestamp.
    • SELECT now(), now();
      Output:
                 now                |               now
      ----------------------------------+----------------------------------
      2025-08-14 13:05:37.943104+05:45 | 2025-08-14 13:05:37.943104+05:45
      (1 row)
      • Both calls to now() return the same timestamp within a single transaction.
  • clock_timestamp(): Returns the real-time timestamp for each call, even within the same transaction.

    • Example: Useful for measuring query execution time.
    • SELECT clock_timestamp();
      Output:
               clock_timestamp
      ----------------------------------
      2025-08-14 13:06:42.247353+05:45
      (1 row)

Explicit Transactions

To group multiple statements into a single transaction, use the BEGIN, COMMIT, and ROLLBACK commands:

  • BEGIN: Starts a transaction.

  • COMMIT or END: Commits the transaction, making changes permanent.

  • ROLLBACK or ABORT: Discards changes made in the transaction.

  • Example:

    BEGIN;
    SELECT now();
    SELECT now();
    COMMIT;

    Output:

    BEGIN
    now
    ----------------------------------
    2025-08-14 13:11:50.579118+05:45
    (1 row)

    now
    ----------------------------------
    2025-08-14 13:11:50.579118+05:45
    (1 row)

    COMMIT
    • Both now() calls return the same timestamp, as they are part of the same transaction.

Read-Only Transactions

Set a transaction to read-only to prevent modifications:

  • SHOW transaction_read_only;
    BEGIN TRANSACTION READ ONLY;
    SELECT 1;
    COMMIT AND CHAIN;
    SHOW transaction_read_only;
    COMMIT;

    Output:

     transaction_read_only
    -----------------------
    off
    (1 row)

    BEGIN
    ?column?
    ----------
    1
    (1 row)

    COMMIT
    transaction_read_only
    -----------------------
    on
    (1 row)

    COMMIT
    • Initially, transaction_read_only is off (read/write mode).
    • BEGIN TRANSACTION READ ONLY sets it to on.
    • COMMIT AND CHAIN commits the transaction and starts a new one with the same read-only property.

To get help for commands in psql:

  • \h COMMIT;

    Output:

    Command:     COMMIT
    Description: commit the current transaction
    Syntax:
    COMMIT [ WORK | TRANSACTION ] [ AND [ NO ] CHAIN ]
    URL: https://www.postgresql.org/docs/16/sql-commit.html
  • \h ROLLBACK;

    Output:

    Command:     ROLLBACK
    Description: abort the current transaction
    Syntax:
    ROLLBACK [ WORK | TRANSACTION ] [ AND [ NO ] CHAIN ]
    URL: https://www.postgresql.org/docs/16/sql-rollback.html

Handling Errors Inside a Transaction

If an error occurs within a transaction, PostgreSQL aborts the transaction, and subsequent commands are ignored until the transaction is rolled back or committed.

  • Example:

    BEGIN;
    SELECT 1;
    SELECT 1/0;
    SELECT 2;
    COMMIT;

    Output:

    BEGIN
    ?column?
    ----------
    1
    (1 row)

    ERROR: division by zero
    ERROR: current transaction is aborted, commands ignored until end of transaction block
    ROLLBACK
    • The division by zero error aborts the transaction.
    • The subsequent SELECT 2 is ignored until ROLLBACK resets the transaction state.
    • Use ROLLBACK to discard changes and recover from the error.

Using SAVEPOINT

Savepoints allow partial rollbacks within a transaction, enabling recovery from errors without discarding all changes.

  • Example:

    BEGIN;
    SELECT 1;
    SAVEPOINT a;
    SELECT 2/0;
    ROLLBACK TO SAVEPOINT a;
    SELECT 2;
    SELECT 3;
    COMMIT;

    Output:

    BEGIN
    ?column?
    ----------
    1
    (1 row)

    SAVEPOINT
    ERROR: division by zero
    ERROR: current transaction is aborted, commands ignored until end of transaction block
    ROLLBACK
    ?column?
    ----------
    2
    (1 row)

    ?column?
    ----------
    3
    (1 row)

    COMMIT
    • SAVEPOINT a marks a point in the transaction.
    • After the division by zero error, ROLLBACK TO SAVEPOINT a reverts to the state at a.
    • Subsequent commands (SELECT 2, SELECT 3) execute successfully.
    • Use RELEASE SAVEPOINT <name> to remove a savepoint if no longer needed.

Transactional DDLs

PostgreSQL supports transactional Data Definition Language (DDL) operations, meaning DDL commands (e.g., CREATE TABLE, ALTER TABLE) can be rolled back if the transaction fails.

  • Example:

    BEGIN;
    CREATE TABLE testme (id int);
    ALTER TABLE testme ALTER COLUMN id TYPE int8;
    CREATE TABLE testme2 (name VARCHAR(20));
    ALTER TABLE testme2 ALTER COLUMN name TYPE VARCHAR(50);
    \d testme;
    \d testme2;
    ROLLBACK;
    \d testme;

    Output:

    BEGIN
    CREATE TABLE
    ALTER TABLE
    CREATE TABLE
    ALTER TABLE
    Table "public.testme"
    Column | Type | Collation | Nullable | Default
    --------+--------+-----------+----------+---------
    id | bigint | | |

    Table "public.testme2"
    Column | Type | Collation | Nullable | Default
    --------+-----------------------+-----------+----------+---------
    name | character varying(50) | | |

    ROLLBACK
    Did not find any relation named "testme".
    • The tables testme and testme2 are created and modified within the transaction.
    • After ROLLBACK, both tables are removed, as the transaction was not committed.
    • The final \d testme; confirms the table no longer exists.

Understanding Basic Locking and MVCC

PostgreSQL uses Multi-Version Concurrency Control (MVCC) to handle concurrent transactions, ensuring that read operations do not block write operations and vice versa.

Example: Concurrent Read and Write

  • Transaction 1:

    BEGIN;
    SELECT * FROM testme;
    UPDATE testme SET id = id + 1 RETURNING *;
    COMMIT;

    Output:

    BEGIN
    id
    ----
    0
    (1 row)

    id
    ----
    1
    (1 row)

    UPDATE 1
    COMMIT
  • Transaction 2 (concurrent):

    BEGIN;
    SELECT * FROM testme;
    COMMIT;

    Output:

    BEGIN
    id
    ----
    0
    (1 row)

    COMMIT
    • Transaction 2 sees the data as it was before Transaction 1’s update (id = 0) due to MVCC.
    • Write transactions (e.g., UPDATE) do not block read transactions (e.g., SELECT).

Concurrent Updates

When multiple transactions update the same table:

  • Transaction 1:

    BEGIN;
    UPDATE testme SET id = id + 1 RETURNING *;
    UPDATE testme SET id = id + 1 RETURNING *;
    COMMIT;

    Output:

    BEGIN
    id
    ----
    2
    (1 row)

    UPDATE 1
    id
    ----
    3
    (1 row)

    UPDATE 1
    COMMIT
  • Transaction 2 (after Transaction 1 commits):

    BEGIN;
    UPDATE testme SET id = id + 1 RETURNING *;
    COMMIT;

    Output:

    BEGIN
    id
    ----
    4
    (1 row)

    UPDATE 1
    COMMIT
    • Transaction 2 updates the row after Transaction 1 commits, incrementing id to 4.
    • PostgreSQL locks only the rows affected by an UPDATE, allowing concurrent updates on different rows in the same table.
    • For example, with 1,000 rows, 1,000 concurrent updates can occur on different rows without conflict.

Additional Example

  • Transaction:
    BEGIN;
    SELECT * FROM testme;
    Output:
    BEGIN
    id
    ----
    1
    (1 row)

Additional Notes

  • Use \h <command> in psql to get help for commands like SELECT, COMMIT, or ROLLBACK.
  • PostgreSQL’s MVCC ensures high concurrency by maintaining multiple versions of data, allowing readers to see a consistent snapshot without waiting for writers.
  • Always commit or roll back transactions explicitly to avoid leaving connections in an aborted state.
  • The note "aba jaba, transaction 1 commit vayo, ani transaction 2 update huncha" translates to "now then, transaction 1 is committed, and transaction 2 updates" in Nepali, indicating the sequence of commits and updates.

For more details, refer to the PostgreSQL documentation.

Database Comparison: PostgreSQL, MySQL, Oracle, and MS SQL Server

· 10 min read
Abishek Neupane
Self Learner

This project explores database options for small to medium-sized enterprises (SMEs), startups, and academic projects in Nepal, with a focus on building a scalable accounting system. I’ve compared popular databases and implemented an accounting system using PostgreSQL, including a Python script to generate and load 10,000 transaction records. This is tailored for contexts like ABC Warehouse, where I study, to estimate storage and performance needs.


Why This Project?

In Nepal, startups, colleges, and SMEs need cost-effective, scalable databases for applications like accounting systems. I’ve compared PostgreSQL, MySQL, MS SQL Server, and Oracle, and built a practical accounting system with storage estimates and performance tests. PostgreSQL is my top choice due to its power, flexibility, and zero licensing cost, perfect for Nepali use cases.


Database Comparison

Here’s a friendly breakdown of the databases I considered, with pros, cons, and use cases for Nepali users:

PostgreSQL

“The modern, feature-rich open-source option — great for new, technically-savvy teams.”

  • Pros:
    • No licensing cost, ideal for startups and colleges.
    • Advanced SQL features (window functions, CTEs, partitioning).
    • JSONB for flexible schemas (SQL + NoSQL).
    • PostGIS for geospatial apps (e.g., nearest hospital or delivery routing).
    • Strong concurrency with MVCC (Multi-Version Concurrency Control) for heavy read/write apps.
    • Easy to deploy on VPS or managed services (AWS RDS, DigitalOcean, Aiven).
  • Cons: Slightly steeper learning curve than MySQL for admin tasks; less common in some regulated industries (e.g., finance/healthcare).
  • Use When: You want power without cost, especially for modern web apps, analytics, or geospatial projects.

MySQL (Community)

“Simple, common, and cheap — great for small or legacy web apps.”

  • Pros: Easy to find cheap hosting; widely used in PHP-based apps (LAMP stack).
  • Cons: Fewer advanced features compared to PostgreSQL or Oracle; often limited to simpler or legacy systems.
  • Use When: You’re on shared hosting or building simple PHP-based apps with minimal cost.

MS SQL Server

“The comfortable, enterprise choice for Windows/.NET shops.”

  • Pros: Easy to learn, great GUI tools (SSMS), automation-friendly, common in business settings.
  • Cons: Expensive licensing for large deployments (Express edition is free for small projects); often tied to Windows Server costs.
  • Use When: Your organization uses the Microsoft stack or needs fast onboarding for DBAs.

Oracle

“The power tool — huge feature set and top pay, but steep learning and cost.”

  • Pros: Extremely powerful for large, mission-critical systems; top-tier enterprise features.
  • Cons: Expensive and complex licensing; steep learning curve; often requires dedicated DBAs.
  • Use When: Large enterprises (banks, telecoms, ERP systems) with big budgets and existing Oracle setups.

Recommendation for Nepali Context: I recommend PostgreSQL for most startups, colleges, and SMEs due to its power, flexibility, and no licensing cost. Use MySQL for cheap hosting or legacy PHP apps, MS SQL Server for Microsoft-heavy environments, and Oracle only for large enterprise needs.


Accounting System Database Design

I designed a simple accounting system for a college like ABC Warehouse, focusing on scalability and performance. The system tracks transactions, students, invoices, and invoice line items. Below is the schema and storage estimation.

Schema Overview

Transactions Table

Stores journal entries for accounting (e.g., date, particulars, credit, debit).

CREATE TABLE transactions (
  id SERIAL PRIMARY KEY,
  date DATE NOT NULL,
  particulars VARCHAR(200) NOT NULL,
  credit NUMERIC(12,2) DEFAULT 0,
  debit NUMERIC(12,2) DEFAULT 0,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Students Table

Stores student information for invoicing.

CREATE TABLE students (
  student_id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  email TEXT,
  phone VARCHAR(15),
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Invoices Table

Tracks invoices issued to students.

CREATE TABLE invoices (
  invoice_id SERIAL PRIMARY KEY,
  student_id INT REFERENCES students(student_id),
  issue_date DATE NOT NULL,
  due_date DATE NOT NULL,
  total_amount NUMERIC(12,2) NOT NULL,
  paid_amount NUMERIC(12,2) DEFAULT 0,
  balance_due NUMERIC(12,2) GENERATED ALWAYS AS (total_amount - paid_amount) STORED,
  status VARCHAR(20) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Invoice Line Items Table

Details items in each invoice (e.g., tuition fees, lab fees).

CREATE TABLE invoice_line_items (
  line_id SERIAL PRIMARY KEY,
  invoice_id INT REFERENCES invoices(invoice_id),
  item_name VARCHAR(50) NOT NULL,
  description TEXT,
  quantity INT NOT NULL,
  unit_price NUMERIC(12,2) NOT NULL,
  amount NUMERIC(12,2) GENERATED ALWAYS AS (quantity * unit_price) STORED,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Storage Estimation

I estimated storage needs for the accounting system based on 10 transactions/day, 10 students/day, and 10 invoices/day with 3 line items per invoice. Here’s the breakdown:

Transactions Table

  • Fields: id (4 bytes), date (4 bytes), particulars (200 bytes), credit (8 bytes), debit (8 bytes), created_at (8 bytes), updated_at (8 bytes).
  • Total per row: 240 bytes + 27 bytes (PostgreSQL row overhead) ≈ 267 bytes.
  • Daily: 10 transactions × 267 bytes ≈ 2.61 KB.
  • Monthly: 30 days × 2.61 KB ≈ 78 KB.
  • Yearly: 360 days × 2.61 KB ≈ 940 KB.

Students Table

  • Fields: student_id (4 bytes), name (100 bytes), email (50 bytes), phone (15 bytes), created_at (8 bytes), updated_at (8 bytes).
  • Total per row: 185 bytes + 27 bytes (overhead) ≈ 212 bytes.
  • Daily: 10 students × 212 bytes ≈ 2.07 KB.
  • Monthly: 30 days × 2.07 KB ≈ 62.1 KB.
  • Yearly: 360 days × 2.07 KB ≈ 746 KB.

Invoices Table

  • Fields: invoice_id (4 bytes), student_id (4 bytes), issue_date (4 bytes), due_date (4 bytes), total_amount (8 bytes), paid_amount (8 bytes), balance_due (8 bytes), status (20 bytes), created_at (8 bytes), updated_at (8 bytes).
  • Total per row: 76 bytes + 23 bytes (overhead) ≈ 99 bytes.
  • Daily: 10 invoices × 99 bytes ≈ 0.97 KB.
  • Monthly: 30 days × 0.97 KB ≈ 29.1 KB.
  • Yearly: 360 days × 0.97 KB ≈ 349 KB.

Invoice Line Items Table

  • Fields: line_id (4 bytes), invoice_id (4 bytes), item_name (50 bytes), description (200 bytes), quantity (4 bytes), unit_price (8 bytes), amount (8 bytes), created_at (8 bytes), updated_at (8 bytes).
  • Total per row: 294 bytes + 27 bytes (overhead) ≈ 321 bytes.
  • Daily: 10 invoices × 3 line items × 321 bytes ≈ 9.4 KB.
  • Monthly: 30 days × 9.4 KB ≈ 282 KB.
  • Yearly: 360 days × 9.4 KB ≈ 3.3 MB.

Total Storage

  • Daily: 2.61 KB (transactions) + 2.07 KB (students) + 0.97 KB (invoices) + 9.4 KB (line items) ≈ 15 KB/day.
  • Monthly: 15 KB × 30 ≈ 450 KB/month.
  • Yearly: 15 KB × 360 ≈ 5.3 MB/year (sanity-checked in the snippet below).
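
As a quick check on the arithmetic, here is a tiny TypeScript snippet using the per-row estimates above:

// Back-of-the-envelope totals from the per-row sizes estimated above.
const tables = [
  { name: 'transactions', rowBytes: 267, rowsPerDay: 10 },
  { name: 'students', rowBytes: 212, rowsPerDay: 10 },
  { name: 'invoices', rowBytes: 99, rowsPerDay: 10 },
  { name: 'invoice_line_items', rowBytes: 321, rowsPerDay: 30 }, // 10 invoices x 3 items
];

const bytesPerDay = tables.reduce((sum, t) => sum + t.rowBytes * t.rowsPerDay, 0);
console.log((bytesPerDay / 1024).toFixed(2), 'KB/day');                 // ~15 KB
console.log(((bytesPerDay * 30) / 1024).toFixed(0), 'KB/month');        // ~450 KB
console.log(((bytesPerDay * 360) / 1024 / 1024).toFixed(1), 'MB/year'); // ~5.3 MB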

Scalability Notes

  • PostgreSQL’s MVCC: Handles high concurrency without locking, ideal for heavy read/write workloads (e.g., frequent transactions at a college).
  • Storage Growth: Even at roughly 3,600 transactions a year (10/day), the database grows by only ~5.3 MB/year, easily manageable on a VPS or managed service.
  • Performance: Importing 100,000 transactions took ~204 ms, showing PostgreSQL’s efficiency for moderate workloads.
  • Balance Sheet Queries: Use SQL aggregates (e.g., SUM(credit), SUM(debit)) to generate balance sheets or trial balances.

Generating and Loading Data with Python

To test the system’s performance, I wrote a Python script to generate 10,000 transaction records and load them into the transactions table. The script creates a tab-delimited CSV file (transactions.txt) and uses PostgreSQL’s \copy command to import the data efficiently.

Python Script for Data Generation

import csv
import random
from datetime import datetime, timedelta

# Generate 10,000 transaction records
num_records = 10000
start_date = datetime(2025, 1, 1)
particulars_list = ["Tuition Fee", "Lab Fee", "Library Fee", "Exam Fee", "Miscellaneous"]

with open('/tmp/transactions.txt', 'w', newline='') as f:
    writer = csv.writer(f, delimiter='\t')
    writer.writerow(['date', 'particulars', 'credit', 'debit'])  # CSV header
    for i in range(num_records):
        date = (start_date + timedelta(days=random.randint(0, 365))).strftime('%Y-%m-%d')
        particulars = random.choice(particulars_list)
        credit = round(random.uniform(0, 5000), 2) if random.random() > 0.5 else 0
        debit = round(random.uniform(0, 5000), 2) if credit == 0 else 0
        writer.writerow([date, particulars, credit, debit])

print(f"Generated {num_records} transaction records in /tmp/transactions.txt")

Loading Data into PostgreSQL

I ran into some permission issues when trying to load the data initially (e.g., Permission denied for /home/virtualabishek/Desktop/code/learning/postgres-notes/transactions.txt). To fix this, I moved the file to /tmp/ and used the \copy command, which is client-side and avoids server permission issues:

\copy transactions(date, particulars, credit, debit) FROM '/tmp/transactions.txt' DELIMITER E'\t' CSV HEADER;

This successfully loaded 100,000 records (I tested with a larger dataset to stress-test), taking ~204 ms, as shown in my terminal output:

COPY 100000
Time: 204.345 ms

Lessons Learned

  • COPY vs. \copy: I initially tried COPY, but got a Permission denied error because it runs server-side. Switching to \copy resolved this.
  • File Permissions: Moving the file to /tmp/ avoided permission issues, as /tmp/ is accessible to the PostgreSQL client.
  • Performance: Loading 100,000 records in ~204 ms shows PostgreSQL’s efficiency for bulk imports.

Why PostgreSQL for This Project?

I chose PostgreSQL because:

  • No License Cost: Perfect for budget-conscious Nepali startups and colleges.
  • Rich Data Types: Supports JSONB for flexible schemas, GEOMETRY for geospatial queries (e.g., campus mapping), and NUMERIC for precise accounting.
  • Concurrency: MVCC ensures readers and writers don’t block each other, ideal for busy systems.
  • Extensibility: PostGIS for geospatial apps, full-text search, and more.
  • Deployment: Easy to run on a VPS or use managed providers (AWS RDS, Aiven).

Setup Instructions

  1. Install PostgreSQL:

    • On Ubuntu: sudo apt update && sudo apt install postgresql postgresql-contrib
    • Verify: psql --version
  2. Create Database:

    sudo -u postgres createdb accountdb
  3. Set Up Tables:

    • Connect: psql -d accountdb
    • Run the CREATE TABLE statements above for transactions, students, invoices, and invoice_line_items.
  4. Generate Data:

    • Run the Python script above to create /tmp/transactions.txt with 10,000 records.
  5. Import Data:

    • Import:
      psql -d accountdb -c "\copy transactions(date, particulars, credit, debit) FROM '/tmp/transactions.txt' DELIMITER E'\t' CSV HEADER"
  6. Check Database Size:

    SELECT pg_size_pretty(pg_database_size(current_database()));

    My test showed ~18 MB for the database with 100,000 transaction records.

  7. Monitor Performance:

    • Enable timing: \timing on
    • Run imports or queries to measure execution time (e.g., ~204 ms for 100,000 records).

Example Queries

  • List Transactions:

    SELECT * FROM transactions LIMIT 10;
  • Calculate Balance Sheet:

    SELECT SUM(credit) AS total_credit, SUM(debit) AS total_debit
    FROM transactions
    WHERE date BETWEEN '2025-01-01' AND '2025-12-31';
  • Find Unpaid Invoices:

    SELECT i.invoice_id, s.name, i.total_amount, i.balance_due
    FROM invoices i
    JOIN students s ON i.student_id = s.student_id
    WHERE i.status = 'UNPAID';

Future Improvements

  • Add indexes on frequently queried columns (e.g., date, student_id) for faster searches.
  • Implement triggers to auto-update updated_at timestamps.
  • Use JSONB for flexible transaction metadata (e.g., store tax info).
  • Explore PostGIS for campus-related geospatial queries (e.g., mapping student locations).
  • Optimize the Python script to generate larger datasets or simulate realistic transaction patterns.

Final Note

I built this accounting system to be lightweight (roughly 5 MB/year at 10 transactions/day), scalable, and cost-effective using PostgreSQL. The Python script made it easy to generate and load 10,000 records, and PostgreSQL's performance (~204 ms for 100,000 records) proves it's ready for real-world use. Let's keep building awesome tech for Nepal!

Village Realities: A Four-Day Revelation

· 3 min read
Abishek Neupane
Self Learner

Okay, so here's the real talk: I always imagined villages as disciplined, polite, and honest places. But after a four-day tour to a rural area of Nepal, I've come to realize that villages can actually be more challenging than urban areas. It was quite a shock; everyone seemed to be comparing themselves to each other and constantly getting into conflicts. I stayed in more than six different houses, and almost every single one had some kind of underlying problem.

Let me share some profound truths I learned from an 82-year-old man:

  • Manage your expectations and always act with good behavior. (This is something I already knew, but it was a powerful reminder.)
  • God always provides a way. If you face a problem in one area, a solution will emerge in another. For instance, if someone lacks sight, their other senses become incredibly sharp.
  • Steer clear of political parties. They're often just trouble. Just vote for who you believe in.
  • Choose your companions wisely. Always surround yourself with good people, even if it means being alone sometimes.
  • Never pursue love or marriage based solely on phone interactions. It's often an illusion; the real world is entirely different.
  • Your mother is paramount. She carried you for nine months, enduring so much, and then showed you the world. Cherish her.
  • He also taught me a lot about Karma.

Tech Wisdom from a Nonagenarian

Perhaps the most fascinating encounter was with an 85-year-old man who knew an incredible amount about computers – almost as much as a computer science student! He shared insights into computer evolution and even knew QBASIC, a language that's practically ancient now. He explained different types of printers and emphasized that to truly understand technology, "you have to know everything about everything. You should know instructions and all."

He also gave me valuable advice on how to approach answers, especially in exams. He stressed the importance of incorporating modern trends to show the examiner you're well-read. For example, if you're writing about computer viruses, you should include examples of current virus names. He also touched upon versioning, explaining how different versions of software like Word or Excel (e.g., 2016) can complicate things. He gave a relatable example: we were taught what PowerPoint is and how it works, but we were rarely shown any practical demonstrations.

This led to his core message: "Practical is the way. Just focus on practical rather than theory." I completely agree with him on this. Thank you, Buwa, for these invaluable lessons.

From Chaos to Clarity

· 3 min read
Abishek Neupane
Self Learner

Lately, I’ve been working on a project I never thought would see the light of day. The tech stack was completely different from anything I’d tackled before. I built a portal using PHP, with the entire UI crafted in HTML. Coming from a Node.js background, I was out of my comfort zone, unsure of what to do and what to avoid. I promised myself I wouldn’t be that developer who sticks to just one tech stack—especially in today’s world, where AI makes it possible for anyone to create almost anything. Still, the process wasn’t easy. I wrestled with a ton of PHP libraries—installing, importing, and figuring out how to use them.

For the PHP backend, I went old-school. I handled routing with a switch statement, manually creating all the routes and APIs from scratch. Where I really struggled was the file structure. In my Node.js projects, I prided myself on clean, organized files. But with this project? Total chaos. Making the UI dynamic added another layer of confusion. I was second-guessing every decision, unsure of what would work.

I finished the project about a month ago, and to my surprise, I recently spoke with the stakeholders, and they want to deploy it. I was shocked. I know the portal works, but I’m not fully confident in the UI or some other parts. If I’d used React with something like Shadcn, I’d feel much better about it. But they saw the demo, liked it, and are ready to move forward. I was floored—part of me was like, “What the heck?” What if I need to add more features? What if it’s deployed and I can’t make changes easily? What if the project grows and becomes too complex?

After some back-and-forth, I convinced them that this is the version for now, and they agreed. Once my board exams are over, I plan to revisit and improve it.

What I’ve Learned

Looking back, I realize I just needed to rant a bit. But here’s the bigger picture: coding, at its core, is about breaking problems into smaller, manageable steps. With AI and modern tools, that’s more true than ever. If you can take one big problem and split it into ten smaller ones—and truly understand what your code is doing—you can build anything. The tools or tech stack? They’re secondary. What matters is understanding the ecosystem and how it all fits together.

I’ve been crazy busy lately, juggling board exam prep and multiple projects, but I’m enjoying the process. It’s messy, it’s challenging, but it’s also rewarding. I’ll probably write another update after my exams. For now, this is just me reflecting on the journey.

Everyday Reflections: Life's Unexpected Moments

· 3 min read
Abishek Neupane
Self Learner

Hey everyone, I just want to share some random thoughts today. Here in Nepal, people who aren’t interested in politicians, parties, or all that drama seem to suffer the most. I’ve had so many experiences—some made me feel terrible, others were good. Everything that’s happened to me, I always thought it’d leave a positive mark, teaching me lessons along the way.

There’s a lot swirling in my head right now. Honestly, people who went abroad right after +2 wouldn’t get what I’m talking about. I think those who left after their bachelor’s might understand more—they’ve probably seen more than I have.

I see quarrels everywhere here. Me, I’ve never been the type to fight or argue. Even when I’m not wrong, I end up saying sorry just to keep the peace. Sometimes, being too polite or asking simple questions makes you feel bad. Take today, for example—I was in Kathmandu for WordCamp 2025. I stopped at a local cafe and ordered a cup of tea. I didn’t have small change, so I handed the guy a 500-rupee note. He said he didn’t have change, which was fine—I get it, maybe I was the first customer of the day. So I asked if there was a QR code to pay digitally. He said no, and I was shocked. I went to a nearby shop, bought something I didn’t even need just to break the 500, and came back to pay. Then he goes, “Why should I get a QR code for this? If it was something like rice or a big meal, I’d understand.” Bro, who comes to your shop at 6 a.m. to eat a heavy meal? So yeah, that’s just one little thing from today.

I’ve been through a lot and forgotten even more. Public transport here? Don’t get me started. I’ve had so many bad experiences. Whenever I travel to Kathmandu, back to Chitwan, or anywhere else, it’s always a mess. I ask for a specific seat when I buy my ticket, and they tell me it’s full, shoving me to the back. Later, I find out it wasn’t full—they just wanted to pack the back seats first. The fares? I’ve been cheated so many times. They charge whatever they feel like. Once, they owed me 300 rupees in change—it was even written on the ticket—but they argued with me and refused to pay. During festivals, not even on the main day of Dashain, but toward the end, they double the fare. Where do I complain? How do I fix this? I just end up letting it go.

And education? It’s a disaster. I study at Tribhuvan University (TU), Institute of Science and Technology. I’ve switched between two government colleges. I’m not into politics or parties—you probably know how things work there if you’re not involved. The education system is a total waste, at least from my perspective. TU and its government colleges feel like a cheap certificate factory. You’re on your own—your browser tabs are your real teachers. I’m not blaming the teachers; it’s the system. Getting anything done takes forever—long processes for even the smallest service.

I’m exhausted, honestly. Still, I’m hoping for the best.