Friday, July 2, 2010

Value types in Java

Value types are a crucial feature missing from the JVM and the Java language.

With value types, writing high-performance, low-allocation, cache-friendly code is much easier.

For instance: in a physics application you cannot afford to allocate a Vector3d on the heap every time you need one. In Java, to avoid 1,000,000 allocations per second, one has to write convoluted code that reuses the same vectors over and over. With value types this is not necessary: the new keyword allocates value types on the method stack, not on the heap.

And don't believe it when someone says the JIT will optimize this for you; it is just not true. There are several dataflow patterns the JIT cannot follow well enough to eliminate the allocations (e.g. via escape analysis). Immutable classes do not fully solve the problem either (even if you trust the JIT to understand them); they just create new problems for the programmer.
An immutable Matrix4d is more expensive to update than a mutable one.
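The object-reuse workaround mentioned above might look like the following minimal sketch. Vector3d, Particle, and the integration step are illustrative names, not from a real library:

```java
// Minimal sketch of the allocation-avoiding style described above.
// All names here are illustrative, not from an actual physics library.
final class Vector3d {
    double x, y, z;

    // Mutating add: writes the result into this vector, no allocation.
    Vector3d addInPlace(Vector3d other) {
        x += other.x; y += other.y; z += other.z;
        return this;
    }
}

final class Particle {
    final Vector3d position = new Vector3d();
    final Vector3d velocity = new Vector3d();

    // Scratch vector allocated once and reused every step,
    // instead of "new Vector3d()" per call.
    private final Vector3d scratch = new Vector3d();

    void integrate(Vector3d gravity, double dt) {
        scratch.x = gravity.x * dt;
        scratch.y = gravity.y * dt;
        scratch.z = gravity.z * dt;
        velocity.addInPlace(scratch);
        position.addInPlace(velocity); // velocity-dt scaling omitted for brevity
    }
}
```

With value types, Vector3d could simply be stack-allocated inside integrate and the scratch field would be unnecessary.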

I've come across this problem writing game code (physics and rendering) in Java.

Wednesday, June 16, 2010

jyield

Recently I created a project on Google Code: jyield.

jyield aims to provide continuation support for Java, similar to C#'s yield-based coroutines.

Methods annotated with @Continuable become generators, i.e., they can yield values and suspend their execution until they are called again.

The following code shows just that:

import jyield.Continuable;
import jyield.Yield;

public class Sample {

    @Continuable
    public static Iterable<Integer> someNumbers() {
        for (int i = 0; i < 5; i++) {
            System.out.print(" #");
            Yield.ret(i);
        }
        return Yield.done();
    }

    public static void main(String[] args) {
        for (int i : someNumbers()) {
            System.out.print(" " + i);
        }
    }
}
// Output: #0 #1 #2 #3 #4


The foreach in main iterates over the iterable returned by someNumbers. Each time the iterator produces a value, a small slice of someNumbers is actually executed and then suspended.

This all happens on the stack of the calling method (main). It is achieved by rewriting the bytecode after the Java compiler has produced the .class file.

I took care to ensure that try/catch and synchronized blocks also behave well inside continuable methods.
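Conceptually, the rewritten method behaves like a hand-written state machine, similar to what the C# compiler generates for yield. The following is a rough illustration of what someNumbers is turned into; it is not jyield's actual output, just a sketch of the idea:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Rough illustration of the state machine a continuation rewriter
// effectively produces for someNumbers(). Not jyield's real bytecode.
final class SomeNumbersIterator implements Iterator<Integer> {
    private int i = 0; // the loop variable, hoisted into a field

    @Override
    public boolean hasNext() {
        return i < 5; // the loop condition becomes the hasNext test
    }

    @Override
    public Integer next() {
        if (!hasNext()) throw new NoSuchElementException();
        System.out.print(" #"); // code before Yield.ret(i)
        return i++;             // yield the value, then resume at the increment
    }
}
```

Locals become fields so the method's state survives between calls; each call to next runs the body up to the yield point and stops there.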

I intend to use this library in future game projects. Right now I am working on a game in C#, where generators make my life much easier. Too bad Java does not have them... oops, now it does.

Monday, August 31, 2009

Get all MS SQL Server stored procedure dependencies


select distinct
       'proc' = s7.name,
       'name' = (s6.name + '.' + o1.name),
       type   = substring(v2.name, 5, 66)
from sys.objects o1,
     master.dbo.spt_values v2,
     sysdepends d3,
     master.dbo.spt_values u4,
     master.dbo.spt_values w5, --11667
     sys.schemas s6,
     sys.procedures s7
where o1.object_id = d3.depid
  and o1.type = substring(v2.name, 1, 2) collate database_default
  and v2.type = 'O9T'
  and u4.type = 'B' and u4.number = d3.resultobj
  and w5.type = 'B' and w5.number = d3.readobj | d3.selall
  and d3.id = s7.object_id
  and o1.schema_id = s6.schema_id
  and deptype < 2
order by s7.name, s6.name + '.' + o1.name

Thursday, July 16, 2009

Reading

I just finished reading Dresden Files IX, White Night, by Jim Butcher.
Nice book; not my favorite Dresden story, but a good one.
It was the last one I had.

Monday, June 29, 2009

Shuffling a LinkedList (C#)

Since I could not google a sample shuffle function for LinkedList, here is one:

public static void Shuffle<T>(LinkedList<T> list)
{
    Random rand = new Random();

    for (LinkedListNode<T> n = list.First; n != null; n = n.Next)
    {
        T v = n.Value;
        if (rand.Next(0, 2) == 1)
        {
            n.Value = list.Last.Value;
            list.Last.Value = v;
        }
        else
        {
            n.Value = list.First.Value;
            list.First.Value = v;
        }
    }
}

[TestMethod]
public void ShuffleTest()
{
    LinkedList<string> list = new LinkedList<string>();
    list.AddLast("a");
    list.AddLast("b");
    list.AddLast("c");
    list.AddLast("d");
    list.AddLast("e");
    Shuffle(list);
    foreach (string s in list)
    {
        Console.WriteLine(s);
    }
}
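Note that the version above only swaps each node's value with the first or last node, so it does not produce uniformly random permutations. A Fisher–Yates shuffle does; here is a sketch in Java (the Shuffler class name is illustrative), copying to an array-backed list first because linked lists lack O(1) index access:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Uniform Fisher-Yates shuffle: every permutation is equally likely,
// unlike the swap-with-ends approach above. Class name is illustrative.
final class Shuffler {
    static <T> void shuffle(List<T> list, Random rand) {
        // Copy to an ArrayList for O(1) index access.
        List<T> a = new ArrayList<>(list);
        for (int i = a.size() - 1; i > 0; i--) {
            int j = rand.nextInt(i + 1); // pick from 0..i inclusive
            T tmp = a.get(i);
            a.set(i, a.get(j));
            a.set(j, tmp);
        }
        // Write the shuffled order back into the original list.
        list.clear();
        list.addAll(a);
    }
}
```

In practice, Java's Collections.shuffle does exactly this (including the copy-out for non-random-access lists).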

Sunday, February 15, 2009

Scalability podcast

I found this podcast a while ago while researching garbage collection.

It is very interesting. I was listening to it again and realized how much it helps in understanding some of the scalability issues we are dealing with in order to make our game scale.


Extreme Transaction Processing,
Low Latency and Performance

In this podcast, John Davies, who has over 30 years of experience in investment banking and integration technology, covers several case studies of extreme transaction processing, low-latency and high-performance systems, and offers insight into what we might expect to see in the mainstream in the near future.


http://www.theserverside.com/tt/knowledgecenter/knowledgecenter.tss?l=PodcastJohnDavies


Podcast Slides
http://www.incept5.com/library/TSS%20EJS%202008%20-%20Extreme%20Transaction%20Processing%20-%20John%20Davies.pdf


Some links to read while watching the podcast.

Extreme Transaction Processing
Extreme Transaction Processing (XTP) is an exceptionally demanding form of transaction processing. Transactions of 10,000 concurrent accesses (500 transaction per second) or more would require this form of processing.
500 seems a little low, don't you think?

Apache Qpid
"maximum repeatable ingress rate of ~760,000/2 = 380,000 messages
per second (for 256 byte messages). Thus, if doing OPRA (Options Price Reporting Authority)
with a pack factor of 16 in 256 bytes, that would allow for (16 x 380,000 =) 6,080,000 ingress
OPRA messages/ second to 60 consumers on shared queues."

Tangosol (mentioned in the podcast, bought by Oracle)

Oracle Coherence

GigaSpaces (mentioned in the podcast)

eXtreme Scale