Tuesday, February 9, 2021

What makes code maintainable?


A list of attempts to make code better. Not an "end of all knowledge" list, but one that evolves and lives with the times.

I have thought about these for a while, and it is better to make a post of them than to just ponder inside my own head.

Clear (minimal) interfaces

An interface embodies an abstract requirement that an implementation fulfills.
Minimality tries to ensure the coherency of the concepts/interface, as humans have a limited capacity to reason about and understand concepts.
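To make this concrete, here is a minimal sketch in C++ (the names are illustrative, not from any real codebase): the interface states one abstract requirement, and the implementation fulfills it; callers never see more than the contract.

```cpp
#include <cassert>

// The interface embodies one abstract requirement, nothing more.
class Clock
{
public:
    virtual ~Clock() = default;
    virtual long nowMs() const = 0; // the whole contract
};

// An implementation fulfills the requirement; callers only see Clock.
class FixedClock : public Clock
{
public:
    explicit FixedClock(long ms) : m_ms(ms) {}
    long nowMs() const override { return m_ms; }

private:
    long m_ms;
};
```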

Clear concepts.

The definition of a concept is different for each person, depending on experience and culture. I think the only way to make an abstraction/concept concrete and clear is to make it clear to oneself: by implementing, reading, testing, and documenting.

Clear APIs

Boundaries: is this a stand-alone API, or part of some other group? Is it a whole, or a part? Is it independent of any other APIs? Connections, usage, modules.

Self documenting naming

Naming is one of the big three problems of software engineering. For example, naming schemes that sound great but are actually quite horrible:
  • everything is a "Manager"
  • naming variables according to what class they are: "UIButtonTextObject uiButtonTextObject;"

Consistent coding style

If each line of code has a different style, the programmer spends more time decoding code into mental models. A codified coding style helps with read speed and helps mitigate coding errors.


Tests on interfaces, tests of the whole (integration tests), and unit tests.


Comments on weird/complex code. For example, if the code has had to have a construct where indexing starts from 0 but skips 1, with 0, 2, 3, 4, 5, 6 being the valid numbers.
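A hypothetical sketch of such a comment in C++; the skipped index 1 and the reason for it are made up for illustration:

```cpp
#include <cstdint>

// Slot IDs start from 0 but skip 1: valid IDs are 0, 2, 3, 4, 5, 6.
// ID 1 is reserved elsewhere (hypothetical reason), hence the hole
// in the range. Without this comment, the check below looks like a bug.
bool isValidSlot(uint32_t id)
{
    return id == 0 || (id >= 2 && id <= 6);
}
```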


Examples and usage examples.

CI Building

Continuous builds of the product, with internal releases created constantly instead of panic-mode building on release day.

Wednesday, January 27, 2021

Mop; Tools/Tooling

This is a post I will update as I proceed with creating the tools/tooling.


MOP uses flatbuffers for file-format serialization: fbs files define enumerations, structures, flags, almost everything. Usually Python and C++ sources are generated from the fbs files.

draw.io/diagrams.net is used to visualise the connections between the structures and files, to explain how everything forms a coherent mesh/scene/animation/resource presentation.


MOP has python scripts to generate/view mop files.

pyconvert is a collection of scripts to convert gltf files to mop binaries, or list the contents of mop binaries.


  • The flatbuffers git repo master should be used to compile flatc; the binaries provided on the internet are old and incompatible with some fbs definitions.
  • Python has its quirks, but once it starts working, it seems usable.
  • fbs files need to define whether a structure can be a root structure.
  • fbs files can also define file endings, which is a bit of a curious thing.
  • flatbuffers is probably not meant for Python; the usage patterns are really cumbersome.
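As a sketch, a minimal fbs schema showing the last two points; the table and field names here are made up, only the root_type / file_extension / file_identifier declarations are the actual flatbuffers constructs:

```
namespace mop;

table Mesh {
  name: string;
  positions: [float];
}

root_type Mesh;          // declares that Mesh can be a root structure
file_extension "mop";    // the curious file-ending definition
file_identifier "MOP0";  // optional 4-char magic embedded in the binary
```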

Tuesday, January 26, 2021

Mop; Graphics/Game Asset format

Why? Where I come from..

I have been creating game/graphics engines for years now, and while doing that I need to import 2D/3D/mesh/scene/text/sound assets efficiently, without hassle.

With 3D objects/meshes, I first end up just generating simple primitives (balls, cubes, planes) on the fly, and for complicated things I use gltf or assimp.

Assimp is full of features that you do not need and misses features that you do need (while carrying a massive maintenance footprint); frankly, it is ill suited for real-time applications.

GLTF is a modern, evolving and quite universal format. Except that when creating something new, one has to extend the format with "extra" data or create custom extensions, which leads to having to maintain custom exporters. I am also assuming that performance is often bad due to having to support all possible permutations a GLTF file can come in (not all data is required by the gltf specs, whereas my engine can require there to always be 2 sets of texture coordinates, which is an arbitrary but acceptable requirement).

A custom 3D format gives a lot of benefits:
  • Defining structures that the graphics engine needs and nothing else.
  • Most of the error handling can be offloaded onto tooling side, assets are configured on tooling side to match the engine.
  • No dependencies on 3rd party, for example if one day I want to define meshes with bezier curves, I am free to do so. Or if the cubic-spline interpolation is mathematically imperfect, I can change the specification.
  • Optimization potential by the tooling (gltf, obj, etc. do not support texture compression right off the bat, often textures are just uncompressed).


In 2019 I was creating a Vulkan-based graphics engine and started designing a 3D mesh/scene format (kokkeli-mop). At the time I just sketched together a bare-bones presentation of how 3D mesh structures could be represented with flatbuffers; I was trying to learn Vulkan RTX ray tracing, and the 3D file format was just a side-thought of a side-project.

In 2020 I abandoned trying to do a graphics engine from scratch, or rather from the lowest level, and opted to create graphics algorithms and the engine side on top of another engine. During development I considered using GLTF in the engine, but reconsidered and revived my MOP file format for this project. After a couple of months of development, a friend recommended writing a blog about the journey of developing MOP, hopefully documenting the reasoning behind the design choices.

MOP So far..

As this is an ongoing process, everything is in constant change and iteration. Once everything has been proven to work, I can say that MOP has reached status 1.0.

First iteration of drawing the MOP graph. I can say this is purely academic work at this point.

Somewhere along the line, scene graph sketches; still very academic work.

At the beginning I had defined all the attributes and material/descriptor-set bindings as uint32 IDs, but later on I decided that everything should actually be bound by semantic strings: the mesh provides data with semantics that the graphics pipelines might use, in attributes, in uniforms, or somewhere else. I reasoned that between parsing a string with logic that dynamically interprets data versus having hardcoded indexes, the dynamic way wins.
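As an illustration (hypothetical names, not the actual MOP structures), the dynamic lookup could look roughly like this in C++: the mesh publishes its blobs under semantic strings, and the pipeline asks for what it needs instead of using hardcoded indexes.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// One blob of mesh data, published under a semantic name.
struct AttributeBlob
{
    uint32_t components;     // e.g. 3 for a vec3 position
    std::vector<float> data;
};

using SemanticMap = std::unordered_map<std::string, AttributeBlob>;

// The pipeline interprets the data dynamically by semantic string.
const AttributeBlob* findSemantic(const SemanticMap& mesh,
                                  const std::string& semantic)
{
    auto it = mesh.find(semantic);
    return it == mesh.end() ? nullptr : &it->second;
}
```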

The latest version as of Jan 27, 2021. Some improvements to naming, and red arrows determining where blobs of data separate into different files.

At the moment I have created fbs files that define the flatbuffers structures, and Python scripts that convert gltf files into MOP structures. Of the structures, I have tested that Mesh works, and I am currently working on the Material blobs.

The next post is hopefully going to be an explanation of the Mesh structure, and, if I have completed the Material structure, about that as well.


Friday, January 5, 2018

Modern C++

A friend is/was learning C++/opengl stuff and was annoyed by all the pointers in C++. I found this odd.. and well, when I threatened to message her in the evening, drunk, something about Modern C++, she suggested I should write a blog post about it.

So, here I am, drunk, trying to figure out what to write.

I asked for tips from fellow coders at IRCNet #OpenGL (yes, I know C++, but I don't know everything about C++, it's an insane language (and on the other hand, yes, OpenGL is the most awesome graphics API there is)).
All the things here represent my point of view; if someone views the world differently, that's their problem/opportunity to correct my blasphemy.

I think the topics, or points of interests are as follows:
  • Tools
  • Coding conventions
  • collections (stl things, and string)
  • auto
  • enum & using & namespaces
  • nullptr
  • pointers & references, what why when how, and how god awful they really are.
  • lambda
  • initializer lists
  • threads
  • time
  • templates
  • exceptions
  • R-Value / L-Value semantics
  • move semantics
  • Boost libraries
  • (Hell no, I'm staying away from regexp)
  • debugging
  • links


Updates:
  • more about pointers
  • added more topics
  • update to the std::string rant
  • enum & using & namespaces topic and content
  • content for "pointers", "auto", "initializer lists"
  • links added


So let's start with the tooling. Currently it seems to be modern to approach C++ projects with CMake; CMake can be used to generate project solutions for whatever toolchains you are using in the project, including Xcode, Visual Studio, and pretty much any sane IDE out there.

For source version control: git, mercurial.
For graphics debugging: RenderDoc, nSight (nVidia). AMD also has its own tools, GPU PerfStudio and the like.
Python is nice for all sorts of scripting and maintenance, if you are doing a bigger project and need build support scripting.
For an IDE, Visual Studio and Visual Studio Code are excellent. CLion was also suggested as an IDE; I personally have not used it, so I can't really say anything about it.
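As a sketch, a minimal CMakeLists.txt of the kind such a project starts from (the project and source names are placeholders):

```cmake
cmake_minimum_required(VERSION 3.10)
project(myapp CXX)

# one place to pin the language level for every generated toolchain
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

add_executable(myapp src/main.cpp)
```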

Coding conventions

There are several coding conventions; mine is a unique mix of the BSD coding convention and my own view that all code should concentrate on readability and simplicity. If there is a chance that code can cause bugs later on, due to how it was written, then the writing is wrong.
Anyhow, all code should go through a formatter ( http://clang.llvm.org/docs/ClangFormat.html ): choose one way to do everything, and let the machine churn the formatting into "one correct way to rule them all" code form (unfortunately the tool doesn't support BSD coding conventions, so I personally am in the minority). This eases up everything in a team, as all the code in source control is formatted one way.


STL collections (vector, unordered_set, unordered_map, list, string) are awesome, and everyone should learn them. There really is no reason to make C-style allocations anywhere anymore; these collections should be used to manage all sorts of memory allocations.

For example, allocating a buffer for an image could be done like this:

int width = 100;
int height = 100;
int channels = 4; // 4 == RGBA
int bytesPerChannel = 1;

std::vector<uint8_t> buffer;
buffer.resize(width * height * channels * bytesPerChannel);
// and access the buffer with buffer.data();

std::string is also a collection. It is a very special collection; modern applications consider std::string to contain utf8-coded strings.

Update: I had a constructive argument about this on IRCNet. C++ strings are really old, and storing utf8 in std::string, treating it as just a container (std::vector<char> style), works, but doesn't really give you tools for string parsing and such.

Fact is, unicode parsing in C++ is bad. I deal with this by making a rule: "std::string is utf8 encoded", and if there is a case where I need the rune representation/unicode codepoints, I convert the string to full 32-bit unicode codepoints (funny fact about utf32: it's fixed length! Once we get more runes than 32 bits can store, utf32 is screwed).
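A minimal sketch of that conversion, assuming the input is valid utf8 (no error handling, and `toCodepoints` is a made-up name):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Decode a utf8-encoded std::string into 32-bit unicode codepoints.
// Assumes the input is valid utf8; invalid sequences are not detected.
std::vector<uint32_t> toCodepoints(const std::string& utf8)
{
    std::vector<uint32_t> points;
    for (size_t i = 0; i < utf8.size();)
    {
        unsigned char c = utf8[i];
        uint32_t cp;
        size_t len;
        if      (c < 0x80)        { cp = c;        len = 1; } // plain ascii
        else if ((c >> 5) == 0x6) { cp = c & 0x1F; len = 2; } // 110xxxxx
        else if ((c >> 4) == 0xE) { cp = c & 0x0F; len = 3; } // 1110xxxx
        else                      { cp = c & 0x07; len = 4; } // 11110xxx
        for (size_t j = 1; j < len; ++j)
            cp = (cp << 6) | (utf8[i + j] & 0x3F); // fold in continuation bytes
        points.push_back(cp);
        i += len;
    }
    return points;
}
```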

Oh, another thing, not very important, but currently I do all sorts of string building with the fmt library. If I have a string where I want to replace some part with another string, I'll use fmt to generate it, like: auto str = fmt::format("Hey {0} this works", "cutie");

enum & using & namespaces

C++11 brought enum class and the using keyword to help keep namespaces sane in C++. C++ lacks modules or any sane way to manage compilation units, source code etc.; we just have a "raw" include thingy that basically copy-pastes source code from one file into this file. The preprocessor does some smart things with defines and macros (not much really, it can't handle strings or comparisons; the C preprocessor is a poor man's programming language.. that some people take to the border of insanity).
enum class Foo; lets you define strongly typed enumerations. These are almost a godsend, encapsulating the enumeration values inside the entity that defines the enumeration.
The using keyword finally allows us to respect namespaces with aliases; previously all we had were typedefs and #defines, now we have the namespace-respecting { using UTF8 = uint8_t; } kind of construct, which seems readable.
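A small sketch of both (the names are invented for illustration):

```cpp
#include <cstdint>

namespace texture
{
    // strongly typed: Format::RGBA does not implicitly convert to int,
    // and the enumerator names are encapsulated inside Format
    enum class Format : uint8_t { RGB, RGBA, Gray };

    // namespace-respecting alias instead of a typedef or #define
    using Byte = uint8_t;
}
```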


The auto keyword was added in C++11. It allows you to write very maintainable code by omitting the information about what kind of type is being handled. auto lets the coder ignore the type and let the compiler decide what the type is; using auto as much as possible lets the coder change types around the code much more dynamically (less refactoring), and it often makes the code less verbose.
On the flip side, auto can make a codebase daunting to read afterwards: as the type is deduced at compile time, a very simple code snippet can potentially become an endless rabbit hole. (To find what the type really is, in the worst case, compile the code, set a breakpoint there, and let the IDE tell you the type.)
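For example (a minimal sketch):

```cpp
#include <map>
#include <string>

std::map<std::string, int> ages{{"pena", 42}};

// without auto, the full iterator type has to be spelled out:
std::map<std::string, int>::iterator it1 = ages.find("pena");

// with auto, the compiler deduces the same type, less verbosely:
auto it2 = ages.find("pena");
```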


nullptr is a C++11 addition; it allows the world to get rid of the nonstandard NULL define. Often, if a codebase is riddled with NULL (or functions with void as an empty argument list, as in [int foo(void)]), it is a sign that a) the codebase is old, or b) whoever wrote it hasn't stayed up to date with the times.


Okay, C++11 brought lots of things for pointers: smart pointers and the nullptr type.

shared_ptr is a reference-counting pointer type; these pointers share a counter and a resource (the counter is thread safe, the resource is not), and when the counter hits 0, the resource is deleted. weak_ptr is a partner for shared_ptr; it is used to describe a non-ownership relation to a resource, and to use it, you have to convert it back to a shared_ptr.

unique_ptr owns the resource; once the unique_ptr gets killed, it deletes the resource.
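A minimal sketch of the three types in action (C++11):

```cpp
#include <cassert>
#include <memory>

void smartPointerDemo()
{
    auto shared = std::make_shared<int>(42); // count == 1
    std::weak_ptr<int> weak = shared;        // observes, does not own
    if (auto locked = weak.lock())           // back to shared_ptr to use it
        *locked += 1;
    assert(*shared == 43);

    shared.reset();                          // count hits 0, the int is deleted
    assert(weak.expired());                  // the weak_ptr sees the death

    std::unique_ptr<int> unique(new int(7)); // sole owner; deletes on scope exit
    assert(*unique == 7);
}
```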

More often than pointers, references are used, everywhere possible. With containers, these things are immensely powerful.

// C style: manual allocation that is easy to leak
void consume(char *data, size_t len) { ... }
char *content = (char *)malloc(1024);
readFile(myFile, content);

// rather than doing mallocs and frees or news and delete[]s, use RAII
// and let vector do all that stuff:
void consume(const std::vector<char>& data) { ... } // we get the length from the vector

std::vector<char> content(1024);  // size it first; data() then points at real storage
readFile(myFile, content.data()); // (&content[0] is equivalent)

With references, you can make the assumption that a reference cannot be nullptr (a sane assumption; I have not seen a codebase where this assumption has not been made).
References also sort of tell other programmers which parameters are "in" and which are "out".. for example:

bool toInt(const std::string& str, int& value);

In that example, the coder can see that the immutable string is the "input" parameter to the function, and "value" is the out parameter. The example is a bit artificial in the sense that it returns a bool as an indication of whether the transformation succeeded (it is possible to code in this style); many coders would prefer that the function return an int and take a bool& success in. But this is entirely about what flavor you yourself prefer. The example could also throw, if the codebase allows exceptions (historically, exceptions have been frowned upon, and if a library uses exceptions, it cannot be used in projects that do not support RTTI or exceptions).
References are easy to think of as "just pointers", but unfortunately they have some additional magic placed into them by C++, something that turns non-value objects into real value objects, and some kind of magic called "&&" ( like auto&& foo; ).

Codebases, for example a UI library or something, are totally possible to architect without pointers anywhere, using references and the standard template library: std::map, std::list and std::vector (though you have to know those containers, and their allocation behavior, to a degree).


Lambda functions are the greatest thing since the invention of bread and butter. They are a bit complicated, with all the rules that come with them in C++; it's not like C# or Java, where you can throw things around and just YOLO your way through. In C++ you can assign lambda functions to C function pointers (following certain rules, not always), and to the std::function class (following certain rules). But once you understand the rules, lambdas let you create all sorts of callback-style programming.
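The two conversion rules mentioned above, as a minimal sketch:

```cpp
#include <functional>

// a captureless lambda converts to a plain C function pointer
int (*twice)(int) = [](int x) { return x * 2; };

// a capturing lambda does not fit a function pointer;
// std::function can hold it instead
std::function<int(int)> makeAdder(int base)
{
    return [base](int x) { return base + x; };
}
```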

initializer lists

I'm not sure what I should say about these.. In C++ you can initialize members in constructors, using initializer lists:

class Foo
{
    int faa;
    Foo() {}
};

/// The options to initialize faa are:
// a) in place (in the header file; much preferred, as with multiple ctors
//    the default value is applied automatically)
int faa = 0;
// b) initializer list in the Foo ctor
Foo() : faa(0) {}
// c) default value as a ctor parameter
Foo(int faa = 0) : faa(faa) {}

/// One other option is to use the new '{}' initializers:
int faa{0};
Foo() : faa{0} {}
Foo(int faa = 0) : faa{faa} {}

At the moment I am trying to embrace the {} bracket initializers everywhere, to distinguish initialization from function calls.


C++11 brought threads, yay, finally. TODO
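Until the TODO gets filled, a minimal sketch of what C++11 threads look like:

```cpp
#include <atomic>
#include <thread>

// spawn workers, hand each a lambda, and join to wait for them;
// std::atomic keeps the shared counter race-free
std::atomic<int> counter{0};

void runWorkers()
{
    std::thread a([]{ for (int i = 0; i < 1000; ++i) ++counter; });
    std::thread b([]{ for (int i = 0; i < 1000; ++i) ++counter; });
    a.join();
    b.join();
}
```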







R-Value / L-Value semantics


move semantics
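A placeholder sketch of the gist: std::move turns a copy into a steal of the underlying buffer.

```cpp
#include <string>
#include <utility>
#include <vector>

std::vector<std::string> collectLines()
{
    std::vector<std::string> lines;
    std::string line = "a long line that would be expensive to copy";
    lines.push_back(std::move(line)); // steal the buffer instead of copying
    // 'line' is now in a valid but unspecified (usually empty) state
    return lines; // returning by value moves too (or elides the copy)
}
```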


Boost libraries

Boost libraries seemed like a good idea at one time, when C++ did not see a release of the spec every few years. Back then the problems usually were boost library incompatibility between versions, and different versions used by 3rd party libraries. I doubt this problem has gone away. I've lived my life as far from boost as possible; the code in boost libraries is usually deep template magic, with interconnections between different parts of the library. If you take boost in, your codebase will be married to it for life. Boost contains a solution for everything: polygons and maths? yes. build system? yes. reflection? probably. extended filesystem? yes. networking? yes.
I would avoid it at all costs.
Too many dependencies, with possibly deep dependencies on other parts of the framework. The alternative is to use many small libraries, and just get things done.

(Hell no, I'm staying away from regexp)

In the life of a developer, regexp comes before you as a "good idea" every 1-2 years.. usually people deal with it by relearning it each time. This is also the strategy I am using.
Regexp was added to standard C++ in C++11. I've used it once, a month ago, in a library that was tossed out of the product; I am not surprised.




  • cppreference.com pretty much up-to-date documentation on standard C/C++ (do not trust microsoft or any other specific vendor, they will screw you over).
  • github.com/fffaraz/awesome-cpp a list of C++ libraries (not all of them, but a good list of tools)
  • code.visualstudio.com "ide" for free from microsoft, requires a lot of tinkering to get it to compile whatever you want; cross platform!
  • visualstudio.com ide for "free" from microsoft, requires a little getting used to. windows only (don't be fooled by VS for mac, it is not Visual Studio).
  • cmake.org cmake tools for project generation (should be used to handle all the library linking and stuff)

Monday, November 25, 2013


I'm alive.. I've been learning javascript (+golang) for a few months. Maybe I'll blog about them soon.. but I just wanted to write something about requestAnimationFrame.
I've been using the timeout function in a consistent manner in a few projects ( http://icegem.net/flip/ , http://icegem.net/the-zombipeli-by-team-omnom/ , + some others ). At the moment I am learning & implementing some particle systems with WebGL ( http://icegem.net/webgl/ (maybe WebCL too in the future)). When I got stuff "done" I proudly went and pasted the url around the nets: "Look, another rotating triangle!".. at this point, at #opengl on IRCNet, it was pointed out that there was stuttering and the animations lagged.
To this I just thought "are they messing with me, this is just a simple test that everything works".. wrong.. different suggestions came up for why there was stuttering.. and after investigating, it seems that requestAnimationFrame was the culprit. It usually called the callback function at 16ms, but it sometimes calls it after 32ms, in a patterned fashion, so animating things looked ugly.. On Chrome Canary this was not observed.

( the testing tool: http://icegem.net/webgl/req/ ) the canvas draws a black dot at the height of "how many ms passed".

On Chrome (Version 31.0.1650.57 m) I get the following (see those little dots at the red X); those are calls that are late:

On Chrome Canary everything seems fine:

and the webgl animation:

Chrome (those little gaps in the otherwise smooth grey gradient):

Canary (no gaps (well, those little gaps are from the 16ms animation update, but they are consistent)):

Friday, August 9, 2013

Audio with OpenAL

I took part in a game jam ( Assembly2013 GameJam, Finnish only, sorry ). In preparation I started improving my current game engine/framework (called "craft"). While inspecting things, I realized I lacked audio playback capabilities (also text rendering support.. maybe I'll write a post about a signed distance field text rendering engine later..). I tried to get these done before Assembly2013, but failed, due to OpenAL not being as trivial as I thought and all the OpenAL tutorials being too trivial.

OpenAL has 5 abstract things (in my opinion): Device, Context, Source, Buffer and Listener. The device is where it all begins; a device can create a context, and a context has one listener and may have sources and buffers.

How to initialize OpenAL (sorry for using pictures, but blogger can't handle pure code):

At line 153 we open the preferred device; the specification also gives us a way to query for devices, if we want:

Now that we have the device, we can create a context (at line 168); the parameters for the function are the device and a NULL-terminated attribute list. With the list you can specify how many mono or stereo sources you want (ALC_MONO_SOURCES, ALC_STEREO_SOURCES). With ALC_SYNC set to AL_TRUE you can specify whether you want your own mixer thread or not (but then you have to call alcProcessContext(context) yourself). ALC_FREQUENCY apparently means the frequency of the context, to which everything will be sampled (apparently on windows it is hardcoded to 44,100). ALC_REFRESH is the al refresh rate.

Once we have the context, we make it current; I imagine that all calls to OpenAL need a context that is current/bound. At line 188 I've opted to use a "generateSources" method; the Context class has a pool of pregenerated source IDs. The reason for having the sources managed by the Context comes from the fact that devices have a limited number of channels, and if our design assigned a source ID per player abstraction (like my original code did), we would run out of source IDs after 32 players (or whatever the maximum on the device is).

It is a pretty simple function: whenever called, it tries to add more source IDs to the pool.

And retainSource and releaseSource are how the players then get their source IDs; the idea is that they only retain a source ID when they play something back, and once the playback is finished, they should release it.
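Since the code itself is in pictures, here is a hedged sketch of the idea in plain C++. The class and method names mirror the post; real code would fill the pool from alGenSources, whereas here the IDs are faked so the logic stands alone:

```cpp
#include <cstdint>
#include <vector>

// Context-owned pool: players retain a source ID only for the duration
// of a playback and release it afterwards, so a limited number of
// device channels can serve any number of player abstractions.
class SourcePool
{
public:
    explicit SourcePool(uint32_t count)
    {
        for (uint32_t id = 1; id <= count; ++id) // stand-in for alGenSources
            m_free.push_back(id);
    }

    // returns 0 when the device's channels are exhausted
    uint32_t retainSource()
    {
        if (m_free.empty())
            return 0;
        uint32_t id = m_free.back();
        m_free.pop_back();
        return id;
    }

    void releaseSource(uint32_t id) { m_free.push_back(id); }

private:
    std::vector<uint32_t> m_free;
};
```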

So now we have some code that handles context initialization and reuse of sources. Let's move on to playback a bit..

OpenAL seems unable to play back stereo sounds as "3D" sources, so all 3D sounds need to be in a mono format (MONO8 or MONO16). Sources also seem to get set to one type of audio once used; for this reason I am using only alSourceQueueBuffers to queue buffers to a source. If I used setSource with a buffer ID, I would not be able to use that source in the queue-buffer style (and vice versa). At the time of writing, I am not sure how well sources respond to changing between input buffer formats (once playing, it should not be possible to mix MONO8 and STEREO8 buffers.. but if I stop the source in between.. maybe).


My first implementation of decoding was "hardcode ogg decoding everywhere".. that wasn't very smart. I came up with a Decoder interface class that provides all the needed information about the file and, when requested, decodes the file either fully or in chunks into a provided ByteArray.
After the Decoder interface was ready, I implemented an OggDecoder that supports both ways of decoding a packed ByteArray.


Playing sounds with OpenAL is pretty simple: with audio that fits fully into a buffer, just queue the buffer into the source and call the play function. For streaming, it gets a tad more complicated: you have to have n buffers that you juggle in the stream (I prefer 3 buffers); when a buffer gets empty, unqueue it from the source, fill it up, and queue it again.
The source has a couple of interesting settings: AL_GAIN, which in my books equals the volume, and AL_PITCH, which I would rather translate as speed.

3D positional audio

Positional audio seems to work only with mono audio; otherwise it works pretty much the same as other audio sources, though it has some extra settings that give it the ability to fade when the source gets farther away from the listener.


These settings govern what happens to the source audio depending on the selected distance model ( alDistanceModel( alEnum ) ). I am not going to say anything more about it for now; there are nice graphs about it in the OpenAL Programmer's Guide pdf, page 87 and onward. I think the distance model is context specific.


Well, having "wav" and "flac" support behind the Decoder interface would be nice. I've also been reading Game Programming Gems 2 about audio design patterns; I think my current approach is a bit too low level, and it would benefit from having the music/stereo system separated from 3D positional audio.


Saturday, July 21, 2012

Map generation..

I've been pondering automatic map generation lately.. I started to approach map generation a few months ago, at first by creating a Qt-based tool that would run lua scripts and generate a world. The approach was to create a 'workspace' xml file that contains all the world's setup properties (width, depth, height, water height...), a list of available lua scripts, and the run configuration of the scripts, as well as how many iterations the script pipeline should run. In theory this approach is sound and all.. but the hassle with Qt xml parsing made me lose my will to live/code/develop/'remain inspired', and the project became doomed because of it.

After that I've been developing my game engine framework, and during this the map generation thing has been itching at me.. Yesterday I started sketching out generating the maps with a set of tiny command line executables. Now, where to start? Creation of the 'grid', of course.. err.. wait.. grids have this bad property of non-uniform distances between points: the hypotenuse in the middle is not the same distance away from each neighbour vertex in a primitive shape (thinking of this in a 2D flat plane).. So let's try triangles; they seem nice, they seem easy to use too, and they should LOD into groups as nicely as quads (so if I zoom away from the triangle 'grid', I can take 4 triangles to make 1 bigger triangle).
Triangle points are a uniform distance away from each other.
Why is this important? Well, as the points/vertexes are the data points in the map, I think of them as 'sensor' spots, and to measure area properties in each spot fairly, those measurement spots should always be an equal distance away from each other. With quads, that is impossible. Also, I feel that this should reduce distortions in the map in the end, as the only source of distortion at that point will come from adding the height dimension to the grid.

So, a triangle grid. Creating it requires the length of the side of the big triangle, as well as the length of the small triangle sides.. the command line executable should then be:
  • grid -create -bigsize 1000 ...
Oh wait, what is 1000? Meters, centimeters? Millimeters? Feet?
Let's define it to be meters!
  • grid -create -bigsize 100000 -size 0.5
Still missing something.. the selected algorithm? And parameters for that? min, max?
  • grid -create perlin -bigsize 100000 -size 0.5 -dimension 256 -min 0.0 -max 10.0
Ok, so now we have specified that we want to use a perlin noise function to create the terrain, and for the seed we use a 256-sized random texture (256x256).
Initially I tried to create a triangular texture algorithm for this, but that was just too much work and inventing a new wheel where a regular rectangular texture would suffice. I decided that it is too complicated and does not really contribute to the project to invent triangle textures.
Also, about the data organization in the file: I've decided that it will be a 'sharp end upwards' approach, so the highest point, where the width of the structure is 1, comes first. The datatype is 32-bit floats, using the computer's architecture (I am not going to bother with endianness etc. issues), as the theory goes that this is just seed data; the final map will use this as a resource on this machine, and the output it produces is something different that can take all the funny technical fubar things into account (json? xml?).
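A small sketch of the indexing this layout implies (assuming row r, counting from the tip, holds r + 1 values; the function names are invented):

```cpp
#include <cstddef>

// 'sharp end upwards': the tip row holds 1 value, each following row
// holds one more, all stored in a single flat array of 32-bit floats.

// flat index of (row, col), with col < row + 1
size_t triangleIndex(size_t row, size_t col)
{
    return row * (row + 1) / 2 + col; // values in the rows above, plus offset
}

// total number of values in a triangle of 'rows' rows
size_t triangleValueCount(size_t rows)
{
    return rows * (rows + 1) / 2;
}
```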
First results with parameters:
  • grid -create perlin -bigsize 10 -size 1.0 -dimension 2 -min 0.0 -max 10
 Running create.
Generating perlin BigSize: 10 Size: 1 Min: 0 Max: 10 Dimension: 2
3.21913 6.07814
-0.416768 3.48882 2.15854
6.07814 3.21913 6.07814 6.53172
3.48882 -0.416768 3.48882 2.15854 2.15854
3.21913 6.07814 3.21913 6.07814 6.53172 6.07814
-0.416768 3.48882 -0.416768 3.48882 2.15854 2.15854 2.15854
6.07814 3.21913 6.07814 3.21913 6.07814 6.53172 6.07814 6.53172
3.48882 -0.416768 3.48882 -0.416768 3.48882 2.15854 2.15854 2.15854 2.15854
3.21913 6.07814 3.21913 6.07814 3.21913 6.07814 6.53172 6.07814 6.53172 6.07814

Not really what I expected; the negative values shouldn't be there.. a few fixes are maybe needed.

I had a few bugs in the perlin functions: it accumulated too many times, scaled wrongly and all.. after fixing that, the generation looks healthier:
Running create.
Generating perlin BigSize: 10 Size: 1 Min: 0 Max: 10 Dimension: 2
created random seeds: 5.44206 4.15723 9.79308 0.682089
5.42672 4.76487
4.7618 5.10193 5.02163
4.20711 5.42672 4.76487 4.76487
5.10193 4.7618 5.10193 5.02163 5.10193
5.42672 4.20711 5.42672 4.76487 4.76487 4.76487
4.7618 5.10193 4.7618 5.10193 5.02163 5.10193 5.02163
4.20711 5.42672 4.20711 5.42672 4.76487 4.76487 4.76487 4.76487
5.10193 4.7618 5.10193 4.7618 5.10193 5.02163 5.10193 5.02163 5.10193
5.42672 4.20711 5.42672 4.20711 5.42672 4.76487 4.76487 4.76487 4.76487 4.76487
Except, for the random seeds being 9 and 0.6, I don't see much of those..

Continued the editing, and the perlin noise map looks a bit odd xD

A bit buggy generation..
Bugs fixed; still a couple of oddities.